Trifecta

Trifecta is a web-based and Command Line Interface (CLI) tool that enables users to quickly and easily inspect, verify and even query Kafka messages. In addition, Trifecta offers data import/export functions for transferring data between Kafka topics and many other Big Data Systems (including Cassandra, ElasticSearch, MongoDB and others).


Motivations

The motivations behind creating Trifecta are simple: testing, verifying and managing Kafka topics and Zookeeper key-value pairs is an arduous task. The goal of this project is to ease the pain of developing applications that use Kafka and ZooKeeper by providing a console-based tool with simple Unix-like (or SQL-like) commands.

Features

Development

Build Requirements

External Dependencies

In order to build from source, you'll need to download the dependencies listed above and issue the following command for each of them:

$ sbt publish-local

Building the applications

Trifecta's build process produces two distinct applications: the command-line interface (trifecta_cli) and the web-based user interface (trifecta_ui).

Building Trifecta CLI (Command-line interface)

$ sbt "project trifecta_cli" assembly

Building Trifecta UI (Typesafe Play application)

$ sbt "project trifecta_ui" dist

Running the tests

$ sbt clean test    

Configuring the application

On startup, Trifecta reads $HOME/.trifecta/config.properties (or creates the file if it doesn't exist). This file contains the configuration properties and connection strings for all supported systems.

# common properties
trifecta.common.autoSwitching = true
trifecta.common.columns = 25
trifecta.common.debugOn = false
trifecta.common.encoding = UTF-8

# Kafka/Zookeeper properties
trifecta.zookeeper.host = localhost:2181

# supports the setting of a path prefix for multi-tenant Zookeeper setups
#trifecta.zookeeper.kafka.root.path = /kafka

# indicates whether Storm Partition Manager-style consumers should be read from Zookeeper
trifecta.kafka.consumers.storm = false

# Cassandra properties
trifecta.cassandra.hosts = localhost

# ElasticSearch properties
trifecta.elasticsearch.hosts = localhost:9200

# MongoDB properties
trifecta.mongodb.hosts = localhost
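For example, to point Trifecta at a remote Zookeeper ensemble whose Kafka data lives under a chrooted path (as in the commented trifecta.zookeeper.kafka.root.path line above), the relevant properties might look like this (the host name is illustrative):

trifecta.zookeeper.host = zk1.example.com:2181
trifecta.zookeeper.kafka.root.path = /kafka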

Configuring Kafka Consumers

Trifecta currently supports 3 types of consumers:

  • Zookeeper Consumer Groups (Kafka 0.8.x)
  • Kafka-native Consumer Groups (Kafka 0.9.x)
  • Storm Partition Manager Consumers (Apache Storm-specific)

The most common type in use today is the Kafka-native consumer group.

Kafka-native Consumer Groups

Kafka-native consumers require the consumer IDs that you want to monitor to be registered via the trifecta.kafka.consumers.native property. Only registered consumer IDs (and their respective offsets) will be visible.

    trifecta.kafka.consumers.native = dev,test,qa

Zookeeper Consumer Groups

Zookeeper-based consumers are enabled by default; however, they can be disabled (which will improve performance) by setting the trifecta.kafka.consumers.zookeeper property to false.

    trifecta.kafka.consumers.zookeeper = false

Apache Storm Partition Manager Consumer Groups

Storm Partition Manager consumers are disabled by default; however, they can be enabled (which will impact performance) by setting the trifecta.kafka.consumers.storm property to true.

    trifecta.kafka.consumers.storm = true

Run the application

To start the Trifecta REPL:

$ java -jar trifecta_cli_0.20.0.bin.jar

Optionally, you can execute Trifecta instructions (commands) right from the command line:

$ java -jar trifecta_cli_0.20.0.bin.jar kls -l
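Any of the CLI commands documented below can be passed in the same way; for instance, to display partition statistics for a topic (the topic name here is illustrative):

$ java -jar trifecta_cli_0.20.0.bin.jar kstats shocktrade.quotes.avro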

Downloads

Trifecta binaries are available for immediate download in the "releases" section.

What's New

v0.22.0

  • Reimplemented Publish and Query views (ported from v0.20.0)
  • Now offering simultaneous support for Kafka 0.8.x, 0.9.x and 0.10.x

v0.21.2

  • Added support for Kafka 0.10.0.0
  • Added support for AVDL
  • Updated to use MEANS.js 0.2.3.0 (Scalajs-Nodejs)

v0.20.0

  • Trifecta UI (CLI version)
    • Miscellaneous bug fixes
  • Trifecta UI (TypeSafe Play version)
    • Miscellaneous bug fixes

v0.19.2

  • Trifecta UI (TypeSafe version)
    • Fixed issue with out of memory errors while streaming messages

v0.19.1

  • Trifecta UI (CLI version)
    • Fixed issue with missing web resources

v0.19.0

  • Trifecta UI
    • Now a TypeSafe Play Application
    • Updated the user interface
    • Bug fixes

v0.18.1 to v0.18.20

  • Trifecta Core

    • Fixed issue with the application failing if the configuration file is not found
    • Upgraded to Kafka 0.8.2-beta
    • Kafka Query language (KQL) (formerly Big-Data Query Language/BDQL) has grammar simplification
    • Upgraded to Kafka 0.8.2.0
    • Added configuration key to support multi-tenant Zookeeper setups
    • Added support for Kafka v0.8.2.0/v0.9.0.0 consumers
  • Trifecta UI

    • Added capability to navigate directly from a message (in the Inspect tab) to its decoder (in the Decoders tab)
    • Decoder tab user interface improvements
    • Observe tab user interface improvements
      • The Consumers section has been enhanced to display topic and consumer offset deltas
      • Redesigned the Replicas view to report under-replicated partitions.
      • The Topics section has been enhanced to display topic offset deltas
    • Query tab user interface improvements
      • Multiple queries can be executed concurrently
    • The embedded web server now supports asynchronous request/response flows
    • Added real-time message streaming capability to the Inspect tab
    • Swapped the Inspect and Observe modules
    • Added a new Brokers view to the Observe module
    • Reworked the Brokers view (Inspect module)
    • Fixed sort ordering of partitions in the Replicas view (Inspect module)
    • Fixed potential bug related to retrieving the list of available brokers
    • Now a TypeSafe Play Application with an updated user interface

Trifecta UI

Trifecta offers a single-page web application (built with Angular.js) backed by a REST service layer with web-socket support, providing a comprehensive and powerful set of features for inspecting Kafka topic partitions and messages.

Starting Trifecta UI (Play web application)

To start the Play web application, issue the following from the command line:

    $ unzip trifecta_ui-0.20.0.zip
    $ cd trifecta_ui-0.20.0/bin
    $ ./trifecta_ui &

You'll see a few seconds of log messages, then a prompt indicating the web interface is ready for use.

    [warn] application - Logger configuration in conf files is deprecated and has no effect. Use a logback configuration file instead.
    [info] p.a.l.c.ActorSystemProvider - Starting application default Akka system: application
    [info] application - Application has started
    [info] play.api.Play$ - Application started (Prod)
    [info] p.c.s.NettyServer$ - Listening for HTTP on /0:0:0:0:0:0:0:0:9000

Next, open your browser and navigate to http://localhost:9000 (if you're running locally) or to the hostname/IP address of the remote host where the web application is running.
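Since Trifecta UI is packaged as a standard Play application, you can usually override the HTTP port through Play's http.port system property if 9000 is already taken (the port value below is just an example):

    $ ./trifecta_ui -Dhttp.port=9001 &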

Configuring Trifecta UI

Additionally, Trifecta UI introduces a few new properties to the application configuration file (located in $HOME/.trifecta/config.properties). NOTE: The property values shown below are the default values.

# the embedded web server host/IP and port for client connections
trifecta.web.host = localhost
trifecta.web.port = 8888

# the interval (in seconds) that changes to consumer offsets will be pushed to web-socket clients
trifecta.web.push.interval.consumer = 15

# the interval (in seconds) that sampling messages will be pushed to web-socket clients
trifecta.web.push.interval.sampling = 2

# the interval (in seconds) that changes to topics (new messages) will be pushed to web-socket clients
trifecta.web.push.interval.topic = 15

# the number of actors to create for servicing requests
trifecta.web.actor.concurrency = 10

Configuring default Avro Decoders

Trifecta UI supports decoding Avro-encoded messages and displaying them in JSON format. To associate an Avro schema with a Kafka topic, place the schema file in a subdirectory with the same name as the topic. For example, if I wanted to associate an Avro file named quotes.avsc with the Kafka topic shocktrade.quotes.avro, I'd set up the following file structure:

$HOME/.trifecta/decoders/shocktrade.quotes.avro/quotes.avsc

The name of the actual schema file can be anything you'd like. Once the file has been placed in the appropriate location, restart Trifecta UI, and your messages will be displayed in JSON format.
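For example, assuming quotes.avsc is sitting in your current directory, the layout above could be created like this:

$ mkdir -p $HOME/.trifecta/decoders/shocktrade.quotes.avro
$ cp quotes.avsc $HOME/.trifecta/decoders/shocktrade.quotes.avro/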

Additionally, once a "default" decoder is configured for a Kafka topic, the CLI application can use it as well. For more details about using default decoders with the CLI application, see the Default Avro Decoders section below.

Inspecting Kafka Messages

Trifecta UI has powerful support for viewing Kafka messages, and when the messages are in either JSON or Avro format Trifecta displays them as human readable (read: pretty) JSON documents.

Replicas

Trifecta UI provides a comprehensive view of the current state of replication for each topic partition.

Queries

Trifecta UI also provides a way to execute queries against Avro-encoded topics using the Kafka Query Language (KQL). KQL is a SQL-like language with syntax as follows:

select <fields> from <topic> [with <decoder>]
[where <condition>]
[limit <count>]

Consider the following example:

select symbol, exchange, lastTrade, open, close, high, low
from "shocktrade.quotes.avro"
where lastTrade <= 1 and volume >= 1,000,000
limit 25

The above query retrieves the symbol, exchange, lastTrade, open, close, high and low fields from messages within the Kafka topic shocktrade.quotes.avro (using the default decoder) filtering for only messages where the lastTrade is less than or equal to 1, the volume is greater than or equal to 1,000,000, and limiting the number of results to 25.

Now consider a similar example, except here we'll specify a custom decoder file (avro/quotes.avsc):

select symbol, exchange, lastTrade, open, close, high, low
from "shocktrade.quotes.avro" with "avro:file:avro/quotes.avsc"
where lastTrade <= 1 and volume >= 1,000,000
limit 25

For more detailed information about KQL queries, see the Searching By Query section below.

Trifecta CLI

Trifecta CLI (Command Line Interface) is a REPL tool that simplifies inspecting Kafka messages, Zookeeper data, and optionally Elastic Search documents and MongoDB documents via simple UNIX-like commands (or SQL-like queries).

Core Module

Trifecta exposes its commands through modules. To see which modules are available, issue the modules command at any time.

core:/home/ldaniels> modules
+ ---------------------------------------------------------------------------------------------------------- +
| name           className                                                                  loaded  active   |
+ ---------------------------------------------------------------------------------------------------------- +
| cassandra      com.github.ldaniels528.trifecta.modules.cassandra.CassandraModule          true    false    |
| core           com.github.ldaniels528.trifecta.modules.core.CoreModule                    true    true     |
| elasticSearch  com.github.ldaniels528.trifecta.modules.elasticsearch.ElasticSearchModule  true    false    |
| etl            com.github.ldaniels528.trifecta.modules.etl.ETLModule                      true    false    |
| kafka          com.github.ldaniels528.trifecta.modules.kafka.KafkaModule                  true    false    |
| mongodb        com.github.ldaniels528.trifecta.modules.mongodb.MongoModule                true    false    |
| zookeeper      com.github.ldaniels528.trifecta.modules.zookeeper.ZookeeperModule          true    false    |
+ ---------------------------------------------------------------------------------------------------------- +

To execute local system commands, enclose the command you'd like to execute in back-ticks (`):

core:/home/ldaniels> `netstat -ptln`

To see all available commands, use the help command (? is a shortcut):

core:/home/ldaniels> ?
+ ---------------------------------------------------------------------------------------------------------------------- +
| command     module     description                                                                                     |
+ ---------------------------------------------------------------------------------------------------------------------- +
| !           core       Executes a previously issued command                                                            |
| $           core       Executes a local system command                                                                 |
| ?           core       Provides the list of available commands                                                         |
| autoswitch  core       Automatically switches to the module of the most recently executed command                      |
.                                                                                                                        .
.                                                                                                                        .
.                                                                                                                        .                                          
| ztree       zookeeper  Retrieves Zookeeper directory structure                                                         |
+ ---------------------------------------------------------------------------------------------------------------------- +

To see the syntax/usage of a command, use the syntax command:

core:/home/ldaniels> syntax kget
Description: Retrieves the message at the specified offset for a given topic partition
Usage: kget [-o outputSource] [-d YYYY-MM-DDTHH:MM:SS] [-a avroSchema] [topic] [partition] [offset]
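For example, following the usage above, a single message can be fetched directly (the topic, partition and offset here are purely illustrative):

core:/home/ldaniels> kget shocktrade.quotes.avro 0 5945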

Kafka Module

To view all of the Kafka commands, use the -m switch and the module name (kafka in this case):

kafka:/> ? -m kafka
+ ------------------------------------------------------------------------------------------------------------------- +
| command     module  description                                                                                     |
+ ------------------------------------------------------------------------------------------------------------------- +
| kbrokers    kafka   Returns a list of the brokers from ZooKeeper                                                    |
| kcommit     kafka   Commits the offset for a given topic and group                                                  |
| kconsumers  kafka   Returns a list of the consumers from ZooKeeper                                                  |
| kcount      kafka   Counts the messages matching a given condition                                                  |
| kcursor     kafka   Displays the message cursor(s)                                                                  |
| kfetch      kafka   Retrieves the offset for a given topic and group                                                |
| kfetchsize  kafka   Retrieves or sets the default fetch size for all Kafka queries                                  |
| kfind       kafka   Finds messages matching a given condition and exports them to a topic                           |
| kfindone    kafka   Returns the first occurrence of a message matching a given condition                            |
| kfirst      kafka   Returns the first message for a given topic                                                     |
| kget        kafka   Retrieves the message at the specified offset for a given topic partition                       |
| kgetkey     kafka   Retrieves the key of the message at the specified offset for a given topic partition            |
| kgetminmax  kafka   Retrieves the smallest and largest message sizes for a range of offsets for a given partition   |
| kgetsize    kafka   Retrieves the size of the message at the specified offset for a given topic partition           |
| kinbound    kafka   Retrieves a list of topics with new messages (since last query)                                 |
| klast       kafka   Returns the last message for a given topic                                                      |
| kls         kafka   Lists all existing topics                                                                       |
| knext       kafka   Attempts to retrieve the next message                                                           |
| kprev       kafka   Attempts to retrieve the message at the previous offset                                         |
| kreset      kafka   Sets a consumer group ID to zero for all partitions                                             |
| kstats      kafka   Returns the partition details for a given topic                                                 |
| kswitch     kafka   Switches the currently active topic cursor                                                      |
+ ------------------------------------------------------------------------------------------------------------------- +

Kafka Brokers

To list the replica brokers that Zookeeper is aware of:

kafka:/> kbrokers
+ ---------------------------------------------------------- +
| jmx_port  timestamp                host    version  port   |
+ ---------------------------------------------------------- +
| 9999      2014-08-23 19:33:01 PDT  dev501  1        9093   |
| 9999      2014-08-23 18:41:07 PDT  dev501  1        9092   |
| 9999      2014-08-23 18:41:07 PDT  dev501  1        9091   |
| 9999      2014-08-23 20:05:17 PDT  dev502  1        9093   |
| 9999      2014-08-23 20:05:17 PDT  dev502  1        9092   |
| 9999      2014-08-23 20:05:17 PDT  dev502  1        9091   |
+ ---------------------------------------------------------- +

Kafka Topics

To list all of the Kafka topics that Zookeeper is aware of:

kafka:/> kls
+ ------------------------------------------------------ +
| topic                         partitions  replicated   |
+ ------------------------------------------------------ +
| Shocktrade.quotes.csv         5           100%         |
| shocktrade.quotes.avro        5           100%         |
| test.Shocktrade.quotes.avro   5           100%         |
| hft.Shocktrade.quotes.avro    5           100%         |
| test2.Shocktrade.quotes.avro  5           100%         |
| test1.Shocktrade.quotes.avro  5           100%         |
| test3.Shocktrade.quotes.avro  5           100%         |
+ ------------------------------------------------------ +

To see a subset of the topics (matches any topic that starts with the given search term):

kafka:/> kls shocktrade.quotes.avro
+ ------------------------------------------------ +
| topic                   partitions  replicated   |
+ ------------------------------------------------ +
| shocktrade.quotes.avro  5           100%         |
+ ------------------------------------------------ +

To see a detailed list including all partitions, use the -l flag:

kafka:/> kls -l shocktrade.quotes.avro
+ ------------------------------------------------------------------ +
| topic                   partition  leader       replicas  inSync   |
+ ------------------------------------------------------------------ +
| shocktrade.quotes.avro  0          dev501:9092  1         1        |
| shocktrade.quotes.avro  1          dev501:9093  1         1        |
| shocktrade.quotes.avro  2          dev502:9091  1         1        |
| shocktrade.quotes.avro  3          dev502:9092  1         1        |
| shocktrade.quotes.avro  4          dev502:9093  1         1        |
+ ------------------------------------------------------------------ +

To see the statistics for a specific topic, use the kstats command:

kafka:/> kstats Shocktrade.quotes.csv
+ --------------------------------------------------------------------------------- +
| topic                      partition  startOffset  endOffset  messagesAvailable   |
+ --------------------------------------------------------------------------------- +
| Shocktrade.quotes.csv      0          5945         10796      4851                |
| Shocktrade.quotes.csv      1          5160         10547      5387                |
| Shocktrade.quotes.csv      2          3974         8788       4814                |
| Shocktrade.quotes.csv      3          3453         7334       3881                |
| Shocktrade.quotes.csv      4          4364         8276       3912                |
+ --------------------------------------------------------------------------------- +

Kafka Navigable Cursor

The Kafka module offers the concept of a navigable cursor. Any command that references a specific message offset creates a pointer to that offset, called a navigable cursor. Once the cursor has been established, you can navigate to the first, last, previous, or next message with a single command using kfirst, klast, kprev and knext respectively. Consider the following examples:

To retrieve the first message of a topic partition:

kafka:/> kfirst Shocktrade.quotes.csv 0
[5945:000] 22.47.44.46.22.2c.31.30.2e.38.31.2c.22.39.2f.31.32.2f.32.30.31.34.22.2c.22 | "GDF",10.81,"9/12/2014"," |
[5945:025] 34.3a.30.30.70.6d.22.2c.4e.2f.41.2c.4e.2f.41.2c.2d.30.2e.31.30.2c.22.2d.30 | 4:00pm",N/A,N/A,-0.10,"-0 |
[5945:050] 2e.31.30.20.2d.20.2d.30.2e.39.32.25.22.2c.31.30.2e.39.31.2c.31.30.2e.39.31 | .10 - -0.92%",10.91,10.91 |
[5945:075] 2c.31.30.2e.38.31.2c.31.30.2e.39.31.2c.31.30.2e.38.30.2c.33.36.35.35.38.2c | ,10.81,10.91,10.80,36558, |
[5945:100] 4e.2f.41.2c.22.4e.2f.41.22                                                 | N/A,"N/A"                 |

The previous command resulted in the creation of a navigable cursor (notice below how our prompt has changed).

kafka:Shocktrade.quotes.csv/0:5945> _

Let's view the cursor:

kafka:Shocktrade.quotes.csv/0:5945> kcursor
+ ------------------------------------------------------------------- +
| topic                      partition  offset  nextOffset  decoder   |
+ ------------------------------------------------------------------- +
| Shocktrade.quotes.csv      0          5945    5946                  |
+ ------------------------------------------------------------------- +

Let's view the next message for this topic partition:

kafka:Shocktrade.quotes.csv/0:5945> knext
[5946:000] 22.47.44.50.22.2c.31.38.2e.35.31.2c.22.39.2f.31.32.2f.32.30.31.34.22.2c.22 | "GDP",18.51,"9/12/2014"," |
[5946:025] 34.3a.30.31.70.6d.22.2c.4e.2f.41.2c.4e.2f.41.2c.2d.30.2e.38.39.2c.22.2d.30 | 4:01pm",N/A,N/A,-0.89,"-0 |
[5946:050] 2e.38.39.20.2d.20.2d.34.2e.35.39.25.22.2c.31.39.2e.34.30.2c.31.39.2e.32.37 | .89 - -4.59%",19.40,19.27 |
[5946:075] 2c.31.38.2e.35.31.2c.31.39.2e.33.32.2c.31.38.2e.33.30.2c.31.35.31.36.32.32 | ,18.51,19.32,18.30,151622 |
[5946:100] 30.2c.38.32.32.2e.33.4d.2c.22.4e.2f.41.22                                  | 0,822.3M,"N/A"            |

Let's view the last message for this topic partition:

kafka:Shocktrade.quotes.csv/0:5945> klast
[10796:000] 22.4e.4f.53.50.46.22.2c.30.2e.30.30.2c.22.4e.2f.41.22.2c.22.4e.2f.41.22.2c | "NOSPF",0.00,"N/A","N/A", |
[10796:025] 4e.2f.41.2c.4e.2f.41.2c.4e.2f.41.2c.22.4e.2f.41.20.2d.20.4e.2f.41.22.2c.4e | N/A,N/A,N/A,"N/A - N/A",N |
[10796:050] 2f.41.2c.4e.2f.41.2c.30.2e.30.30.2c.4e.2f.41.2c.4e.2f.41.2c.4e.2f.41.2c.4e | /A,N/A,0.00,N/A,N/A,N/A,N |
[10796:075] 2f.41.2c.22.54.69.63.6b.65.72.20.73.79.6d.62.6f.6c.20.68.61.73.20.63.68.61 | /A,"Ticker symbol has cha |
[10796:100] 6e.67.65.64.20.74.6f.3a.20.3c.61.20.68.72.65.66.3d.22.2f.71.3f.73.3d.4e.4f | nged to: <a href="/q?s=NO |
[10796:125] 53.50.46.22.3e.4e.4f.53.50.46.3c.2f.61.3e.22                               | SPF">NOSPF</a>"           |

Notice above we didn't have to specify the topic or partition because it's defined in our cursor. Let's view the cursor again:

kafka:Shocktrade.quotes.csv/0:10796> kcursor
+ ------------------------------------------------------------------- +
| topic                      partition  offset  nextOffset  decoder   |
+ ------------------------------------------------------------------- +
| Shocktrade.quotes.csv      0          10796   10797                 |
+ ------------------------------------------------------------------- +

Now, let's view the previous record:

kafka:Shocktrade.quotes.csv/0:10796> kprev
[10795:000] 22.4d.4c.50.4b.46.22.2c.30.2e.30.30.2c.22.4e.2f.41.22.2c.22.4e.2f.41.22.2c | "MLPKF",0.00,"N/A","N/A", |
[10795:025] 4e.2f.41.2c.4e.2f.41.2c.4e.2f.41.2c.22.4e.2f.41.20.2d.20.4e.2f.41.22.2c.4e | N/A,N/A,N/A,"N/A - N/A",N |
[10795:050] 2f.41.2c.4e.2f.41.2c.30.2e.30.30.2c.4e.2f.41.2c.4e.2f.41.2c.4e.2f.41.2c.4e | /A,N/A,0.00,N/A,N/A,N/A,N |
[10795:075] 2f.41.2c.22.54.69.63.6b.65.72.20.73.79.6d.62.6f.6c.20.68.61.73.20.63.68.61 | /A,"Ticker symbol has cha |
[10795:100] 6e.67.65.64.20.74.6f.3a.20.3c.61.20.68.72.65.66.3d.22.2f.71.3f.73.3d.4d.4c | nged to: <a href="/q?s=ML |
[10795:125] 50.4b.46.22.3e.4d.4c.50.4b.46.3c.2f.61.3e.22                               | PKF">MLPKF</a>"           |

To retrieve the start and end offsets and number of messages available for a topic across any number of partitions:

kafka:Shocktrade.quotes.csv/0:10795> kstats
+ --------------------------------------------------------------------------------- +
| topic                      partition  startOffset  endOffset  messagesAvailable   |
+ --------------------------------------------------------------------------------- +
| Shocktrade.quotes.csv      0          5945         10796      4851                |
| Shocktrade.quotes.csv      1          5160         10547      5387                |
| Shocktrade.quotes.csv      2          3974         8788       4814                |
| Shocktrade.quotes.csv      3          3453         7334       3881                |
| Shocktrade.quotes.csv      4          4364         8276       3912                |
+ --------------------------------------------------------------------------------- +

NOTE: The kstats command above is equivalent to both kstats Shocktrade.quotes.csv and kstats Shocktrade.quotes.csv 0 4; because of the cursor we previously established, those arguments could be omitted.

Kafka Consumer Groups

To see the current offsets for all consumer group IDs:

kafka:Shocktrade.quotes.csv/0:10795> kconsumers
+ ------------------------------------------------------------------------------------- +
| consumerId  topic                      partition  offset  topicOffset  messagesLeft   |
+ ------------------------------------------------------------------------------------- +
| dev         Shocktrade.quotes.csv      0          5555    10796        5241           |
| dev         Shocktrade.quotes.csv      1          0       10547        10547          |
| dev         Shocktrade.quotes.csv      2          0       8788         8788           |
| dev         Shocktrade.quotes.csv      3          0       7334         7334           |
| dev         Shocktrade.quotes.csv      4          0       8276         8276           |
+ ------------------------------------------------------------------------------------- +

Let's change the committed offset for the current topic/partition (the one to which our cursor is pointing) to 6000:

kafka:Shocktrade.quotes.csv/0:10795> kcommit dev 6000

Let's re-examine the consumer group IDs:

kafka:Shocktrade.quotes.csv/0:10795> kconsumers
+ ------------------------------------------------------------------------------------- +
| consumerId  topic                      partition  offset  topicOffset  messagesLeft   |
+ ------------------------------------------------------------------------------------- +
| dev         Shocktrade.quotes.csv      0          6000    10796        4796           |
| dev         Shocktrade.quotes.csv      1          0       10547        10547          |
| dev         Shocktrade.quotes.csv      2          0       8788         8788           |
| dev         Shocktrade.quotes.csv      3          0       7334         7334           |
| dev         Shocktrade.quotes.csv      4          0       8276         8276           |
+ ------------------------------------------------------------------------------------- +

Notice that the committed offset, for consumer group dev, has been changed to 6000 for partition 0.

Finally, let's use the kfetch command to retrieve just the offset for the consumer group ID:

kafka:Shocktrade.quotes.csv/0:10795> kfetch dev
6000

The Kafka Module also provides the capability for watching messages (in near real-time) as they become available. When you watch a topic, a watch cursor is created, which allows you to move forward through the topic as new messages appear. This differs from navigable cursors, which allow you to move back and forth through a topic at will. You can take advantage of this feature by using two built-in commands: kwatch and kwatchnext.

First, kwatch is used to create a connection to a consumer group:

kafka:Shocktrade.quotes.csv/0:10795> kwatch shocktrade.quotes.avro dev -a file:avro/quotes.avsc

Some output will likely follow as the connection (and watch cursor) are established. Next, if a message is already available, it will be returned immediately:

{
  "symbol":"MNLDF",
  "exchange":"OTHER OTC",
  "lastTrade":1.18,
  "tradeDate":null,
  "tradeTime":"1:50pm",
  "ask":null,
  "bid":null,
  "change":0.0,
  "changePct":0.0,
  "prevClose":1.18,
  "open":null,
  "close":1.18,
  "high":null,
  "low":null,
  "volume":0,
  "marketCap":null,
  "errorMessage":null
}

kafka:[w]shocktrade.quotes.avro/4:23070> _

To retrieve any subsequent messages, you use the kwatchnext command:

kafka:[w]shocktrade.quotes.avro/4:23070> kwatchnext

{
  "symbol":"PXYN",
  "exchange":"OTHER OTC",
  "lastTrade":0.075,
  "tradeDate":null,
  "tradeTime":"3:57pm",
  "ask":null,
  "bid":null,
  "change":0.01,
  "changePct":15.38,
  "prevClose":0.065,
  "open":0.064,
  "close":0.075,
  "high":0.078,
  "low":0.064,
  "volume":1325779,
  "marketCap":2.45E7,
  "errorMessage":null
}

kafka:[w]shocktrade.quotes.avro/4:23071> _

Did you notice the "[w]" in the prompt? This indicates a watch cursor is active. Because each call to kwatchnext also updates the navigable cursor, you are free to use the bi-directional navigation commands knext and kprev (as well as kfirst and klast). Be aware, though, that changes to the navigable cursor do not affect the watch cursor; thus, after a subsequent kwatchnext, the navigable cursor will be overwritten.
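For example, immediately after the kwatchnext call above, you could step back to the previous offset with kprev (output omitted here):

kafka:[w]shocktrade.quotes.avro/4:23071> kprev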

Kafka Inbound Traffic

To retrieve the list of topics with new messages (since your last query):

kafka:/> kinbound
+ ----------------------------------------------------------------------------------------------------------- +
| topic                        partition  startOffset  endOffset  change  msgsPerSec  lastCheckTime           |
+ ----------------------------------------------------------------------------------------------------------- +
| Shocktrade.quotes.avro       3          0            657        65      16.3        09/13/14 06:37:03 PDT   |
| Shocktrade.quotes.avro       0          0            650        64      16.0        09/13/14 06:37:03 PDT   |
| Shocktrade.quotes.avro       1          0            618        56      14.0        09/13/14 06:37:03 PDT   |
| Shocktrade.quotes.avro       2          0            618        49      12.3        09/13/14 06:37:03 PDT   |
| Shocktrade.quotes.avro       4          0            584        40      10.0        09/13/14 06:37:03 PDT   |
+ ----------------------------------------------------------------------------------------------------------- +

Next, we wait a few moments and run the command again:

kafka:/> kinbound
+ ----------------------------------------------------------------------------------------------------------- +
| topic                        partition  startOffset  endOffset  change  msgsPerSec  lastCheckTime           |
+ ----------------------------------------------------------------------------------------------------------- +
| Shocktrade.quotes.avro       1          0            913        295     15.6        09/13/14 06:37:21 PDT   |
| Shocktrade.quotes.avro       3          0            952        295     15.6        09/13/14 06:37:21 PDT   |
| Shocktrade.quotes.avro       2          0            881        263     13.9        09/13/14 06:37:21 PDT   |
| Shocktrade.quotes.avro       4          0            846        262     13.8        09/13/14 06:37:21 PDT   |
| Shocktrade.quotes.avro       0          0            893        243     12.8        09/13/14 06:37:21 PDT   |
+ ----------------------------------------------------------------------------------------------------------- +

Kafka & Avro Integration

Trifecta supports Avro integration for Kafka. The next few examples make use of the following Avro schema:

{
    "type": "record",
    "name": "StockQuote",
    "namespace": "Shocktrade.avro",
    "fields":[
        { "name": "symbol", "type":"string", "doc":"stock symbol" },
        { "name": "lastTrade", "type":["null", "double"], "doc":"last sale price", "default":null },
        { "name": "tradeDate", "type":["null", "long"], "doc":"last sale date", "default":null },
        { "name": "tradeTime", "type":["null", "string"], "doc":"last sale time", "default":null },
        { "name": "ask", "type":["null", "double"], "doc":"ask price", "default":null },
        { "name": "bid", "type":["null", "double"], "doc":"bid price", "default":null },
        { "name": "change", "type":["null", "double"], "doc":"price change", "default":null },
        { "name": "changePct", "type":["null", "double"], "doc":"price change percent", "default":null },
        { "name": "prevClose", "type":["null", "double"], "doc":"previous close price", "default":null },
        { "name": "open", "type":["null", "double"], "doc":"open price", "default":null },
        { "name": "close", "type":["null", "double"], "doc":"close price", "default":null },
        { "name": "high", "type":["null", "double"], "doc":"day's high price", "default":null },
        { "name": "low", "type":["null", "double"], "doc":"day's low price", "default":null },
        { "name": "volume", "type":["null", "long"], "doc":"day's volume", "default":null },
        { "name": "marketCap", "type":["null", "double"], "doc":"market capitalization", "default":null },
        { "name": "errorMessage", "type":["null", "string"], "doc":"error message", "default":null }
    ],
    "doc": "A schema for stock quotes"
}

Let's retrieve the first message from topic Shocktrade.quotes.avro (partition 0) using an Avro schema as our optional message decoder:

kafka:/> kfirst Shocktrade.quotes.avro 0 -a file:avro/quotes.avsc

{
  "symbol":"GES",
  "exchange":"NYSE",
  "lastTrade":21.75,
  "tradeDate":null,
  "tradeTime":"4:03pm",
  "ask":null,
  "bid":null,
  "change":-0.41,
  "changePct":-1.85,
  "prevClose":22.16,
  "open":21.95,
  "close":21.75,
  "high":22.16,
  "low":21.69,
  "volume":650298,
  "marketCap":1.853E9,
  "errorMessage":null
}

Let's view the cursor:

kafka:shocktrade.quotes.avro/2:9728> kcursor
+ ---------------------------------------------------------------------------------- +
| topic                         partition  offset  nextOffset  decoder               |
+ ---------------------------------------------------------------------------------- +
| Shocktrade.quotes.avro        0          0       1           AvroDecoder(schema)   |
+ ---------------------------------------------------------------------------------- +

The kfirst, klast, kprev and knext commands also work with the Avro integration:

kafka:shocktrade.quotes.avro/2:9728> knext

{
  "symbol":"GEO",
  "exchange":"NYSE",
  "lastTrade":41.4,
  "tradeDate":null,
  "tradeTime":"4:03pm",
  "ask":null,
  "bid":null,
  "change":0.76,
  "changePct":1.87,
  "prevClose":40.64,
  "open":40.55,
  "close":41.4,
  "high":41.49,
  "low":40.01,
  "volume":896757,
  "marketCap":2.973E9,
  "errorMessage":null
}

Since Avro data has a standard JSON encoding, we can also express the same message in Avro-compatible JSON format:

kafka:shocktrade.quotes.avro/2:9728> kget -f avro_json

{
  "symbol":"GEO",
  "exchange":{
    "string":"NYSE"
  },
  "lastTrade":{
    "double":41.4
  },
  "tradeDate":null,
  "tradeTime":{
    "string":"4:03pm"
  },
  "ask":null,
  "bid":null,
  "change":{
    "double":0.76
  },
  "changePct":{
    "double":1.87
  },
  "prevClose":{
    "double":40.64
  },
  "open":{
    "double":40.55
  },
  "close":{
    "double":41.4
  },
  "high":{
    "double":41.49
  },
  "low":{
    "double":40.01
  },
  "volume":{
    "long":896757
  },
  "marketCap":{
    "double":2.973E9
  },
  "errorMessage":null
}

Default Avro Decoders

You can also associate a Kafka topic with a "default" Avro schema. To associate an Avro schema with the topic, place the schema file in a subdirectory with the same name as the topic. For example, if I wanted to associate an Avro file named quotes.avsc with the Kafka topic shocktrade.quotes.avro, I'd set up the following file structure:

$HOME/.trifecta/decoders/Shocktrade.quotes.avro/quotes.avsc

The name of the actual schema file can be anything you'd like. Once the file has been placed in the appropriate location, restart Trifecta, and the default decoder will be ready for use.

In our last example, we did the following:

kafka:/> kfirst Shocktrade.quotes.avro 0 -a file:avro/quotes.avsc

With a default decoder, we can do this instead:

kafka:/> kfirst Shocktrade.quotes.avro 0 -a default

Kafka Search by Key

You can view the key for any message by using the kgetkey command. Let's start by retrieving the last available message for a topic/partition.

kafka:/> klast Shocktrade.quotes.csv 0
[10796:000] 22.4e.4f.53.50.46.22.2c.30.2e.30.30.2c.22.4e.2f.41.22.2c.22.4e.2f.41.22.2c | "NOSPF",0.00,"N/A","N/A", |
[10796:025] 4e.2f.41.2c.4e.2f.41.2c.4e.2f.41.2c.22.4e.2f.41.20.2d.20.4e.2f.41.22.2c.4e | N/A,N/A,N/A,"N/A - N/A",N |
[10796:050] 2f.41.2c.4e.2f.41.2c.30.2e.30.30.2c.4e.2f.41.2c.4e.2f.41.2c.4e.2f.41.2c.4e | /A,N/A,0.00,N/A,N/A,N/A,N |
[10796:075] 2f.41.2c.22.54.69.63.6b.65.72.20.73.79.6d.62.6f.6c.20.68.61.73.20.63.68.61 | /A,"Ticker symbol has cha |
[10796:100] 6e.67.65.64.20.74.6f.3a.20.3c.61.20.68.72.65.66.3d.22.2f.71.3f.73.3d.4e.4f | nged to: <a href="/q?s=NO |
[10796:125] 53.50.46.22.3e.4e.4f.53.50.46.3c.2f.61.3e.22                               | SPF">NOSPF</a>"           |

Now let's view the key for this message using the kgetkey command:

kafka:Shocktrade.quotes.csv/0:10796> kgetkey
[000] 31.34.31.30.35.36.33.37.31.34.34.36.35                                     | 1410563714465             |

To ensure you'll notice the change in the cursor's position, let's reposition the cursor to the beginning of the topic/partition:

kafka:Shocktrade.quotes.csv/0:10796> kfirst
[5945:000] 22.47.44.46.22.2c.31.30.2e.38.31.2c.22.39.2f.31.32.2f.32.30.31.34.22.2c.22 | "GDF",10.81,"9/12/2014"," |
[5945:025] 34.3a.30.30.70.6d.22.2c.4e.2f.41.2c.4e.2f.41.2c.2d.30.2e.31.30.2c.22.2d.30 | 4:00pm",N/A,N/A,-0.10,"-0 |
[5945:050] 2e.31.30.20.2d.20.2d.30.2e.39.32.25.22.2c.31.30.2e.39.31.2c.31.30.2e.39.31 | .10 - -0.92%",10.91,10.91 |
[5945:075] 2c.31.30.2e.38.31.2c.31.30.2e.39.31.2c.31.30.2e.38.30.2c.33.36.35.35.38.2c | ,10.81,10.91,10.80,36558, |
[5945:100] 4e.2f.41.2c.22.4e.2f.41.22                                                 | N/A,"N/A"                 |

kafka:Shocktrade.quotes.csv/0:5945> _

Next, using the kfindone command, let's search for the last message by key (using hexadecimal dot-notation):

kafka:Shocktrade.quotes.csv/0:5945> kfindone key is 31.34.31.30.35.36.33.37.31.34.34.36.35
Task is now running in the background (use 'jobs' to view)

You may have received the message Task is now running in the background (use 'jobs' to view). This happens whenever a query takes more than 5 seconds to complete. You can use the jobs command to check the status of a background task; either way, after a few moments the results should appear on the console:

kafka:Shocktrade.quotes.csv/0:5945> Job #1006 completed
[10796:000] 22.4e.4f.53.50.46.22.2c.30.2e.30.30.2c.22.4e.2f.41.22.2c.22.4e.2f.41.22.2c | "NOSPF",0.00,"N/A","N/A", |
[10796:025] 4e.2f.41.2c.4e.2f.41.2c.4e.2f.41.2c.22.4e.2f.41.20.2d.20.4e.2f.41.22.2c.4e | N/A,N/A,N/A,"N/A - N/A",N |
[10796:050] 2f.41.2c.4e.2f.41.2c.30.2e.30.30.2c.4e.2f.41.2c.4e.2f.41.2c.4e.2f.41.2c.4e | /A,N/A,0.00,N/A,N/A,N/A,N |
[10796:075] 2f.41.2c.22.54.69.63.6b.65.72.20.73.79.6d.62.6f.6c.20.68.61.73.20.63.68.61 | /A,"Ticker symbol has cha |
[10796:100] 6e.67.65.64.20.74.6f.3a.20.3c.61.20.68.72.65.66.3d.22.2f.71.3f.73.3d.4e.4f | nged to: <a href="/q?s=NO |
[10796:125] 53.50.46.22.3e.4e.4f.53.50.46.3c.2f.61.3e.22                               | SPF">NOSPF</a>"           |

The result is the last message of the topic. Notice that our cursor has changed, reflecting the move from offset 5945 to 10796.

kafka:Shocktrade.quotes.csv/0:10796> _

Kafka Advanced Search

Building on the Avro Integration, Trifecta offers the ability to execute queries against structured data.

Suppose you want to know how many messages contain a volume greater than 1,000,000; you could issue the kcount command:

kafka:Shocktrade.quotes.avro/0:1> kcount volume > 1000000
1350

The response was 1350, meaning there are 1350 messages containing a volume greater than 1,000,000.

Suppose you want to find a message for Apple (ticker: "AAPL"); you could issue the kfindone command:

kafka:Shocktrade.quotes.avro/0:1> kfindone symbol == "AAPL"

{
  "symbol":"AAPL",
  "exchange":"NASDAQNM",
  "lastTrade":109.01,
  "tradeDate":null,
  "tradeTime":"4:00pm",
  "ask":109.03,
  "bid":109.01,
  "change":0.31,
  "changePct":0.29,
  "prevClose":108.7,
  "open":108.72,
  "close":109.01,
  "high":109.32,
  "low":108.55,
  "volume":33691536,
  "marketCap":6.393E11,
  "errorMessage":null
}

You can also specify complex queries by combining multiple expressions with the and keyword:

kafka:Shocktrade.quotes.avro/1234:4> kfindone lastTrade < 1 and volume > 1000000 -a file:avro/quotes.avsc

{
  "symbol":"MDTV",
  "exchange":"OTHER OTC",
  "lastTrade":0.022,
  "tradeDate":null,
  "tradeTime":"3:27pm",
  "ask":null,
  "bid":null,
  "change":0.0099,
  "changePct":81.82,
  "prevClose":0.0121,
  "open":0.015,
  "close":0.022,
  "high":0.027,
  "low":0.015,
  "volume":1060005,
  "marketCap":125000.0,
  "errorMessage":null
}

Now suppose you want to copy the messages having high volume (1,000,000 or more) to another topic:

kafka:Shocktrade.quotes.avro/3300:3> kfind volume >= 1000000 -o topic:hft.Shocktrade.quotes.avro

Finally, let's look at the results:

kafka:Shocktrade.quotes.avro/3:0> kstats hft.Shocktrade.quotes.avro
+ ---------------------------------------------------------------------------------- +
| topic                       partition  startOffset  endOffset  messagesAvailable   |
+ ---------------------------------------------------------------------------------- +
| hft.Shocktrade.quotes.avro  0          0            283        283                 |
| hft.Shocktrade.quotes.avro  1          0            287        287                 |
| hft.Shocktrade.quotes.avro  2          0            258        258                 |
| hft.Shocktrade.quotes.avro  3          0            245        245                 |
| hft.Shocktrade.quotes.avro  4          0            272        272                 |
+ ---------------------------------------------------------------------------------- +

Let's see how these statistics compare to the original topic:

kafka:Shocktrade.quotes.avro/3:0> kstats Shocktrade.quotes.avro
+ ----------------------------------------------------------------------------------- +
| topic                        partition  startOffset  endOffset  messagesAvailable   |
+ ----------------------------------------------------------------------------------- +
| Shocktrade.quotes.avro       0          0            4539       4539                |
| Shocktrade.quotes.avro       1          0            4713       4713                |
| Shocktrade.quotes.avro       2          0            4500       4500                |
| Shocktrade.quotes.avro       3          0            4670       4670                |
| Shocktrade.quotes.avro       4          0            4431       4431                |
+ ----------------------------------------------------------------------------------- +

Searching By Query

Trifecta provides the ability to perform SQL-like queries against Avro-encoded Kafka topics (see the Kafka & Avro Integration section above). The syntax is very similar to SQL, with a few minor differences. Here's the basic syntax:

select <fieldsToDisplay>
from <topic> with <decoder>
where <searchCriteria>
limit <maximumNumberOfResultsToReturn>

Where decoder may be:

  • json
  • avro:file:path
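For instance, for a topic whose messages are plain JSON documents, the decoder clause would presumably just name the json decoder (the topic and field names below are purely illustrative):

select symbol, lastTrade
from "some.json.topic" with "json"
where lastTrade <= 1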

Consider the following example:

kafka:shocktrade.quotes.avro/0:32050> select symbol, exchange, open, close, high, low
                                      from shocktrade.quotes.avro
                                      with "avro:file:avro/quotes.avsc"
                                      where symbol == "AAPL"

NOTE: Because we didn't specify a limit for the number of results that could be returned, the default value (25) is used.

As with most potentially long-running statements in Trifecta, if the query takes longer than a few seconds to complete, it will be executed in the background.

kafka:shocktrade.quotes.avro/0:32050> Job #607 completed (use 'jobs -v 607' to view results)
+ --------------------------------------------------------------------- +
| partition  offset  symbol  exchange  open    close   high    low      |
+ --------------------------------------------------------------------- +
| 0          32946   AAPL    NASDAQNM  108.72  109.01  109.32  108.55   |
+ --------------------------------------------------------------------- +

NOTE: Although the partition and offset fields weren't specified in the query, they are always included in the query results.

Let's look at another example:

kafka:shocktrade.quotes.avro/0:32050> select symbol, exchange, lastTrade, open, close, high, low
                                      from shocktrade.quotes.avro
                                      with "avro:file:avro/quotes.avsc"
                                      where lastTrade <= 1 and volume >= 1,000,000
                                      limit 25

Task is now running in the background (use 'jobs' to view)
kafka:shocktrade.quotes.avro/0:32050> Job #873 completed (use 'jobs -v 873' to view results)
+ --------------------------------------------------------------------------------- +
| partition  offset  symbol  exchange   lastTrade  open    close   high    low      |
+ --------------------------------------------------------------------------------- +
| 3          34047   NIHDQ   OTHER OTC  0.0509     0.0452  0.0509  0.0549  0.0452   |
| 3          33853   IMRS    NASDAQNM   0.2768     0.25    0.2768  0.28    0.2138   |
| 3          33818   VTMB    OTHER OTC  0.0014     0.0013  0.0014  0.0014  0.0012   |
| 3          33780   MLHC    OTHER OTC  4.0E-4     5.0E-4  4.0E-4  5.0E-4  3.0E-4   |
| 3          33709   ECDC    OTHER OTC  1.0E-4     1.0E-4  1.0E-4  1.0E-4  1.0E-4   |
| 3          33640   PWDY    OTC BB     0.0037     0.0032  0.0037  0.0043  0.003    |
| 3          33534   BPZ     NYSE       0.9599     1.02    0.9599  1.0201  0.92     |
| 3          33520   TAGG    OTHER OTC  2.0E-4     1.0E-4  2.0E-4  2.0E-4  1.0E-4   |
| 3          33515   MDMN    OTHER OTC  0.055      0.051   0.055   0.059   0.051    |
| 3          33469   MCET    OTHER OTC  5.0E-4     5.0E-4  5.0E-4  5.0E-4  5.0E-4   |
| 3          33460   GGSM    OTHER OTC  3.0E-4     3.0E-4  3.0E-4  4.0E-4  3.0E-4   |
| 3          33404   TDCP    OTHER OTC  0.0041     0.0041  0.0041  0.0041  0.0038   |
| 3          33337   GSS     AMEX       0.305      0.27    0.305   0.31    0.266    |
| 3          33254   MDTV    OTHER OTC  0.022      0.015   0.022   0.027   0.015    |
| 2          33246   AMRN    NGM        0.9        0.9128  0.9     0.93    0.88     |
| 2          33110   TRTC    OTHER OTC  0.38       0.3827  0.38    0.405   0.373    |
| 2          33068   AEMD    OTHER OTC  0.2419     0.26    0.2419  0.2625  0.23     |
| 2          33060   ZBB     AMEX       0.6101     0.65    0.6101  0.65    0.55     |
| 2          33058   TUNG    OTHER OTC  0.0019     0.0021  0.0019  0.0021  0.0016   |
| 2          33011   DKAM    OTHER OTC  1.0E-4     2.0E-4  1.0E-4  2.0E-4  1.0E-4   |
| 2          32984   ADMD    OTHER OTC  0.001      0.001   0.001   0.0011  9.0E-4   |
| 2          32905   GLDG    OTHER OTC  2.0E-4     2.0E-4  2.0E-4  2.0E-4  2.0E-4   |
| 2          32808   RBY     AMEX       0.9349     0.821   0.9349  0.94    0.821    |
| 2          32751   PZG     AMEX       0.708      0.6     0.708   0.73    0.5834   |
| 2          32731   PAL     AMEX       0.15       0.15    0.15    0.1555  0.145    |
+ --------------------------------------------------------------------------------- +

Zookeeper Module

Zookeeper: Navigating directories and keys

To view the Zookeeper keys at the current hierarchy level:

zookeeper@dev501:2181:/> zls
consumers
storm
controller_epoch
admin
controller
brokers

To change the current Zookeeper hierarchy level:

zookeeper:localhost:2181:/> zcd brokers
/brokers

Now view the keys at this level:

zookeeper:localhost:2181:/brokers> zls
topics
ids

Let's look at the entire Zookeeper hierarchy recursively from our current path:

zookeeper:localhost:2181/brokers> ztree
/brokers
/brokers/topics
/brokers/topics/shocktrade.quotes.avro
/brokers/topics/shocktrade.quotes.avro/partitions
/brokers/topics/shocktrade.quotes.avro/partitions/0
/brokers/topics/shocktrade.quotes.avro/partitions/0/state
/brokers/topics/shocktrade.quotes.avro/partitions/1
/brokers/topics/shocktrade.quotes.avro/partitions/1/state
/brokers/topics/shocktrade.quotes.avro/partitions/2
/brokers/topics/shocktrade.quotes.avro/partitions/2/state
/brokers/topics/shocktrade.quotes.avro/partitions/3
/brokers/topics/shocktrade.quotes.avro/partitions/3/state
/brokers/topics/shocktrade.quotes.avro/partitions/4
/brokers/topics/shocktrade.quotes.avro/partitions/4/state
/brokers/ids
/brokers/ids/3
/brokers/ids/2
/brokers/ids/1
/brokers/ids/6
/brokers/ids/5
/brokers/ids/4

Zookeeper: Getting and setting key-value pairs

Let's view the contents of one of the keys:

zookeeper:localhost:2181/brokers> zget topics/Shocktrade.quotes.csv/partitions/4/state
[00] 7b.22.63.6f.6e.74.72.6f.6c.6c.65.72.5f.65.70.6f.63.68.22.3a.31.2c.22.6c.65 | {"controller_epoch":1,"le
[25] 61.64.65.72.22.3a.35.2c.22.76.65.72.73.69.6f.6e.22.3a.31.2c.22.6c.65.61.64 | ader":5,"version":1,"lead
[50] 65.72.5f.65.70.6f.63.68.22.3a.30.2c.22.69.73.72.22.3a.5b.35.5d.7d          | er_epoch":0,"isr":[5]}

Since we now know the contents of the key are text-based (JSON in this case), let's look at the value using the format flag (-f json) to render the JSON in a nice human-readable format.

zookeeper:localhost:2181/brokers> zget topics/Shocktrade.quotes.csv/partitions/4/state -f json
{
  "controller_epoch": 3,
  "leader": 6,
  "version": 1,
  "leader_epoch": 2,
  "isr": [6]
}

Next, let's set a key-value pair in Zookeeper:

zookeeper:localhost:2181/> zput /test/message "Hello World"

To retrieve the value we've just set, we can use the zget command again:

zookeeper:localhost:2181/> zget /test/message
[00] 48.65.6c.6c.6f.20.57.6f.72.6c.64                                           | Hello World

The zput command also allows us to set other types of values besides strings. The following example demonstrates setting a binary array literal using hexadecimal dot-notation.

zookeeper:localhost:2181/> zput /test/data de.ad.be.ef.ca.fe.ba.be

The default value types for integer and decimal values are long and double respectively. However, you can also explicitly set the value type using the type flag (-t).

zookeeper:localhost:2181/> zput /test/data2 123.45 -t float

The valid value types are:

  • bytes
  • char
  • double
  • float
  • integer
  • json
  • long
  • short
  • string

To verify that all of the key-value pairs were inserted, we use the zls command again:

zookeeper:localhost:2181/> zls /test
data
message
data2
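Other value types from the list above can be stored the same way; for example (the key paths and values here are illustrative):

zookeeper:localhost:2181/> zput /test/name "Trifecta" -t string
zookeeper:localhost:2181/> zput /test/count 1234 -t integer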

To view all of the Zookeeper commands, use the -m switch and the module name (zookeeper in this case):

zookeeper:localhost:2181/> ? -m zookeeper
+ ----------------------------------------------------------------------------------------- +
| command     module     description                                                        |
+ ----------------------------------------------------------------------------------------- +
| zcd         zookeeper  Changes the current path/directory in ZooKeeper                    |
| zexists     zookeeper  Verifies the existence of a ZooKeeper key                          |
| zget        zookeeper  Retrieves the contents of a specific Zookeeper key                 |
| zls         zookeeper  Retrieves the child nodes for a key from ZooKeeper                 |
| zmk         zookeeper  Creates a new ZooKeeper sub-directory (key)                        |
| zput        zookeeper  Sets a key-value pair in ZooKeeper                                 |
| zreconnect  zookeeper  Re-establishes the connection to Zookeeper                         |
| zrm         zookeeper  Removes a key-value from ZooKeeper (DESTRUCTIVE)                   |
| zruok       zookeeper  Checks the status of a Zookeeper instance (requires netcat)        |
| zstats      zookeeper  Returns the statistics of a Zookeeper instance (requires netcat)   |
| ztree       zookeeper  Retrieves Zookeeper directory structure                            |
+ ----------------------------------------------------------------------------------------- +

Cassandra Module

Trifecta has basic built-in support for querying and inspecting the state of Apache Cassandra clusters. To establish a connection to a Cassandra cluster, use the cqconnect command:

core:/home/ldaniels> cqconnect dev528

From this point, you need to select a keyspace to execute queries against. To do this, use the keyspace command:

cql:cluster1:/> keyspace shocktrade

Now that a keyspace has been selected, you can execute queries against the column families (tables) found within the keyspace.

cql:cluster1:shocktrade> cql "select name, symbol, exchange, lastTrade, volume from stocks where symbol = 'AAPL' limit 5"
+ --------------------------------------------------- +
| name        lasttrade  exchange  symbol  volume     |
+ --------------------------------------------------- +
| Apple Inc.  557.36     NASDAQ    AAPL    14067517   |
+ --------------------------------------------------- +

You can view basic cluster information by issuing the clusterinfo command:

cql:cluster1:shocktrade> clusterinfo
+ -------------------------------------------------------------------- +
| name                   value                                         |
+ -------------------------------------------------------------------- +
| Cluster Name           cluster1                                      |
| Partitioner            org.apache.cassandra.dht.Murmur3Partitioner   |
| Consistency Level      ONE                                           |
| Fetch Size             5000                                          |
| JMX Reporting Enabled  true                                          |
+ -------------------------------------------------------------------- +    

You can describe keyspaces and tables with the describe command:

cql:cluster1:shocktrade> describe keyspace shocktrade 

CREATE KEYSPACE shocktrade WITH REPLICATION = { 'class' : 'org.apache.cassandra.locator.SimpleStrategy', 
'replication_factor': '3' } AND DURABLE_WRITES = true;

You can retrieve the list of all the commands that the Cassandra Module offers with the help (?) command.

? -m cassandra

Elastic Search Module

To establish a connection to a local/remote Elastic Search peer, use the econnect command:

 core:/home/ldaniels> econnect dev501:9200    
 + ----------------------------------- +
 | name                   value        |
 + ----------------------------------- +
 | Cluster Name           ShockTrade   |
 | Status                 green        |
 | Timed Out              false        |
 | Number of Nodes        5            |
 | Number of Data Nodes   5            |
 | Active Shards          10           |
 | Active Primary Shards  5            |
 | Initializing Shards    0            |
 | Relocating Shards      0            |
 | Unassigned Shards      0            |
 + ----------------------------------- +

Once connected, the server statistics above will be returned.

To create a document, use the eput command:

elasticSearch:localhost:9200/> eput /quotes/quote/AMD { "symbol":"AMD", "lastSale":3.33 }

+ --------------------------------------- +
| created  _index  _type  _id  _version   |
+ --------------------------------------- +
| true     quotes  quote  AMD  3          |
+ --------------------------------------- +

To retrieve the document we've just created, use the eget command:

elasticSearch:localhost:9200/quotes/quote/AMD> eget /quotes/quote/AMD  
{
  "symbol":"AMD",
  "lastSale":3.55
}    

Elastic Search: Avro to JSON

Now let's do something slightly more advanced: we'll use Trifecta's powerful search and copy features to copy a message from a Kafka topic to create (or update) an Elastic Search document. NOTE: Some setup steps have been omitted for brevity. See Kafka Advanced Search for full details.

elasticSearch:localhost:9200/quotes/quote/AMD> kfindone symbol == "AAPL" -o es:/quotes/quote/AAPL

{
  "symbol":"AAPL",
  "exchange":"NASDAQNM",
  "lastTrade":109.01,
  "tradeDate":null,
  "tradeTime":"4:00pm",
  "ask":109.03,
  "bid":109.01,
  "change":0.31,
  "changePct":0.29,
  "prevClose":108.7,
  "open":108.72,
  "close":109.01,
  "high":109.32,
  "low":108.55,
  "volume":33691536,
  "marketCap":6.393E11,
  "errorMessage":null
}

Finally, let's view the document we've created:

 kafka:shocktrade.quotes.avro/4:5429> eget /quotes/quote/AAPL
 
{
  "symbol":"AAPL",
  "exchange":"NASDAQNM",
  "lastTrade":109.01,
  "tradeDate":null,
  "tradeTime":"4:00pm",
  "ask":109.03,
  "bid":109.01,
  "change":0.31,
  "changePct":0.29,
  "prevClose":108.7,
  "open":108.72,
  "close":109.01,
  "high":109.32,
  "low":108.55,
  "volume":33691536,
  "marketCap":6.393E11,
  "errorMessage":null
}

Please note that we could also use any of the Kafka message retrieval commands (kget, kfirst, knext, kprev and klast) to copy a Kafka message as an Elastic Search document. See the following example:

kafka:shocktrade.quotes.avro/4:5429> kget -o es:/quotes/quote/AAPL

Sometimes we need to copy a set of messages, and copying them one-by-one can be tedious and time-consuming; however, there is a command for just this sort of use-case. The copy command can be used to copy messages from any Avro- or JSON-capable input source (e.g. Kafka or Elastic Search) to any Avro- or JSON-capable output source. Consider the following example. Here we're copying messages from a Kafka topic to an Elastic Search index using the symbol field of the message as the ID for the Elastic Search document.

es:localhost:9200/> copy -i topic:shocktrade.quotes.avro -o es:/quotes/quote/$symbol -a file:avro/quotes.avsc

Once the operation has completed, the copy statistics are displayed:

es:localhost:9200/> Job #1845 completed (use 'jobs -v 1845' to view results)
+ -------------------------------------------------- +
| runTimeSecs  records  failures  recordsPerSecond   |
+ -------------------------------------------------- +
| 27.0         3367     0         143.3              |
+ -------------------------------------------------- +
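For reference, the Avro schema referenced by -a file:avro/quotes.avsc is what allows the copy command to decode each message, and it is also where the $symbol placeholder in the output URL is resolved from. The schema file itself isn't shown in this walkthrough; a minimal sketch (the record name, namespace and field list below are assumptions based on the quote JSON shown earlier) might look like this:

    {
      "type": "record",
      "name": "StockQuote",
      "namespace": "com.shocktrade.avro",
      "fields": [
        { "name": "symbol",    "type": "string" },
        { "name": "exchange",  "type": ["null", "string"], "default": null },
        { "name": "lastTrade", "type": ["null", "double"], "default": null },
        { "name": "volume",    "type": ["null", "long"],   "default": null }
      ]
    }

Any field of the decoded record (symbol in this example) can then be referenced as $field in the output URL.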

MongoDB Module

Let's start by connecting to a MongoDB instance:

mongo:mongodb$> mconnect dev601

Next, let's choose a database to work in:

mongo:mongodb$> use shocktrade

Finally, let's retrieve a document. In this example, we'll retrieve a stock quote for Apple Inc. (ticker: AAPL):

mongo:mongodb$> mget Stocks { "symbol" : "AAPL" }
{
  "_id":{
    "$oid":"51002b2d84aebf0342cfb659"
  },
  "EBITDA":5.913E10,
  "active":true,
  "ask":98.77,
  "askSize":null,
  "assetClass":"Equity",
  "assetType":"Common Stock",
  "avgVolume":null,
  "avgVolume10Day":59306900,
  "avgVolume3Month":54995300,
  "baseSymbol":null,
  "beta":1.03,
  "bid":98.76,
  "bidSize":null,
  "bookValuePerShare":20.19,
  "businessSummary":"\n    Apple Inc. designs, manufactures, and markets personal computers and related personal computing and mobile communication devices along with a variety of related software, services, peripherals, and networking solutions. The Company sells its products worldwide through its online stores, its retail stores, its direct sales force, third-party wholesalers, and resellers.\n  ",
  "change":-0.42,
  "change52Week":44.37,
  "change52WeekHigh":null,
  "change52WeekLow":null,
  "change52WeekSNP500":16.41,
  "changePct":-0.42,
  "cikNumber":320193,
  "close":98.76
}

To view all of the MongoDB commands, use the -m switch and the module name (mongodb in this case):

mongo:mongodb$> ? -m mongodb
+ ------------------------------------------------------------------ +
| command   module   description                                     |
+ ------------------------------------------------------------------ +
| mconnect  mongodb  Establishes a connection to a MongoDB cluster   |
| mfindone  mongodb  Retrieves a document from MongoDB               |
| mget      mongodb  Retrieves a document from MongoDB               |
| mput      mongodb  Inserts a document into MongoDB                 |
| use       mongodb  Sets the current MongoDB database               |
+ ------------------------------------------------------------------ +            

trifecta's People

Contributors

chatu, ldaniels528, mariussoutier


trifecta's Issues

batch script update

Hi,
could you please update the batch script to use a wildcard on the classpath?
To run the UI on Windows I ended up changing

set "APP_CLASSPATH=%APP_LIB_DIR%..\conf;%APP_LIB_DIR%\com.github.ldaniels528.trifecta-ui-0.22.0rc8b-0.10.1.0.jar;%APP_LIB_DIR%\com.github.ldaniels528.commons-helpers-0.22.0rc8b.jar;%APP_LIB_DIR%\com.github.ldaniels528.trifecta-core-0.22.0rc8b.jar;%APP_LIB_DIR%\com.github.ldaniels528.tabular-0.22.0rc8b.jar;%APP_LIB_DIR%\com.github.ldaniels528.trifecta-common-0.22.0rc8b.jar;%APP_LIB_DIR%\org.scala-lang.scala-library-2.11.8.jar;%APP_LIB_DIR%\org.slf4j.slf4j-api-1.7.21.jar;%APP_LIB_DIR%\commons-io.commons-io-2.4.jar;%APP_LIB_DIR%\net.liftweb.lift-json_2.11-3.0-M7.jar;%APP_LIB_DIR%\org.scala-lang.scalap-2.11.8.jar;%APP_LIB_DIR%\org.scala-lang.scala-compiler-2.11.8.jar;%APP_LIB_DIR%\org.scala-lang.scala-reflect-2.11.8.jar;%APP_LIB_DIR%\org.scala-lang.modules.scala-xml_2.11-1.0.4.jar;%APP_LIB_DIR%\org.scala-lang.modules.scala-parser-combinators_2.11-1.0.4.jar;%APP_LIB_DIR%\com.typesafe.akka.akka-actor_2.11-2.3.14.jar;%APP_LIB_DIR%\com.typesafe.play.play-json_2.11-2.4.8.jar;%APP_LIB_DIR%\com.typesafe.play.play-iteratees_2.11-2.4.8.jar;%APP_LIB_DIR%\org.scala-stm.scala-stm_2.11-0.7.jar;%APP_LIB_DIR%\com.typesafe.config-1.3.0.jar;%APP_LIB_DIR%\com.typesafe.play.play-functional_2.11-2.4.8.jar;%APP_LIB_DIR%\com.typesafe.play.play-datacommons_2.11-2.4.8.jar;%APP_LIB_DIR%\joda-time.joda-time-2.8.1.jar;%APP_LIB_DIR%\org.joda.joda-convert-1.7.jar;%APP_LIB_DIR%\com.fasterxml.jackson.core.jackson-core-2.5.4.jar;%APP_LIB_DIR%\com.fasterxml.jackson.core.jackson-annotations-2.5.4.jar;%APP_LIB_DIR%\com.fasterxml.jackson.core.jackson-databind-2.5.4.jar;%APP_LIB_DIR%\com.fasterxml.jackson.datatype.jackson-datatype-jdk8-2.5.4.jar;%APP_LIB_DIR%\com.fasterxml.jackson.datatype.jackson-datatype-jsr310-2.5.4.jar;%APP_LIB_DIR%\com.twitter.bijection-avro_2.11-0.9.2.jar;%APP_LIB_DIR%\com.twitter.bijection-core_2.11-0.9.2.jar;%APP_LIB_DIR%\org.codehaus.jackson.jackson-core-asl-1.9.13.jar;%APP_LIB_DIR%\org.codehaus.jackson.jackson-mapper-asl-1.9.13.jar;%APP_LIB_DIR%\org.apache.avro.avro-compiler-1.8.1.jar;%APP_LIB_DIR%\org.apache.avro.avro-1.8.1.jar;%APP_LIB_DIR%\com.thoughtworks.paranamer.paranamer-2.7.jar;%APP_LIB_DIR%\org.tukaani.xz-1.5.jar;%APP_LIB_DIR%\commons-lang.commons-lang-2.6.jar;%APP_LIB_DIR%\org.apache.velocity.velocity-1.7.jar;%APP_LIB_DIR%\commons-collections.commons-collections-3.2.1.jar;%APP_LIB_DIR%\org.apache.curator.curator-framework-3.1.0.jar;%APP_LIB_DIR%\org.apache.curator.curator-client-3.1.0.jar;%APP_LIB_DIR%\org.apache.zookeeper.zookeeper-3.5.1-alpha.jar;%APP_LIB_DIR%\commons-cli.commons-cli-1.2.jar;%APP_LIB_DIR%\log4j.log4j-1.2.17.jar;%APP_LIB_DIR%\net.java.dev.javacc.javacc-5.0.jar;%APP_LIB_DIR%\org.apache.curator.curator-test-3.1.0.jar;%APP_LIB_DIR%\org.apache.commons.commons-math-2.2.jar;%APP_LIB_DIR%\org.apache.kafka.kafka_2.11-0.10.1.0.jar;%APP_LIB_DIR%\org.apache.kafka.kafka-clients-0.10.1.0.jar;%APP_LIB_DIR%\net.jpountz.lz4.lz4-1.3.0.jar;%APP_LIB_DIR%\org.xerial.snappy.snappy-java-1.1.2.6.jar;%APP_LIB_DIR%\net.sf.jopt-simple.jopt-simple-4.9.jar;%APP_LIB_DIR%\com.yammer.metrics.metrics-core-2.2.0.jar;%APP_LIB_DIR%\com.101tec.zkclient-0.9.jar;%APP_LIB_DIR%\org.scala-js.scalajs-library_2.11-0.6.14.jar;%APP_LIB_DIR%\com.typesafe.play.twirl-api_2.11-1.1.1.jar;%APP_LIB_DIR%\org.apache.commons.commons-lang3-3.4.jar;%APP_LIB_DIR%\com.typesafe.play.play-server_2.11-2.4.0.jar;%APP_LIB_DIR%\com.typesafe.play.play_2.11-2.4.0.jar;%APP_LIB_DIR%\com.typesafe.play.build-link-2.4.0.jar;%APP_LIB_DIR%\com.typesafe.play.play-exceptions-2.4.0.jar;%APP_LIB_DIR%\org.javassist.javassist-3.19.0-GA.jar;%APP_LIB_DIR%\
com.typesafe.play.play-netty-utils-2.4.0.jar;%APP_LIB_DIR%\org.slf4j.jul-to-slf4j-1.7.12.jar;%APP_LIB_DIR%\org.slf4j.jcl-over-slf4j-1.7.12.jar;%APP_LIB_DIR%\ch.qos.logback.logback-core-1.1.3.jar;%APP_LIB_DIR%\ch.qos.logback.logback-classic-1.1.3.jar;%APP_LIB_DIR%\com.typesafe.akka.akka-slf4j_2.11-2.3.11.jar;%APP_LIB_DIR%\commons-codec.commons-codec-1.10.jar;%APP_LIB_DIR%\xerces.xercesImpl-2.11.0.jar;%APP_LIB_DIR%\xml-apis.xml-apis-1.4.01.jar;%APP_LIB_DIR%\javax.transaction.jta-1.1.jar;%APP_LIB_DIR%\com.google.inject.guice-4.0.jar;%APP_LIB_DIR%\javax.inject.javax.inject-1.jar;%APP_LIB_DIR%\aopalliance.aopalliance-1.0.jar;%APP_LIB_DIR%\com.google.inject.extensions.guice-assistedinject-4.0.jar;%APP_LIB_DIR%\com.typesafe.play.play-netty-server_2.11-2.4.0.jar;%APP_LIB_DIR%\io.netty.netty-3.10.3.Final.jar;%APP_LIB_DIR%\com.typesafe.netty.netty-http-pipelining-1.1.4.jar;%APP_LIB_DIR%\com.typesafe.play.play-cache_2.11-2.4.0.jar;%APP_LIB_DIR%\net.sf.ehcache.ehcache-core-2.6.11.jar;%APP_LIB_DIR%\com.typesafe.play.filters-helpers_2.11-2.4.0.jar;%APP_LIB_DIR%\com.typesafe.play.play-ws_2.11-2.4.0.jar;%APP_LIB_DIR%\com.google.guava.guava-18.0.jar;%APP_LIB_DIR%\com.ning.async-http-client-1.9.21.jar;%APP_LIB_DIR%\oauth.signpost.signpost-core-1.2.1.2.jar;%APP_LIB_DIR%\oauth.signpost.signpost-commonshttp4-1.2.1.2.jar;%APP_LIB_DIR%\org.apache.httpcomponents.httpcore-4.0.1.jar;%APP_LIB_DIR%\org.apache.httpcomponents.httpclient-4.2.5.jar;%APP_LIB_DIR%\commons-logging.commons-logging-1.1.1.jar;%APP_LIB_DIR%\org.slf4j.slf4j-log4j12-1.7.21.jar;%APP_LIB_DIR%\org.webjars.angularjs-1.4.8.jar;%APP_LIB_DIR%\org.webjars.angularjs-toaster-0.4.8.jar;%APP_LIB_DIR%\org.webjars.angular-highlightjs-0.4.3.jar;%APP_LIB_DIR%\org.webjars.highlightjs-8.7.jar;%APP_LIB_DIR%\org.webjars.angular-ui-bootstrap-0.14.3.jar;%APP_LIB_DIR%\org.webjars.bootstrap-3.3.6.jar;%APP_LIB_DIR%\org.webjars.jquery-2.1.3.jar;%APP_LIB_DIR%\org.webjars.angular-ui-router-0.2.13.jar;%APP_LIB_DIR%\org.webjars.font-awesome-4.5.0.jar;%APP_LIB_DIR%\org.webjars.nervgh-angular-file-upload-2.1.1.jar;%APP_LIB_DIR%\org.webjars.webjars-play_2.11-2.4.0-2.jar;%APP_LIB_DIR%\org.webjars.requirejs-2.1.20.jar;%APP_LIB_DIR%\org.webjars.webjars-locator-0.28.jar;%APP_LIB_DIR%\org.webjars.webjars-locator-core-0.27.jar;%APP_LIB_DIR%\org.apache.commons.commons-compress-1.9.jar;%APP_LIB_DIR%\org.webjars.npm.validate.js-0.8.0.jar;%APP_LIB_DIR%\com.github.ldaniels528.trifecta-ui-0.22.0rc8b-0.10.1.0-assets.jar"

to

set "APP_CLASSPATH=%APP_LIB_DIR%..\conf;%APP_LIB_DIR%*"

Trifecta UI: add download link for Avro schema files

It would be handy if there was a download link (or a link to the corresponding entry under the "Decoders" tab) for Avro schema files when inspecting topics that have been configured with Avro decoders. That is, a link that would take you from a Kafka topic to its configured decoder / Avro schema, if any.

Consumer offsets not displaying (missing partitions)

Kafka 0.10.0 on Ubuntu 14.04.04 LTS
Trifecta 0.20.0

Using the kafka-python client, I consume a topic with 9 partitions and then view it in Trifecta > Inspect > Consumers > {topic} > {group}, and it only displays offsets for a portion of the partitions. I expect to see all partitions.


Using the CLI tool, I perform the following, and it confirms they were all consumed with no lag:

ubuntu@db1:~$ kafka-consumer-offset-checker --group etl-diffs --topic rets.nnrmls.Agent --zookeeper localhost:2181

Group           Topic                          Pid Offset          logSize         Lag             Owner
etl-diffs       rets.nnrmls.Agent              0   3175            3175            0               none
etl-diffs       rets.nnrmls.Agent              1   3691            3691            0               none
etl-diffs       rets.nnrmls.Agent              2   3352            3352            0               none
etl-diffs       rets.nnrmls.Agent              3   3432            3432            0               none
etl-diffs       rets.nnrmls.Agent              4   3111            3111            0               none
etl-diffs       rets.nnrmls.Agent              5   2856            2856            0               none
etl-diffs       rets.nnrmls.Agent              6   3247            3247            0               none
etl-diffs       rets.nnrmls.Agent              7   3690            3690            0               none
etl-diffs       rets.nnrmls.Agent              8   3072            3072            0               none

I upgraded to the latest 0.21 and the same issue appears.


"sbt assembly" fails due to dependency problem

at 1fc3462

I've faced a build problem as follows:

[error] (*:update) sbt.ResolveException: unresolved dependency: com.ldaniels528#commons-helpers_2.11;0.1.0: not found
[error] unresolved dependency: com.ldaniels528#tabular_2.11;0.1.1: not found
[error] unresolved dependency: clj-time#clj-time;0.4.1: not found
[error] unresolved dependency: compojure#compojure;1.1.3: not found
[error] unresolved dependency: clout#clout;1.0.1: not found
[error] unresolved dependency: ring#ring-core;1.1.5: not found
[error] unresolved dependency: hiccup#hiccup;0.3.6: not found
[error] unresolved dependency: ring#ring-devel;0.3.11: not found
[error] unresolved dependency: clj-stacktrace#clj-stacktrace;0.2.2: not found
[error] unresolved dependency: ring#ring-jetty-adapter;0.3.11: not found
[error] unresolved dependency: ring#ring-servlet;0.3.11: not found
[error] unresolved dependency: com.twitter#carbonite;1.4.0: not found

I found I could solve part of these by restoring

resolvers += "Clojure Releases" at "http://build.clojure.org/releases/"

But I couldn't find these two artifacts:

"com.ldaniels528" %% "commons-helpers" % "0.1.0",
"com.ldaniels528" %% "tabular" % "0.1.1"

Where can I get them?

[UI] searching data returns 404

Hi,

I am not a Scala developer and use only the UI.

I go to debug topics and click "Find a message".
When I select the topic, whatever I write in the Criteria field, I always get a big pile of CSS on the page and a 404 in the console.

Is this a bug?
What should I write in the "Criteria" field?

Support for plain text messages in KQL

Trifecta v0.22.0rc8b (0.9.0.1)

I have a partition with plain-text messages (created with Apache Kafka's own command-line test producers). I can see them fine one by one in the Observe tab.

However, if I try to run KQL on the topic, I get various decoding error messages for each row, for example:

select * from test1
Malformed JSON message

select * from test1 with text
Incompatible decoder type com.github.ldaniels528.trifecta.messages.codec.MessageCodecFactory$PlainTextCodec$

Resource '/app/index.htm' failed unexpectedly

In 0.19.0, I get the following error:

Resource '/app/index.htm' failed unexpectedly

4216 [main] INFO  com.github.ldaniels528.trifecta.TrifectaShell$ - Open your browser and navigate to http://0.0.0.0:8888
12791 [EmbeddedWebServer-akka.actor.default-dispatcher-4] ERROR com.github.ldaniels528.trifecta.rest.WebContentActor - Resource '/app/index.htm' failed unexpectedly WARNING arguments left: 1
13054 [EmbeddedWebServer-akka.actor.default-dispatcher-4] ERROR com.github.ldaniels528.trifecta.rest.WebContentActor - Resource '/app/favicon.ico' failed unexpectedly WARNING arguments left: 1
23713 [main-SendThread(prd-zookeeper00-03.gut.dglecom.net:2181)] WARN  org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 20045ms for sessionid 0x0
31225 [EmbeddedWebServer-akka.actor.default-dispatcher-4] ERROR com.github.ldaniels528.trifecta.rest.WebContentActor - Resource '/app/index.htm' failed unexpectedly WARNING arguments left: 1
31472 [EmbeddedWebServer-akka.actor.default-dispatcher-3] ERROR com.github.ldaniels528.trifecta.rest.WebContentActor - Resource '/app/favicon.ico' failed unexpectedly WARNING arguments left: 1
43852 [main-SendThread(prd-zookeeper00-01.gut.dglecom.net:2181)] WARN  org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 20032ms for sessionid 0x0

Support Avro IDL

Currently decoders must be .avsc files, but more complex schemas are usually written in .avdl files and should be supported by Trifecta.
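
For context, an Avro IDL file expresses the same record definitions in a more compact, Java-like syntax. A minimal illustrative .avdl (the protocol and record names here are made up for the example) could look like:

    @namespace("com.example.avro")
    protocol QuoteProtocol {
      record StockQuote {
        string symbol;
        union { null, double } lastTrade = null;
      }
    }

As a possible workaround until .avdl files are supported directly, the Avro tools jar can convert them to .avsc schemas via its idl2schemata command.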

Error in communication with broker

When I try to execute the jar file, I get the following error:

Trifecta v0.20.0
log4j:WARN No appenders could be found for logger (com.github.ldaniels528.trifecta.TrifectaShell$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$VxKafkaException: Error communicating with Broker [kafka03:9092] Reason: null
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$modules$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:655)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$modules$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:646)
at com.github.ldaniels528.commons.helpers.ResourceHelper$AutoClose$.use$extension(ResourceHelper.scala:16)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$.com$github$ldaniels528$trifecta$modules$kafka$KafkaMicroConsumer$$getTopicMetadata(KafkaMicroConsumer.scala:646)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:499)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:498)
at scala.Option.map(Option.scala:146)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$.getTopicList(KafkaMicroConsumer.scala:498)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaCliFacade.getTopics(KafkaCliFacade.scala:228)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaModule.getTopics(KafkaModule.scala:746)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaModule$$anonfun$getCommands$30.apply(KafkaModule.scala:111)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaModule$$anonfun$getCommands$30.apply(KafkaModule.scala:111)
at com.github.ldaniels528.trifecta.TxRuntimeContext$$anonfun$interpretCommandLine$1$$anonfun$apply$2.apply(TxRuntimeContext.scala:176)
at com.github.ldaniels528.trifecta.TxRuntimeContext$$anonfun$interpretCommandLine$1$$anonfun$apply$2.apply(TxRuntimeContext.scala:170)
at scala.Option.map(Option.scala:146)
at com.github.ldaniels528.trifecta.TxRuntimeContext$$anonfun$interpretCommandLine$1.apply(TxRuntimeContext.scala:170)
at scala.util.Try$.apply(Try.scala:192)
at com.github.ldaniels528.trifecta.TxRuntimeContext.interpretCommandLine(TxRuntimeContext.scala:156)
at com.github.ldaniels528.trifecta.TxRuntimeContext.interpret(TxRuntimeContext.scala:111)
at com.github.ldaniels528.trifecta.TrifectaShell$TrifectaConsole.execute(TrifectaShell.scala:90)
at com.github.ldaniels528.trifecta.TrifectaShell$.main(TrifectaShell.scala:70)
at com.github.ldaniels528.trifecta.TrifectaShell.main(TrifectaShell.scala)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:111)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$modules$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$20.apply(KafkaMicroConsumer.scala:649)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$modules$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$20.apply(KafkaMicroConsumer.scala:650)
at scala.util.Try$.apply(Try.scala:192)
at com.github.ldaniels528.trifecta.modules.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$modules$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:647)
... 21 more
Runtime error: Error communicating with Broker [kafka03:9092] Reason: null

In the Observe panel, some offsets display the same content as the previous offset

In some topics, when displaying message content on the "Observe" panel, the same content is displayed for different offsets, although in Kafka I can see that there are no two messages with the same content.
Also, when running the "message streaming" feature, those "duplicated" messages are skipped.
I'm using version 0.22.0rc8b.

Avro warnings logged for optional fields (using a union type with null)

I think that, because Trifecta uses Avro 1.7.7, we see many of the following warnings being logged by the Trifecta UI.

[WARNING] Avro: Invalid default for field foobar: "null" not a ["null","int"]

The field foobar was defined as follows in the Avro schema:

    {
      "name": "foobar",
      "type": [
        "null",
        "int"
      ],
      "default": "null"
    },

I think this may be related to a change in Avro that was introduced in version 1.7.7. See the discussion "Passively Converting Null Map To Be Valid" on the avro-user mailing list from August 2014.

Do you know what needs to be changed to fix this warning?
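
If the warning is indeed caused by the stricter default-value validation mentioned above, one likely fix is to make the default a JSON null rather than the string "null", since a union's default must match its first branch:

    {
      "name": "foobar",
      "type": [
        "null",
        "int"
      ],
      "default": null
    },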

REPL Console exits with error

I executed the jar file and observed that the REPL console exits with the following error:

java -jar "c:\Program Files\trifecta\trifecta_cli_0.20.0.bin.jar"
Trifecta v0.20.0
log4j:WARN No appenders could be found for logger (com.github.ldaniels528.trifecta.TrifectaShell$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Type 'help' (or '?') to see the list of available commands
Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class org.fusesource.jansi.internal.Kernel32
at org.fusesource.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
at scala.tools.jline.WindowsTerminal.getConsoleMode(WindowsTerminal.java:261)
at scala.tools.jline.WindowsTerminal.init(WindowsTerminal.java:96)
at scala.tools.jline.TerminalFactory.create(TerminalFactory.java:96)
at scala.tools.jline.TerminalFactory.get(TerminalFactory.java:154)
at scala.tools.jline.console.ConsoleReader.(ConsoleReader.java:87)
at scala.tools.jline.console.ConsoleReader.(ConsoleReader.java:134)
at com.github.ldaniels528.trifecta.TrifectaShell$TrifectaConsole.shell(TrifectaShell.scala:123)
at com.github.ldaniels528.trifecta.TrifectaShell$.main(TrifectaShell.scala:67)

at com.github.ldaniels528.trifecta.TrifectaShell.main(TrifectaShell.scala)

The following version of Java is used:
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) Client VM (build 25.91-b14, mixed mode)

Native consumer group support in v0.22 RCs

Hi, I could not get native consumer group support to work in RC8 and RC6 (version 0.22):

  • trifecta-ui-0.22.0rc8b-0.10.1.0.zip
  • trifecta-ui-0.22.0rc6h-0.10.1.0.zip

After some investigation of the source code, it seems you dropped support for Kafka-native groups and check only for Zookeeper groups.

In version RC6 there was an API exposing /api/consumers and /api/consumers/lite; the lite version worked only with Zookeeper groups, and this is the one the UI used. Although /api/consumers was not used, it worked if called manually.

As for version RC8, there seems to be support only for Zookeeper groups.

Will you release a new version with native Kafka group support?

Some links for investigation:

It seems like you left support for Kafka-native groups in your console client:

Any chance of an estimate for when, if at all, it can be expected? :)

High CPU cost and High Network TIME_WAIT in 0.18.20

Hi Daniels,

I'm trying the new version 0.18.20 of trifecta_ui, but I found very high CPU cost and a high number of connections in TIME_WAIT. I have 70+ topics. With the previous version the CPU load is less than 1, but the new version reaches 4, and the TIME_WAIT count goes from 30 to 15,000. Is there any configuration I should update when upgrading?

Custom port not working

Hi,

As described at https://github.com/ldaniels528/trifecta#configuring-trifecta-ui, I have defined a custom port, 9010, in /data/mybox/.trifecta/config.properties, since I have another application listening on 9000. Even after changing the config, I am still getting an error that 9000 is in use.

Config:

trifecta.elasticsearch.hosts=localhost
trifecta.common.columns=25
trifecta.storm.hosts=localhost
trifecta.common.autoSwitching=true
trifecta.common.encoding=UTF-8
trifecta.common.debugOn=false
trifecta.zookeeper.host=localhost:2181
trifecta.cassandra.hosts=localhost 
trifecta.web.host=localhost
trifecta.web.port=9010

Error:

[info] play.api.Play$ - Application started (Prod)
Oops, cannot start the server.
org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:9000
        at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
        at play.core.server.NettyServer$$anonfun$1.apply(NettyServer.scala:132)
        at play.core.server.NettyServer$$anonfun$1.apply(NettyServer.scala:129)
        at scala.Option.map(Option.scala:146)
        at play.core.server.NettyServer.<init>(NettyServer.scala:129)
        at play.core.server.NettyServerProvider.createServer(NettyServer.scala:200)
        at play.core.server.NettyServerProvider.createServer(NettyServer.scala:199)
        at play.core.server.ServerProvider$class.createServer(ServerProvider.scala:24)
        at play.core.server.NettyServerProvider.createServer(NettyServer.scala:199)
        at play.core.server.ProdServerStart$.start(ProdServerStart.scala:58)
        at play.core.server.ProdServerStart$.main(ProdServerStart.scala:27)
        at play.core.server.ProdServerStart.main(ProdServerStart.scala)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
        at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

I am using trifecta-ui-0.22.0rc8b-0.10.1.0. How can I fix this?

Thanks.
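
Judging from the stack trace, the failing bind comes from Play's ProdServerStart rather than from Trifecta's own trifecta.web.port property, so the standard Play http.port setting may be the one that matters here. A possible thing to try (assuming the dist's launcher script is named bin/trifecta_ui) is:

$ ./bin/trifecta_ui -Dhttp.port=9010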

Can't stop streaming. "No streaming session found"

I can't stop infinite streaming. It shows a modal window with the message "No streaming session found".
I have tried clearing all site data, cache, and cookies, and used an incognito tab and different browsers; nothing helped. Any ideas how to solve this issue?


Trifecta with two-way SSL

My Kafka brokers are using two-way SSL. I am currently unable to configure Trifecta for two-way SSL.

cli doesn't reset the terminal properly on exit

I'm using iTerm on a Mac.

When I enter 'exit' in the CLI to return to bash, the terminal is left in a funky state; specifically, characters I type aren't displayed on the screen.

The term library you're using probably has some kind of reset feature for this.

Support for pluggable custom decoders

It would be great to be able to provide custom decoders, which would implement some interface and be dropped as a JAR into a plugin directory.

What I really want is Google Protocol Buffers support, but I guess the ability to support anything through plugins is much better for everyone.

OutOfMemoryError

Hello Daniel,

I have deployed the latest Trifecta and started the HTTP server, but I encountered an OOM error recently. The following is the log; could you help check what the root cause could be?
[INFO] [05/12/2015 10:37:23.955] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$d] Executed '/rest/getQueriesByTopic/__consumer_offsets' (2 bytes) [0.6 msec]
[INFO] [05/12/2015 10:37:23.958] [EmbeddedWebServer-akka.actor.default-dispatcher-18] [akka://EmbeddedWebServer/user/$e] Executed '/rest/getDecoderByTopic/__consumer_offsets' (43 bytes) [1.0 msec]
[INFO] [05/12/2015 10:37:23.961] [EmbeddedWebServer-akka.actor.default-dispatcher-18] [akka://EmbeddedWebServer/user/$c] Executed '/rest/getMessage/__consumer_offsets/0/0' (0 bytes) [7.0 msec]
[INFO] [05/12/2015 10:37:23.992] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/images/status/loading.gif' (1849 bytes) [0.5 msec]
[INFO] [05/12/2015 10:37:36.140] [EmbeddedWebServer-akka.actor.default-dispatcher-27] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/images/status/loading.gif' (1849 bytes) [0.8 msec]
[INFO] [05/12/2015 10:37:36.146] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$g] Executed '/rest/getReplicas/ubt_topic' (346 bytes) [15.1 msec]
[INFO] [05/12/2015 10:37:36.194] [EmbeddedWebServer-akka.actor.default-dispatcher-27] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/images/status/greenlight.png' (593 bytes) [0.9 msec]
scala.concurrent.impl.Promise$DefaultPromise@4b055e10
[INFO] [05/12/2015 10:37:40.337] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$j] Executed '/rest/getReplicas/UBT_TOPIC_Action' (750 bytes) [7.5 msec]
scala.concurrent.impl.Promise$DefaultPromise@427f1590
scala.concurrent.impl.Promise$DefaultPromise@1ab0b267
scala.concurrent.impl.Promise$DefaultPromise@5db3c177
scala.concurrent.impl.Promise$DefaultPromise@1b8c25c0
scala.concurrent.impl.Promise$DefaultPromise@19f8b008
scala.concurrent.impl.Promise$DefaultPromise@491f7520
scala.concurrent.impl.Promise$DefaultPromise@4c5a049c
scala.concurrent.impl.Promise$DefaultPromise@1858aa14
scala.concurrent.impl.Promise$DefaultPromise@78ffe030
scala.concurrent.impl.Promise$DefaultPromise@1f67e1f0
scala.concurrent.impl.Promise$DefaultPromise@591cfc2d
scala.concurrent.impl.Promise$DefaultPromise@326256f
scala.concurrent.impl.Promise$DefaultPromise@153112e8
scala.concurrent.impl.Promise$DefaultPromise@30b9141c
scala.concurrent.impl.Promise$DefaultPromise@75885ad3
scala.concurrent.impl.Promise$DefaultPromise@5c524be0
scala.concurrent.impl.Promise$DefaultPromise@1e21dfe1
scala.concurrent.impl.Promise$DefaultPromise@15ee7e61
scala.concurrent.impl.Promise$DefaultPromise@222f0c66
scala.concurrent.impl.Promise$DefaultPromise@5de6558f
scala.concurrent.impl.Promise$DefaultPromise@4425239c
scala.concurrent.impl.Promise$DefaultPromise@18c775bd
scala.concurrent.impl.Promise$DefaultPromise@26cfbef0
scala.concurrent.impl.Promise$DefaultPromise@7532a1d
scala.concurrent.impl.Promise$DefaultPromise@60ce24c2
scala.concurrent.impl.Promise$DefaultPromise@451a8466
scala.concurrent.impl.Promise$DefaultPromise@48f4d2c8
scala.concurrent.impl.Promise$DefaultPromise@5fa6296f
scala.concurrent.impl.Promise$DefaultPromise@504da873
scala.concurrent.impl.Promise$DefaultPromise@2d1e946f
scala.concurrent.impl.Promise$DefaultPromise@68f2efe4
scala.concurrent.impl.Promise$DefaultPromise@100d5b5
scala.concurrent.impl.Promise$DefaultPromise@16c54858
scala.concurrent.impl.Promise$DefaultPromise@2c010009
scala.concurrent.impl.Promise$DefaultPromise@600b527c
scala.concurrent.impl.Promise$DefaultPromise@23729ca0
scala.concurrent.impl.Promise$DefaultPromise@1df59c12
scala.concurrent.impl.Promise$DefaultPromise@2b027c27
scala.concurrent.impl.Promise$DefaultPromise@3d5f5474
scala.concurrent.impl.Promise$DefaultPromise@4ea77b1b
scala.concurrent.impl.Promise$DefaultPromise@5946fc71
scala.concurrent.impl.Promise$DefaultPromise@337004d5
scala.concurrent.impl.Promise$DefaultPromise@a306a17
scala.concurrent.impl.Promise$DefaultPromise@5fb1929a
scala.concurrent.impl.Promise$DefaultPromise@202b8cb3
scala.concurrent.impl.Promise$DefaultPromise@10c85d1f
scala.concurrent.impl.Promise$DefaultPromise@1b6afdbc
scala.concurrent.impl.Promise$DefaultPromise@4dc221ac
scala.concurrent.impl.Promise$DefaultPromise@60a8bcc1
scala.concurrent.impl.Promise$DefaultPromise@7dfae5b6
scala.concurrent.impl.Promise$DefaultPromise@68ce5701
scala.concurrent.impl.Promise$DefaultPromise@4ba0b841
scala.concurrent.impl.Promise$DefaultPromise@37f2ec9e
scala.concurrent.impl.Promise$DefaultPromise@5f2dcf8d
scala.concurrent.impl.Promise$DefaultPromise@82ce0e7
scala.concurrent.impl.Promise$DefaultPromise@7336c25
scala.concurrent.impl.Promise$DefaultPromise@4ebec7f5
scala.concurrent.impl.Promise$DefaultPromise@38fe4d15
scala.concurrent.impl.Promise$DefaultPromise@7645e7c2
scala.concurrent.impl.Promise$DefaultPromise@6b08543b
scala.concurrent.impl.Promise$DefaultPromise@3332d549
scala.concurrent.impl.Promise$DefaultPromise@20026067
scala.concurrent.impl.Promise$DefaultPromise@a17a546
scala.concurrent.impl.Promise$DefaultPromise@5a23545a
scala.concurrent.impl.Promise$DefaultPromise@26a493b4
scala.concurrent.impl.Promise$DefaultPromise@2f8eb7f8
scala.concurrent.impl.Promise$DefaultPromise@15fb1f13
scala.concurrent.impl.Promise$DefaultPromise@219c721c
scala.concurrent.impl.Promise$DefaultPromise@7fe0e61
scala.concurrent.impl.Promise$DefaultPromise@51f96252
scala.concurrent.impl.Promise$DefaultPromise@392a028f
scala.concurrent.impl.Promise$DefaultPromise@20df5f07
scala.concurrent.impl.Promise$DefaultPromise@2dec8e27
scala.concurrent.impl.Promise$DefaultPromise@78c5afd
scala.concurrent.impl.Promise$DefaultPromise@7e703b98
scala.concurrent.impl.Promise$DefaultPromise@43d3760f
scala.concurrent.impl.Promise$DefaultPromise@306d2cc3
scala.concurrent.impl.Promise$DefaultPromise@57d5e568
scala.concurrent.impl.Promise$DefaultPromise@3e64b7ab
scala.concurrent.impl.Promise$DefaultPromise@2bd14f6a
scala.concurrent.impl.Promise$DefaultPromise@72e1aefa
scala.concurrent.impl.Promise$DefaultPromise@b4a151b
scala.concurrent.impl.Promise$DefaultPromise@4a746a6c
scala.concurrent.impl.Promise$DefaultPromise@3cffb0c2
scala.concurrent.impl.Promise$DefaultPromise@8fe90df
scala.concurrent.impl.Promise$DefaultPromise@7a96a3a8
scala.concurrent.impl.Promise$DefaultPromise@188e6cdc
scala.concurrent.impl.Promise$DefaultPromise@5dec5173
scala.concurrent.impl.Promise$DefaultPromise@650ab023
scala.concurrent.impl.Promise$DefaultPromise@3556f201
scala.concurrent.impl.Promise$DefaultPromise@1bd3e36b
scala.concurrent.impl.Promise$DefaultPromise@3f2a98ec
scala.concurrent.impl.Promise$DefaultPromise@5918c33
scala.concurrent.impl.Promise$DefaultPromise@40e4896f
scala.concurrent.impl.Promise$DefaultPromise@718984c8
scala.concurrent.impl.Promise$DefaultPromise@38665ee7
scala.concurrent.impl.Promise$DefaultPromise@4bdd2d92
scala.concurrent.impl.Promise$DefaultPromise@19e5763c
scala.concurrent.impl.Promise$DefaultPromise@da459c8
scala.concurrent.impl.Promise$DefaultPromise@1123ffe
scala.concurrent.impl.Promise$DefaultPromise@592596f2
scala.concurrent.impl.Promise$DefaultPromise@4fe3b013
scala.concurrent.impl.Promise$DefaultPromise@f939186
scala.concurrent.impl.Promise$DefaultPromise@4d7f28ee
scala.concurrent.impl.Promise$DefaultPromise@54412dd6
scala.concurrent.impl.Promise$DefaultPromise@246e9059
scala.concurrent.impl.Promise$DefaultPromise@38d7ffb5
scala.concurrent.impl.Promise$DefaultPromise@45973011
scala.concurrent.impl.Promise$DefaultPromise@123b11d2
scala.concurrent.impl.Promise$DefaultPromise@8399898
scala.concurrent.impl.Promise$DefaultPromise@59634ae8
scala.concurrent.impl.Promise$DefaultPromise@266fc58e
scala.concurrent.impl.Promise$DefaultPromise@45bc7927
scala.concurrent.impl.Promise$DefaultPromise@71a88394
scala.concurrent.impl.Promise$DefaultPromise@3a00ad7c
scala.concurrent.impl.Promise$DefaultPromise@ce3a4ad
scala.concurrent.impl.Promise$DefaultPromise@f778580
scala.concurrent.impl.Promise$DefaultPromise@4f6ae561
scala.concurrent.impl.Promise$DefaultPromise@42ef9559
scala.concurrent.impl.Promise$DefaultPromise@78ab8ce2
scala.concurrent.impl.Promise$DefaultPromise@2040c7d9
scala.concurrent.impl.Promise$DefaultPromise@1265604
scala.concurrent.impl.Promise$DefaultPromise@5396bee6
scala.concurrent.impl.Promise$DefaultPromise@18360662
scala.concurrent.impl.Promise$DefaultPromise@4374bc49
scala.concurrent.impl.Promise$DefaultPromise@78c9cffa
scala.concurrent.impl.Promise$DefaultPromise@5288c52a
scala.concurrent.impl.Promise$DefaultPromise@45127d5b
scala.concurrent.impl.Promise$DefaultPromise@73c1b021
scala.concurrent.impl.Promise$DefaultPromise@4f755864
scala.concurrent.impl.Promise$DefaultPromise@424fd79f
scala.concurrent.impl.Promise$DefaultPromise@76b87ecf
scala.concurrent.impl.Promise$DefaultPromise@69de238
scala.concurrent.impl.Promise$DefaultPromise@17295ba0
scala.concurrent.impl.Promise$DefaultPromise@6339fce0
scala.concurrent.impl.Promise$DefaultPromise@606730e0
scala.concurrent.impl.Promise$DefaultPromise@7c163e10
scala.concurrent.impl.Promise$DefaultPromise@58c0401e
scala.concurrent.impl.Promise$DefaultPromise@1f4645de
scala.concurrent.impl.Promise$DefaultPromise@1c1f4ace
scala.concurrent.impl.Promise$DefaultPromise@47352bad
scala.concurrent.impl.Promise$DefaultPromise@4cbb3377
scala.concurrent.impl.Promise$DefaultPromise@4a811dc9
scala.concurrent.impl.Promise$DefaultPromise@4631f2b5
scala.concurrent.impl.Promise$DefaultPromise@74b72425
scala.concurrent.impl.Promise$DefaultPromise@3ff836e8
scala.concurrent.impl.Promise$DefaultPromise@4ca6fc0c
scala.concurrent.impl.Promise$DefaultPromise@76ef8fac
scala.concurrent.impl.Promise$DefaultPromise@2edaa5bc
scala.concurrent.impl.Promise$DefaultPromise@419df61e
scala.concurrent.impl.Promise$DefaultPromise@353da295
scala.concurrent.impl.Promise$DefaultPromise@70fbac4d
scala.concurrent.impl.Promise$DefaultPromise@74fc9215
scala.concurrent.impl.Promise$DefaultPromise@6b8e2c08
scala.concurrent.impl.Promise$DefaultPromise@2e3ba04a
scala.concurrent.impl.Promise$DefaultPromise@379253ea
scala.concurrent.impl.Promise$DefaultPromise@2d721694
scala.concurrent.impl.Promise$DefaultPromise@7859d9fe
scala.concurrent.impl.Promise$DefaultPromise@369d9489
scala.concurrent.impl.Promise$DefaultPromise@c75e921
scala.concurrent.impl.Promise$DefaultPromise@50ac26a8
scala.concurrent.impl.Promise$DefaultPromise@7f93671a
scala.concurrent.impl.Promise$DefaultPromise@58203b36
scala.concurrent.impl.Promise$DefaultPromise@3af96e5b
scala.concurrent.impl.Promise$DefaultPromise@ad3dd9f
scala.concurrent.impl.Promise$DefaultPromise@562d4f90
scala.concurrent.impl.Promise$DefaultPromise@4c45536a
scala.concurrent.impl.Promise$DefaultPromise@18ae3638
scala.concurrent.impl.Promise$DefaultPromise@7e7bf7ae
scala.concurrent.impl.Promise$DefaultPromise@24e51d9
scala.concurrent.impl.Promise$DefaultPromise@12844043
scala.concurrent.impl.Promise$DefaultPromise@60fd5df
scala.concurrent.impl.Promise$DefaultPromise@4fa306dd
scala.concurrent.impl.Promise$DefaultPromise@30c7f01b
scala.concurrent.impl.Promise$DefaultPromise@7cb6ea15
scala.concurrent.impl.Promise$DefaultPromise@3f1269
scala.concurrent.impl.Promise$DefaultPromise@505ba35c
scala.concurrent.impl.Promise$DefaultPromise@64e7405
scala.concurrent.impl.Promise$DefaultPromise@604c2da7
scala.concurrent.impl.Promise$DefaultPromise@1c59d241
scala.concurrent.impl.Promise$DefaultPromise@2503c96a
scala.concurrent.impl.Promise$DefaultPromise@466e3d10
scala.concurrent.impl.Promise$DefaultPromise@4ee38db6
scala.concurrent.impl.Promise$DefaultPromise@4a828ed4
scala.concurrent.impl.Promise$DefaultPromise@1eb87a0f
scala.concurrent.impl.Promise$DefaultPromise@403448f0
scala.concurrent.impl.Promise$DefaultPromise@3571225a
scala.concurrent.impl.Promise$DefaultPromise@648c245
scala.concurrent.impl.Promise$DefaultPromise@72724819
scala.concurrent.impl.Promise$DefaultPromise@244df0f3
scala.concurrent.impl.Promise$DefaultPromise@567ad500
scala.concurrent.impl.Promise$DefaultPromise@ca9c601
scala.concurrent.impl.Promise$DefaultPromise@3d9e7352
scala.concurrent.impl.Promise$DefaultPromise@f4abc9e
scala.concurrent.impl.Promise$DefaultPromise@1d0e3566
scala.concurrent.impl.Promise$DefaultPromise@755eabdc
scala.concurrent.impl.Promise$DefaultPromise@6de7326e
scala.concurrent.impl.Promise$DefaultPromise@36f8661a
scala.concurrent.impl.Promise$DefaultPromise@6cff1cff
scala.concurrent.impl.Promise$DefaultPromise@234bf04b
scala.concurrent.impl.Promise$DefaultPromise@3b4afaae
scala.concurrent.impl.Promise$DefaultPromise@57316973
scala.concurrent.impl.Promise$DefaultPromise@29ce2fe7
scala.concurrent.impl.Promise$DefaultPromise@968053b
scala.concurrent.impl.Promise$DefaultPromise@3068ce6f
scala.concurrent.impl.Promise$DefaultPromise@246e14de
scala.concurrent.impl.Promise$DefaultPromise@63a9382b
[INFO] [05/12/2015 11:29:14.395] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$a] Executed '/rest/getConsumersByTopic/UBT_TOPIC_Action' (2 bytes) [201.7 msec]
scala.concurrent.impl.Promise$DefaultPromise@6735d9a8
2015-05-12 11:29:31 INFO EmbeddedWebServer:89 - Web Socket ff3ed589-46fa-4e18-a25f-914893fe7da3-314390067 closed
[INFO] [05/12/2015 11:40:50.855] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/index.htm' (6879 bytes) [3.7 msec]
[INFO] [05/12/2015 11:40:50.944] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/lib/css/bootstrap.min.css' (113498 bytes) [4.6 msec]
[INFO] [05/12/2015 11:40:50.975] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/lib/js/jquery-2.1.1.min.js' (84245 bytes) [2.7 msec]
[INFO] [05/12/2015 11:40:50.979] [EmbeddedWebServer-akka.actor.default-dispatcher-27] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/lib/css/tomorrow.min.css' (1046 bytes) [0.8 msec]
[INFO] [05/12/2015 11:40:50.980] [EmbeddedWebServer-akka.actor.default-dispatcher-27] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/lib/js/angular-resource.min.js' (3581 bytes) [2.0 msec]
[INFO] [05/12/2015 11:40:50.984] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/lib/js/highlight.min.js' (30174 bytes) [1.5 msec]
[INFO] [05/12/2015 11:40:50.984] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/lib/js/angular.min.js' (124223 bytes) [6.7 msec]
[INFO] [05/12/2015 11:40:50.990] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/lib/js/angular-highlightjs.min.js' (3017 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.007] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$j] Retrieved '/app/lib/js/bootstrap.min.js' (35601 bytes) [1.3 msec]
[INFO] [05/12/2015 11:40:51.010] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$a] Retrieved '/app/lib/js/ui-bootstrap-tpls.min.js' (64837 bytes) [2.8 msec]
[INFO] [05/12/2015 11:40:51.020] [EmbeddedWebServer-akka.actor.default-dispatcher-30] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/css/decoders.css' (1446 bytes) [1.5 msec]
[INFO] [05/12/2015 11:40:51.021] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/css/index.css' (2470 bytes) [1.5 msec]
[INFO] [05/12/2015 11:40:51.024] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/css/inspect.css' (1069 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.039] [EmbeddedWebServer-akka.actor.default-dispatcher-30] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/css/observe.css' (488 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.047] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/css/publish.css' (1067 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.048] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/css/query.css' (1386 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.051] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/js/services/BrokerSvc.js' (713 bytes) [1.7 msec]
[INFO] [05/12/2015 11:40:51.051] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/js/TrifectaApp.js' (3371 bytes) [2.4 msec]
[INFO] [05/12/2015 11:40:51.058] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$j] Retrieved '/app/js/services/ConsumerSvc.js' (1010 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.063] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$a] Retrieved '/app/js/services/DecoderSvc.js' (3447 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.075] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/js/services/MessageSvc.js' (1795 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.076] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/js/services/TopicSvc.js' (8418 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.077] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/js/services/QuerySvc.js' (2193 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.078] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/js/services/WebSocketsSvc.js' (3543 bytes) [0.3 msec]
[INFO] [05/12/2015 11:40:51.083] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/js/services/ZookeeperSvc.js' (1013 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.088] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/js/controllers/DecoderCtrl.js' (13216 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.100] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/js/controllers/InspectCtrl.js' (7761 bytes) [0.8 msec]
[INFO] [05/12/2015 11:40:51.106] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$j] Retrieved '/app/js/controllers/PublishCtrl.js' (3437 bytes) [1.3 msec]
[INFO] [05/12/2015 11:40:51.107] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$a] Retrieved '/app/js/controllers/QueryCtrl.js' (12903 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.109] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/js/dialogs/MessageSearchDialog.js' (2443 bytes) [0.7 msec]
[INFO] [05/12/2015 11:40:51.111] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/js/controllers/ObserveCtrl.js' (22121 bytes) [7.1 msec]
[INFO] [05/12/2015 11:40:51.142] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/images/buttons/delete.png' (3188 bytes) [0.5 msec]
scala.concurrent.impl.Promise$DefaultPromise@16b3e1b4
[INFO] [05/12/2015 11:40:51.484] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$e] Executed '/rest/getBrokerDetails' (655 bytes) [20.2 msec]
[INFO] [05/12/2015 11:40:51.487] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/views/decoders.htm' (4806 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.488] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/views/inspect.htm' (22515 bytes) [0.9 msec]
[INFO] [05/12/2015 11:40:51.491] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/views/observe.htm' (7246 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.510] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$j] Retrieved '/app/views/publish.htm' (3633 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.514] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$a] Retrieved '/app/views/query.htm' (9143 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.517] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/images/tabs/main/inspect-24.png' (455 bytes) [0.3 msec]
[INFO] [05/12/2015 11:40:51.519] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/images/tabs/main/observe-24.png' (1133 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.538] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/images/tabs/main/publish-24.png' (794 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.539] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/images/tabs/main/query-24.png' (499 bytes) [0.3 msec]
[INFO] [05/12/2015 11:40:51.543] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/images/tabs/main/decoders-24.png' (420 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.549] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/images/tabs/decoders/add_schema-16.png' (312 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.564] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/images/common/bulb-16.png' (638 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.565] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/images/tabs/decoders/edit-20.png' (864 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.570] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$j] Retrieved '/app/images/tabs/decoders/cancel-20.png' (679 bytes) [0.8 msec]
[INFO] [05/12/2015 11:40:51.574] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$a] Retrieved '/app/images/tabs/decoders/save-20.png' (728 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.590] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/views/partials/hide_show_empty_topics.htm' (543 bytes) [1.6 msec]
[INFO] [05/12/2015 11:40:51.591] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/views/partials/hints_and_tips.htm' (50 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.595] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/images/tabs/decoders/download-20.png' (1235 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.602] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/images/status/yellowlight.gif' (579 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.616] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/images/common/consumers-16.png' (285 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.618] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/images/status/processing.gif' (1780 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.619] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/images/tabs/inspect/arrow_topic.gif' (200 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.627] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/images/tabs/inspect/arrow_consumer.gif' (206 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.642] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$j] Retrieved '/app/images/status/online-16.png' (283 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.644] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$a] Retrieved '/app/images/status/offline-16.png' (344 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.656] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$c] Executed '/rest/getZkPath/' (429 bytes) [2.7 msec]
[INFO] [05/12/2015 11:40:51.659] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$b] Executed '/rest/getZkInfo/' (91 bytes) [6.3 msec]
[INFO] [05/12/2015 11:40:51.668] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$f] Executed '/rest/getConsumerDetails' (2496 bytes) [203.2 msec]
[INFO] [05/12/2015 11:40:51.674] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/images/buttons/expand.png' (507 bytes) [1.9 msec]
[INFO] [05/12/2015 11:40:51.681] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/views/partials/web_socket_refresh.htm' (724 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.686] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/images/common/root-16.png' (808 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.694] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/images/tabs/inspect/brokers.png' (783 bytes) [2.0 msec]
[INFO] [05/12/2015 11:40:51.697] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/images/tabs/inspect/consumers.png' (3244 bytes) [1.6 msec]
[INFO] [05/12/2015 11:40:51.700] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/images/tabs/inspect/topics.png' (1089 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.704] [EmbeddedWebServer-akka.actor.default-dispatcher-22] [akka://EmbeddedWebServer/user/$j] Retrieved '/app/images/tabs/inspect/replicas-24.png' (660 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.711] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$a] Retrieved '/app/images/tabs/inspect/zookeeper.png' (1111 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.719] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/images/status/loading.gif' (1849 bytes) [0.7 msec]
[INFO] [05/12/2015 11:40:51.724] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/images/tabs/observe/search-16.png' (1167 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.724] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/images/tabs/observe/decoder-16.png' (280 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.730] [EmbeddedWebServer-akka.actor.default-dispatcher-20] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/images/tabs/observe/pause-16.png' (399 bytes) [0.6 msec]
[INFO] [05/12/2015 11:40:51.737] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/images/tabs/observe/play-16.png' (673 bytes) [0.3 msec]
[INFO] [05/12/2015 11:40:51.746] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/images/tabs/observe/message-first.gif' (925 bytes) [1.1 msec]
[INFO] [05/12/2015 11:40:51.749] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/images/tabs/observe/message-prev.gif' (879 bytes) [0.9 msec]
[INFO] [05/12/2015 11:40:51.750] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/images/tabs/observe/message-next.gif' (875 bytes) [0.4 msec]
[INFO] [05/12/2015 11:40:51.755] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$j] Retrieved '/app/images/tabs/observe/message-last.gif' (923 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.765] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$a] Retrieved '/app/images/tabs/query/add_query-16.png' (312 bytes) [0.7 msec]
[INFO] [05/12/2015 11:40:51.775] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/images/status/transparent-16.png' (140 bytes) [3.4 msec]
[INFO] [05/12/2015 11:40:51.777] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/images/tabs/query/remove-16.gif' (409 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.778] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/images/tabs/query/download-16.png' (294 bytes) [1.1 msec]
[INFO] [05/12/2015 11:40:51.784] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/images/tabs/query/save-24.png' (941 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.790] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$f] Retrieved '/app/images/tabs/query/download-24.png' (200 bytes) [0.5 msec]
[INFO] [05/12/2015 11:40:51.801] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$g] Retrieved '/app/images/tabs/query/clock-16.png' (835 bytes) [0.7 msec]
[INFO] [05/12/2015 11:40:51.802] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$h] Retrieved '/app/images/tabs/query/run-24.gif' (1179 bytes) [0.5 msec]
2015-05-12 11:40:51 INFO EmbeddedWebServer:39 - Authorizing web socket handshake...
2015-05-12 11:40:51 INFO EmbeddedWebServer:83 - Web Socket 162141c1-cf0d-45fc-8bb9-39264265060b-116259730 connected
[INFO] [05/12/2015 11:40:51.819] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$i] Retrieved '/app/images/buttons/collapse.png' (432 bytes) [0.4 msec]
[WARN] [05/12/2015 11:40:51.930] [EmbeddedWebServer-akka.actor.default-dispatcher-29] [akka://EmbeddedWebServer/user/$j] No MIME type found for '/app/favicon.ico'
[ERROR] [05/12/2015 11:40:51.930] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$j] Resource '/app/favicon.ico' failed unexpectedly WARNING arguments left: 1
[WARN] [05/12/2015 11:40:51.955] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$a] No MIME type found for '/app/favicon.ico'
[ERROR] [05/12/2015 11:40:51.955] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$a] Resource '/app/favicon.ico' failed unexpectedly WARNING arguments left: 1
[INFO] [05/12/2015 11:40:52.104] [EmbeddedWebServer-akka.actor.default-dispatcher-8] [akka://EmbeddedWebServer/user/$d] Executed '/rest/getTopicSummaries' (27910 bytes) [639.3 msec]
[INFO] [05/12/2015 11:40:52.158] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$b] Retrieved '/app/images/status/loading.gif' (1849 bytes) [1.3 msec]
[INFO] [05/12/2015 11:40:52.162] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$c] Retrieved '/app/images/common/topic_selected-16.png' (758 bytes) [0.8 msec]
[INFO] [05/12/2015 11:40:52.164] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$d] Retrieved '/app/images/common/topic-16.png' (689 bytes) [0.7 msec]
[INFO] [05/12/2015 11:40:52.221] [EmbeddedWebServer-akka.actor.default-dispatcher-24] [akka://EmbeddedWebServer/user/$e] Retrieved '/app/images/common/topic_alert-16.png' (775 bytes) [0.7 msec]
[INFO] [05/12/2015 11:40:52.475] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$g] Executed '/rest/getMessage/__consumer_offsets/0/0' (0 bytes) [9.2 msec]
[INFO] [05/12/2015 11:40:52.478] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$f] Executed '/rest/getDecoderByTopic/__consumer_offsets' (43 bytes) [12.7 msec]
[INFO] [05/12/2015 11:40:52.490] [EmbeddedWebServer-akka.actor.default-dispatcher-28] [akka://EmbeddedWebServer/user/$h] Executed '/rest/getQueriesByTopic/__consumer_offsets' (2 bytes) [24.0 msec]
scala.concurrent.impl.Promise$DefaultPromise@4e168532
scala.concurrent.impl.Promise$DefaultPromise@7a63d684
scala.concurrent.impl.Promise$DefaultPromise@6659f1ee
scala.concurrent.impl.Promise$DefaultPromise@26a49eb1
scala.concurrent.impl.Promise$DefaultPromise@c6fc1ab
scala.concurrent.impl.Promise$DefaultPromise@5d3cb730
scala.concurrent.impl.Promise$DefaultPromise@40ac263e
scala.concurrent.impl.Promise$DefaultPromise@56383bbc
scala.concurrent.impl.Promise$DefaultPromise@7c5293fc
scala.concurrent.impl.Promise$DefaultPromise@c02bc37
scala.concurrent.impl.Promise$DefaultPromise@7579e00d
scala.concurrent.impl.Promise$DefaultPromise@2c741c31
scala.concurrent.impl.Promise$DefaultPromise@4bb475d1
scala.concurrent.impl.Promise$DefaultPromise@5ae1bf66
scala.concurrent.impl.Promise$DefaultPromise@521dbb10
scala.concurrent.impl.Promise$DefaultPromise@c3b2cac
scala.concurrent.impl.Promise$DefaultPromise@151f6a33
scala.concurrent.impl.Promise$DefaultPromise@3442d4bb
scala.concurrent.impl.Promise$DefaultPromise@32f27d4f
scala.concurrent.impl.Promise$DefaultPromise@49ca8009
scala.concurrent.impl.Promise$DefaultPromise@49377f30
scala.concurrent.impl.Promise$DefaultPromise@3178e70b
scala.concurrent.impl.Promise$DefaultPromise@2f096210
scala.concurrent.impl.Promise$DefaultPromise@1debdcc7
scala.concurrent.impl.Promise$DefaultPromise@68e86f3a
scala.concurrent.impl.Promise$DefaultPromise@70d10643
scala.concurrent.impl.Promise$DefaultPromise@3f41599b
scala.concurrent.impl.Promise$DefaultPromise@5bbf74c3
scala.concurrent.impl.Promise$DefaultPromise@595164a4
scala.concurrent.impl.Promise$DefaultPromise@1b4f73a5
scala.concurrent.impl.Promise$DefaultPromise@3c93a9ce
scala.concurrent.impl.Promise$DefaultPromise@2f6fc19e
scala.concurrent.impl.Promise$DefaultPromise@56d3151c
scala.concurrent.impl.Promise$DefaultPromise@317df6d1
scala.concurrent.impl.Promise$DefaultPromise@46740336
scala.concurrent.impl.Promise$DefaultPromise@66914706
scala.concurrent.impl.Promise$DefaultPromise@6a3d4013
scala.concurrent.impl.Promise$DefaultPromise@542da44f
scala.concurrent.impl.Promise$DefaultPromise@66385805
scala.concurrent.impl.Promise$DefaultPromise@1527bb28
scala.concurrent.impl.Promise$DefaultPromise@5f1542b8
scala.concurrent.impl.Promise$DefaultPromise@7e8662ea
scala.concurrent.impl.Promise$DefaultPromise@110db592
scala.concurrent.impl.Promise$DefaultPromise@58426cff
scala.concurrent.impl.Promise$DefaultPromise@5467f65
scala.concurrent.impl.Promise$DefaultPromise@5a8922dc
scala.concurrent.impl.Promise$DefaultPromise@1e27fae8
scala.concurrent.impl.Promise$DefaultPromise@560633e4
scala.concurrent.impl.Promise$DefaultPromise@2f7a2df
scala.concurrent.impl.Promise$DefaultPromise@96181c2
scala.concurrent.impl.Promise$DefaultPromise@580ff250
scala.concurrent.impl.Promise$DefaultPromise@57057740
scala.concurrent.impl.Promise$DefaultPromise@31909a7f
scala.concurrent.impl.Promise$DefaultPromise@7c54a572
scala.concurrent.impl.Promise$DefaultPromise@6a9c9673
scala.concurrent.impl.Promise$DefaultPromise@60c0c3c9
scala.concurrent.impl.Promise$DefaultPromise@7f6d5ff7
scala.concurrent.impl.Promise$DefaultPromise@647db100
scala.concurrent.impl.Promise$DefaultPromise@25bdc12a
scala.concurrent.impl.Promise$DefaultPromise@697a0009
scala.concurrent.impl.Promise$DefaultPromise@709ddbdf
scala.concurrent.impl.Promise$DefaultPromise@990808
scala.concurrent.impl.Promise$DefaultPromise@383180ca
scala.concurrent.impl.Promise$DefaultPromise@34720d09
scala.concurrent.impl.Promise$DefaultPromise@6e508b74
scala.concurrent.impl.Promise$DefaultPromise@7bfd7d1c
scala.concurrent.impl.Promise$DefaultPromise@2ea071a9
scala.concurrent.impl.Promise$DefaultPromise@494f436a
scala.concurrent.impl.Promise$DefaultPromise@e7f1519
scala.concurrent.impl.Promise$DefaultPromise@528ed3b
scala.concurrent.impl.Promise$DefaultPromise@76076158
scala.concurrent.impl.Promise$DefaultPromise@391d1150
scala.concurrent.impl.Promise$DefaultPromise@711e311c
scala.concurrent.impl.Promise$DefaultPromise@60a3ca4d
scala.concurrent.impl.Promise$DefaultPromise@3f4e49c6
scala.concurrent.impl.Promise$DefaultPromise@36abe852
scala.concurrent.impl.Promise$DefaultPromise@5e99afa3
scala.concurrent.impl.Promise$DefaultPromise@439832b3
scala.concurrent.impl.Promise$DefaultPromise@456567a7
scala.concurrent.impl.Promise$DefaultPromise@4f61cead
scala.concurrent.impl.Promise$DefaultPromise@4afb9f8e
scala.concurrent.impl.Promise$DefaultPromise@7d65d483
scala.concurrent.impl.Promise$DefaultPromise@32b0a192
scala.concurrent.impl.Promise$DefaultPromise@56cb3070
scala.concurrent.impl.Promise$DefaultPromise@53e06b18
scala.concurrent.impl.Promise$DefaultPromise@619a1a0c
scala.concurrent.impl.Promise$DefaultPromise@11a3cc6c
scala.concurrent.impl.Promise$DefaultPromise@3a699444
scala.concurrent.impl.Promise$DefaultPromise@a5202de
scala.concurrent.impl.Promise$DefaultPromise@69ac9e71
scala.concurrent.impl.Promise$DefaultPromise@9c4be01
scala.concurrent.impl.Promise$DefaultPromise@7deeca75
scala.concurrent.impl.Promise$DefaultPromise@6e3ec2fa
scala.concurrent.impl.Promise$DefaultPromise@3ab2bd08
scala.concurrent.impl.Promise$DefaultPromise@4cef74eb
scala.concurrent.impl.Promise$DefaultPromise@6eaa4b4d
scala.concurrent.impl.Promise$DefaultPromise@1b8240b3
scala.concurrent.impl.Promise$DefaultPromise@7bf97de9
scala.concurrent.impl.Promise$DefaultPromise@110629f6
scala.concurrent.impl.Promise$DefaultPromise@5a6755c9
scala.concurrent.impl.Promise$DefaultPromise@43319fa8
scala.concurrent.impl.Promise$DefaultPromise@67381c93
scala.concurrent.impl.Promise$DefaultPromise@2ccaa1c4
scala.concurrent.impl.Promise$DefaultPromise@6c6a789d
scala.concurrent.impl.Promise$DefaultPromise@7ce61b56
scala.concurrent.impl.Promise$DefaultPromise@11dcfa20
scala.concurrent.impl.Promise$DefaultPromise@91f814f
scala.concurrent.impl.Promise$DefaultPromise@69b605
Uncaught error from thread [EmbeddedWebServer-akka.actor.default-dispatcher-34] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[EmbeddedWebServer]
scala.concurrent.impl.Promise$DefaultPromise@17862339
java.lang.OutOfMemoryError: GC overhead limit exceeded
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.growArray(ForkJoinPool.java:1090)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1978)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
scala.concurrent.impl.Promise$DefaultPromise@30963aa4
Uncaught error from thread [EmbeddedWebServer-scheduler-1] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[EmbeddedWebServer]
java.lang.OutOfMemoryError: GC overhead limit exceeded
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:409)
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:744)
scala.concurrent.impl.Promise$DefaultPromise@204c8720
Uncaught error from thread [EmbeddedWebServer-akka.actor.default-dispatcher-38] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[EmbeddedWebServer]
java.lang.OutOfMemoryError: GC overhead limit exceeded
scala.concurrent.impl.Promise$DefaultPromise@66ce3a62
Exception in thread "Thread-2" java.lang.OutOfMemoryError: GC overhead limit exceeded
scala.concurrent.impl.Promise$DefaultPromise@77640e45
2015-05-12 12:09:08 INFO EmbeddedWebServer:89 - Web Socket 162141c1-cf0d-45fc-8bb9-39264265060b-116259730 closed
2015-05-12 12:09:10 ERROR BoundedByteBufferReceive:103 - OOME with size 2101
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:91)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getPartitionMetadata$1$$anonfun$16.apply(KafkaMicroConsumer.scala:571)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getPartitionMetadata$1$$anonfun$16.apply(KafkaMicroConsumer.scala:573)
at scala.util.Try$.apply(Try.scala:191)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getPartitionMetadata$1.apply(KafkaMicroConsumer.scala:569)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getPartitionMetadata$1.apply(KafkaMicroConsumer.scala:568)
at com.ldaniels528.trifecta.util.ResourceHelper$AutoClose$.use$extension(ResourceHelper.scala:16)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getPartitionMetadata(KafkaMicroConsumer.scala:568)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getLeaderPartitionMetaDataAndReplicas$1$$anonfun$apply$46.apply(KafkaMicroConsumer.scala:558)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getLeaderPartitionMetaDataAndReplicas$1$$anonfun$apply$46.apply(KafkaMicroConsumer.scala:558)
at com.ldaniels528.trifecta.util.OptionHelper$OptionalExtensions$.$qmark$qmark$extension(OptionHelper.scala:23)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getLeaderPartitionMetaDataAndReplicas$1.apply(KafkaMicroConsumer.scala:558)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getLeaderPartitionMetaDataAndReplicas$1.apply(KafkaMicroConsumer.scala:557)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.com$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getLeaderPartitionMetaDataAndReplicas(KafkaMicroConsumer.scala:557)
at com.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer.<init>(KafkaMicroConsumer.scala:29)
at com.ldaniels528.trifecta.rest.KafkaRestFacade.getLastOffset(KafkaRestFacade.scala:353)
at com.ldaniels528.trifecta.rest.KafkaRestFacade$$anonfun$getTopicDeltas$1$$anonfun$9$$anonfun$apply$56.apply(KafkaRestFacade.scala:469)
at com.ldaniels528.trifecta.rest.KafkaRestFacade$$anonfun$getTopicDeltas$1$$anonfun$9$$anonfun$apply$56.apply(KafkaRestFacade.scala:468)
at scala.Option.flatMap(Option.scala:171)
at com.ldaniels528.trifecta.rest.KafkaRestFacade$$anonfun$getTopicDeltas$1$$anonfun$9.apply(KafkaRestFacade.scala:468)
2015-05-12 12:09:36 INFO SimpleConsumer:68 - Reconnect due to socket error: java.lang.OutOfMemoryError: GC overhead limit exceeded
2015-05-12 12:09:26 INFO WebServer:198 - Socko server 'WebServer' stopped

Read consumer offset from Kafka directly

Hi Lawrence,

Currently, Trifecta reads consumer offsets from the ZooKeeper path /consumers/[group_id]/offsets/[topic]/[broker_id-partition_id]. Since 0.8.2, Kafka supports storing offsets in Kafka itself by enabling "offsets.storage=kafka". Does Trifecta have plans to support this feature?

Thanks.
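For reference, a minimal sketch of the settings involved (property names as documented by Kafka 0.8.2 and by this README's configuration section; the group ID is a placeholder):

    # Kafka 0.8.2+ high-level consumer: commit offsets to Kafka instead of ZooKeeper
    offsets.storage = kafka
    dual.commit.enabled = false

    # Trifecta: register the consumer group IDs whose Kafka-stored offsets should be visible
    trifecta.kafka.consumers.native = group_id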

Trifecta Console unable to connect to Kafka enable with Kerberos

I am trying to connect to Kafka 0.10 with trifecta-ui-0.22.0rc8b-0.10.1.0. It works before enabling SSL & SASL over Kerberos, but once I enable SASL over Kerberos in Kafka, Trifecta is unable to connect.

[info] application - GET /api/consumers => 200 [101.9 ms]
[info] application - GET /api/topics/details => 500 [106.7 ms]
[info] application - GET /api/brokers/grouped => 200 [389.8 ms]
[info] application - GET /api/zookeeper/path?path=/ => 200 [94.3 ms]
[info] application - GET /api/zookeeper/info?path=/ => 200 [362.7 ms]
[info] application - GET /api/config => 200 [1.7 ms]
[info] application - created actor com.github.ldaniels528.trifecta.ui.actors.SSEClientHandlingActor
[info] application - Registering new SSE session '0b3d8c93d9794facae868dd0c206eb3e'...
[info] application - GET /api/sse/connect => 200 [2.7 ms]
2017-02-24 16:10:42 INFO SimpleConsumer:78 - Reconnect due to error:
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:85)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:111)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$45.apply(KafkaMicroConsumer.scala:691)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$45.apply(KafkaMicroConsumer.scala:692)
at scala.util.Try$.apply(Try.scala:192)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:689)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:688)
at com.github.ldaniels528.commons.helpers.ResourceHelper$AutoClose$.use$extension(ResourceHelper.scala:16)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata(KafkaMicroConsumer.scala:688)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:583)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:582)
at scala.Option.map(Option.scala:146)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.getTopicList(KafkaMicroConsumer.scala:582)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaPlayRestFacade.getTopicSummaries(KafkaPlayRestFacade.scala:486)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaController$$anonfun$getTopicSummaries$1$$anonfun$apply$35.apply(KafkaController.scala:113)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaController$$anonfun$getTopicSummaries$1$$anonfun$apply$35.apply(KafkaController.scala:113)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$VxKafkaException: Error communicating with Broker [null:-1] Reason: null
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:697)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:688)
at com.github.ldaniels528.commons.helpers.ResourceHelper$AutoClose$.use$extension(ResourceHelper.scala:16)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata(KafkaMicroConsumer.scala:688)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:583)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:582)
at scala.Option.map(Option.scala:146)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.getTopicList(KafkaMicroConsumer.scala:582)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaPlayRestFacade.getTopicSummaries(KafkaPlayRestFacade.scala:486)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaController$$anonfun$getTopicSummaries$1$$anonfun$apply$35.apply(KafkaController.scala:113)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaController$$anonfun$getTopicSummaries$1$$anonfun$apply$35.apply(KafkaController.scala:113)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:111)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$45.apply(KafkaMicroConsumer.scala:691)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$45.apply(KafkaMicroConsumer.scala:692)
at scala.util.Try$.apply(Try.scala:192)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:689)
... 18 more
[info] application - GET /api/topics/details => 500 [92.4 ms]
[info] application - GET /api/consumers => 200 [91.6 ms]
[info] application - GET /api/brokers/grouped => 200 [284.8 ms]
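For context, a SASL/Kerberos-enabled Kafka 0.10 client would normally carry settings along these lines (a sketch of standard Kafka client properties, not Trifecta options; paths and values are placeholders). Note that the stack trace above goes through the legacy kafka.consumer.SimpleConsumer, which predates Kafka's security protocols, so that code path likely cannot speak SASL regardless of configuration:

    # standard Kafka 0.10 client security settings (sketch)
    security.protocol = SASL_PLAINTEXT
    sasl.kerberos.service.name = kafka
    # plus a JAAS configuration passed to the JVM, e.g.:
    #   -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf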

trifecta.web.port setting is not working

I have set the following in $HOME/.trifecta/config.properties and restarted the process:

trifecta.web.host = localhost
trifecta.web.port = 8888

However, the web server still listens on port 9000?

Thanks!

[Feature Request] Avro schema registry support (Confluent)

Hi Lawrence, I like what you've done with Trifecta, especially the functionality to dip into and query topics. Would you consider adding support for querying the Confluent schema registry to retrieve the Avro schema associated with each Kafka message? (http://www.confluent.io/product). I suspect that a configuration item in config.properties for the registry address would suffice: if the schema registry address is set, always use the registry to retrieve the Avro schema when decoding Avro messages.
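A possible shape for such a setting, purely as a sketch (this property name is hypothetical and does not exist in Trifecta today):

    # hypothetical property suggested by this request, not an actual Trifecta setting
    trifecta.avro.schema.registry.url = http://localhost:8081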

Query window default should enclose topic names in quotes when the name contains non-alphanumeric characters

If the topic name contains non-alphanumeric characters (such as dash) the query window generates an error:

play.api.UnexpectedException: Unexpected exception[IllegalArgumentException: Illegal argument at "ct * from topic" near "select * from topic" (position 19)]
	at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:250) ~[com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]
	at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:180) ~[com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]
	at play.api.GlobalSettings$class.onError(GlobalSettings.scala:179) [com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]
	at play.api.mvc.WithFilters.onError(Filters.scala:93) [com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]
	at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:94) [com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]
	at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:273) [com.typesafe.play.play-netty-server_2.11-2.4.0.jar:2.4.0]
	at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:269) [com.typesafe.play.play-netty-server_2.11-2.4.0.jar:2.4.0]
	at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346) [org.scala-lang.scala-library-2.11.8.jar:na]
	at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345) [org.scala-lang.scala-library-2.11.8.jar:na]
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [org.scala-lang.scala-library-2.11.8.jar:na]
Caused by: java.lang.IllegalArgumentException: Illegal argument at "ct * from topic" near "select * from topic" (position 19)
	at com.github.ldaniels528.trifecta.messages.query.parser.KafkaQueryTokenizer.parse(KafkaQueryTokenizer.scala:31) ~[com.github.ldaniels528.trifecta-core-0.22.0rc8b.jar:0.22.0rc8b]
	at com.github.ldaniels528.trifecta.messages.query.parser.KafkaQueryTokenizer$.parse(KafkaQueryTokenizer.scala:130) ~[com.github.ldaniels528.trifecta-core-0.22.0rc8b.jar:0.22.0rc8b]
	at com.github.ldaniels528.trifecta.messages.query.parser.KafkaQueryParser$.apply(KafkaQueryParser.scala:24) ~[com.github.ldaniels528.trifecta-core-0.22.0rc8b.jar:0.22.0rc8b]
	at com.github.ldaniels528.trifecta.ui.controllers.KafkaPlayRestFacade.executeQuery(KafkaPlayRestFacade.scala:93) ~[com.github.ldaniels528.trifecta-ui-0.22.0rc8b-0.10.1.0.jar:0.22.0rc8b-0.10.1.0]
	at com.github.ldaniels528.trifecta.ui.controllers.QueryController$$anonfun$executeQuery$1.apply(QueryController.scala:21) ~[com.github.ldaniels528.trifecta-ui-0.22.0rc8b-0.10.1.0.jar:0.22.0rc8b-0.10.1.0]
	at com.github.ldaniels528.trifecta.ui.controllers.QueryController$$anonfun$executeQuery$1.apply(QueryController.scala:18) ~[com.github.ldaniels528.trifecta-ui-0.22.0rc8b-0.10.1.0.jar:0.22.0rc8b-0.10.1.0]
	at play.api.mvc.Action$.invokeBlock(Action.scala:533) ~[com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]
	at play.api.mvc.Action$.invokeBlock(Action.scala:530) ~[com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]
	at play.api.mvc.ActionBuilder$$anon$1.apply(Action.scala:493) ~[com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]
	at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:105) ~[com.typesafe.play.play_2.11-2.4.0.jar:2.4.0]

To resolve this, the topic name must be enclosed in quotes.

This is not immediately obvious. Right now by default the query window is populated with an example query as follows:
select * from topic-name with default

which will not work.

Enclose the topic name in quotes when the topic name contains non-alphanumeric characters
select * from "topic-name" with default

It would also be useful to mention this caveat in the Hints & Tips panel.

Errors encountered while loading the project

Hi,

I tried to load the project in IntelliJ. I imported it as an SBT project and encountered the following errors.

[error] (trifecta_cassandra/*:ssExtractDependencies) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found
[error] (trifecta_mongodb/*:ssExtractDependencies) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found
[error] (trifecta_etl/*:ssExtractDependencies) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found
[error] (trifecta_azure/*:ssExtractDependencies) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found
[error] (trifecta_elasticsearch/*:ssExtractDependencies) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found
[error] (trifecta_cassandra/*:update) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found
[error] (trifecta_mongodb/*:update) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found
[error] (trifecta_azure/*:update) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found
[error] (trifecta_elasticsearch/*:update) sbt.ResolveException: unresolved dependency: com.github.ldaniels528#commons-helpers_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#tabular_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-core_2.11;0.22.0rc8b: not found
[error] unresolved dependency: com.github.ldaniels528#trifecta-modules-core_2.11;0.22.0rc8b: not found

How can I fix this? Thanks in advance.
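These com.github.ldaniels528 artifacts are helper libraries and sibling modules rather than Maven Central releases, so one plausible workaround (an assumption, not a verified fix) is to build each missing dependency from its own source tree and publish it to the local Ivy cache before importing the project:

    $ cd <dependency-project>    # e.g. a checkout of the missing library (illustrative path)
    $ sbt publish-local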

Ignore broken topics

Hi,

trifecta finds my Kafka brokers without a problem, but it can't find the topics or consumers.

I can only see this error in the console:
Resource '/rest/getTopicSummaries' failed unexpectedly WARNING arguments left: 1

(Btw., the "arguments left" part means the Akka logging was used in the wrong order.)

trifecta-ui-0.19.0 can't connect to Kafka brokers

I'm trying to use the new trifecta-ui, but it doesn't work.

It seems to be able to connect to ZooKeeper, but fails to connect to the Kafka brokers. From the console log, /api/topics/details returns 500.

15422 [ForkJoinPool-2-worker-5] INFO application - GET /api/topics/details ~> 500 [2316.9 ms]

Detailed logs as follows:

$ bin/trifecta_ui
241 [main] WARN application - Logger configuration in conf files is deprecated and has no effect. Use a logback configuration file instead.
1034 [application-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
1043 [main] INFO play.api.libs.concurrent.ActorSystemProvider - Starting application default Akka system: application
1131 [main] INFO application - Application has started
1144 [main] INFO play.api.Play$ - Application started (Prod)
1279 [main] INFO play.core.server.NettyServer$ - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
3581 [application-akka.actor.default-dispatcher-4] INFO application - Registering new SSE session '815078639a1a464895cca9f58658ca0f'...
3617 [application-akka.actor.default-dispatcher-2] WARN application - application.conf @ file:/Users/nobusue/petadata/trifecta/trifecta_ui-0.19.0/conf/application.conf: 8: application.secret is deprecated, use play.crypto.secret instead
3885 [ForkJoinPool-2-worker-7] INFO application - GET /api/sse/connect ~> 200 [358.7 ms]
13107 [application-akka.actor.default-dispatcher-4] INFO application - Loading Trifecta configuration...
13107 [application-akka.actor.default-dispatcher-4] INFO application - Starting Zookeeper client...
13109 [application-akka.actor.default-dispatcher-7] INFO com.github.ldaniels528.trifecta.actors.ReactiveEventsActor - Preparing streaming consumer updates...
13122 [application-akka.actor.default-dispatcher-8] INFO com.github.ldaniels528.trifecta.actors.ReactiveEventsActor - Preparing streaming topic updates...
13122 [application-akka.actor.default-dispatcher-8] INFO akka.actor.DeadLetterActorRef - Message [akka.actor.LightArrayRevolverScheduler$$anon$2] from Actor[akka://application/user/$b#-1990922494] to Actor[akka://application/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
13124 [application-akka.actor.default-dispatcher-3] INFO akka.actor.DeadLetterActorRef - Message [akka.actor.LightArrayRevolverScheduler$$anon$2] from Actor[akka://application/user/$b#-1990922494] to Actor[akka://application/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
13171 [application-akka.actor.default-dispatcher-4] INFO org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
13218 [application-akka.actor.default-dispatcher-3] INFO application - Registering new SSE session '98b7197104754b1290013f30129455c6'...
13222 [ForkJoinPool-2-worker-3] INFO application - GET /api/sse/connect ~> 200 [12.1 ms]
13469 [application-akka.actor.default-dispatcher-4-EventThread] INFO org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
14907 [ForkJoinPool-2-worker-5] INFO application - GET /api/zookeeper/path?path=/ ~> 200 [1616.2 ms]
14920 [ForkJoinPool-2-worker-5] INFO application - GET /api/zookeeper/info?path=/ ~> 200 [1629.3 ms]
log4j:WARN No appenders could be found for logger (kafka.consumer.SimpleConsumer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
15422 [ForkJoinPool-2-worker-5] INFO application - GET /api/topics/details ~> 500 [2316.9 ms]
28265 [application-akka.actor.default-dispatcher-7] ERROR akka.dispatch.TaskInvocation - Error communicating with Broker [ip-10-0-129-11.us-west-1.compute.internal:9092] Reason: null
com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$VxKafkaException: Error communicating with Broker [ip-10-0-129-11.us-west-1.compute.internal:9092] Reason: null
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:612) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:603) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
at com.github.ldaniels528.commons.helpers.ResourceHelper$AutoClose$.use$extension(ResourceHelper.scala:16) ~[com.github.ldaniels528.commons-helpers_2.11-0.1.2.jar:0.1.2]
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata(KafkaMicroConsumer.scala:603) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:456) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:455) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
at scala.Option.map(Option.scala:146) ~[org.scala-lang.scala-library-2.11.7.jar:na]
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.getTopicList(KafkaMicroConsumer.scala:455) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
at com.github.ldaniels528.trifecta.controllers.KafkaRestFacade.getTopicDeltas(KafkaRestFacade.scala:431) ~[com.github.ldaniels528.trifecta_ui-0.19.0.jar:0.19.0]
at com.github.ldaniels528.trifecta.actors.ReactiveEventsActor$$anonfun$com$github$ldaniels528$trifecta$actors$ReactiveEventsActor$$startStreamingTopicUpdates$1.apply$mcV$sp(ReactiveEventsActor.scala:62) ~[com.github.ldaniels528.trifecta_ui-0.19.0.jar:0.19.0]
at akka.actor.Scheduler$$anon$5.run(Scheduler.scala:79) ~[com.typesafe.akka.akka-actor_2.11-2.3.14.jar:na]
at akka.actor.LightArrayRevolverScheduler$$anon$2$$anon$1.run(Scheduler.scala:242) ~[com.typesafe.akka.akka-actor_2.11-2.3.14.jar:na]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40) ~[com.typesafe.akka.akka-actor_2.11-2.3.14.jar:na]
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397) [com.typesafe.akka.akka-actor_2.11-2.3.14.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [org.scala-lang.scala-library-2.11.7.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [org.scala-lang.scala-library-2.11.7.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [org.scala-lang.scala-library-2.11.7.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [org.scala-lang.scala-library-2.11.7.jar:na]
Caused by: java.nio.channels.ClosedChannelException: null
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[org.apache.kafka.kafka_2.11-0.9.0.0.jar:na]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[org.apache.kafka.kafka_2.11-0.9.0.0.jar:na]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[org.apache.kafka.kafka_2.11-0.9.0.0.jar:na]
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:111) ~[org.apache.kafka.kafka_2.11-0.9.0.0.jar:na]
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$17.apply(KafkaMicroConsumer.scala:606) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$17.apply(KafkaMicroConsumer.scala:607) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
at scala.util.Try$.apply(Try.scala:192) ~[org.scala-lang.scala-library-2.11.7.jar:na]
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:604) ~[com.github.ldaniels528.trifecta_cli-0.19.0.jar:0.19.0]
... 17 common frames omitted
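One common cause (an assumption based on the broker address in the error, not a confirmed diagnosis) is that the broker registered its internal EC2 hostname in ZooKeeper, and that name is not resolvable from the machine running trifecta-ui. For Kafka 0.8/0.9 brokers, the externally reachable address can be pinned in server.properties, for example:

    # Kafka 0.8/0.9 broker settings (sketch; use a hostname reachable by Trifecta)
    advertised.host.name = broker1.example.com
    advertised.port = 9092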

BDQL should support uppercase verbs

This BDQL query works:

select field1, field2 from "topic:foobar" with default where bar == "quux" limit 5

The following one doesn't, failing with "Unexpected end of statement":

SELECT field1, field2 FROM "topic:foobar" WITH default WHERE bar == "quux" LIMIT 5

Message count does not consider offset 0 (v0.21.2)

The message count ignores offset 0, leading to 1 fewer message per partition and p fewer in total, where p is the number of partitions (see the sketch after the example below).

E.g. if you publish 100K messages to a single topic with 2 partitions, a possible result will be:

99,998 total messages

  • In partition 0: offsets between 0 and 50264 with total of 50,264 messages
  • In partition 1: offsets between 0 and 49734 with total of 49,734 messages

Whereas expected result should be:

100,000 total messages

  • In partition 0: offsets between 0 and 50264 with total of 50,265 messages
  • In partition 1: offsets between 0 and 49734 with total of 49,735 messages
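A minimal sketch of the arithmetic (illustrative values only, not Trifecta's actual code):

    // partition 0 of the example above: offsets 0..50264 inclusive
    val first = 0L
    val last  = 50264L
    val reported = last - first      // 50264 -- what the UI appears to show
    val expected = last - first + 1  // 50265 -- offset 0 counted as well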

I found this issue in the following UI sections:

  • Inspect-->Leaders
  • Inspect-->Replicas
  • Observe-->Message Topics
  • Observe-->Topic Offsets

Can't see consumers

This may well be pilot error on my part...

I start my Kafka server (0.9.0.1), then the Trifecta CLI (I built the web client but couldn't run it; publishing these Trifecta flavors in a Docker image would be very cool! :-))

I run some tests that create, write to, and read from a topic.

In trifecta I do this:

kbrokers -- I can see my broker w/jmx port
kls -- I can see two topics: mine (lowercaseStrings), plus __consumer_offsets
kstats lowercaseStrings -- I see 4 partitions I expect with reasonable/expected values for offsets

but if I do:

kconsumers -- I get 'No data returned'

How can I see my consumers? I'd like to know where my consumers' offsets are set.
I did add this to my config file in ~/.trifecta: trifecta.storm.kafka.consumers.partitionManager = false
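If these are Kafka 0.9-style (Kafka-native) consumer groups, one likely explanation (an assumption, not a confirmed diagnosis) is that the group IDs simply have not been registered, since only explicitly registered native consumer groups are shown:

    # register the consumer group IDs to monitor (placeholder value)
    trifecta.kafka.consumers.native = my-consumer-group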

authentication

Hello
Maybe you can add basic authentication or user_name to application.conf?
Thanks

Topic message counts off by 1 per partition

I view a 9-partition topic in Trifecta and it says I have 91 messages. I consume those messages and publish them to another database. When I query the sink (the other database), it has 100 records. It appears the counts in Trifecta are off by 1 message per partition in the offsets and message counts. v0.21 (latest)

This is evident as well in issue #26: when I compared the consumer offsets, Trifecta's offset counts were 1 record different from the counts reported by Kafka's own CLI tool.

Cannot connect to brokers from remote server

Hi, I cannot see the topics from the broker. Kafka and ZooKeeper were started in Spain, and I am connecting from my country, PH. The problem occurs here in PH, and I am already using the NAT IP in the Trifecta configuration (config.properties).

2018-04-04 00:48:42 INFO SimpleConsumer:76 - Reconnect due to error:
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:85)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:111)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$45.apply(KafkaMicroConsumer.scala:691)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$45.apply(KafkaMicroConsumer.scala:692)
at scala.util.Try$.apply(Try.scala:192)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:689)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:688)
at com.github.ldaniels528.commons.helpers.ResourceHelper$AutoClose$.use$extension(ResourceHelper.scala:16)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata(KafkaMicroConsumer.scala:688)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:583)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:582)
at scala.Option.map(Option.scala:146)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.getTopicList(KafkaMicroConsumer.scala:582)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaPlayRestFacade.getTopicSummaries(KafkaPlayRestFacade.scala:486)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaController$$anonfun$getTopicSummaries$1$$anonfun$apply$35.apply(KafkaController.scala:113)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaController$$anonfun$getTopicSummaries$1$$anonfun$apply$35.apply(KafkaController.scala:113)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$VxKafkaException: Error communicating with Broker [10.71.86.158:9093] Reason: null
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:697)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:688)
at com.github.ldaniels528.commons.helpers.ResourceHelper$AutoClose$.use$extension(ResourceHelper.scala:16)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata(KafkaMicroConsumer.scala:688)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:583)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$getTopicList$1.apply(KafkaMicroConsumer.scala:582)
at scala.Option.map(Option.scala:146)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$.getTopicList(KafkaMicroConsumer.scala:582)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaPlayRestFacade.getTopicSummaries(KafkaPlayRestFacade.scala:486)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaController$$anonfun$getTopicSummaries$1$$anonfun$apply$35.apply(KafkaController.scala:113)
at com.github.ldaniels528.trifecta.ui.controllers.KafkaController$$anonfun$getTopicSummaries$1$$anonfun$apply$35.apply(KafkaController.scala:113)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:111)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$45.apply(KafkaMicroConsumer.scala:691)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1$$anonfun$45.apply(KafkaMicroConsumer.scala:692)
at scala.util.Try$.apply(Try.scala:192)
at com.github.ldaniels528.trifecta.io.kafka.KafkaMicroConsumer$$anonfun$com$github$ldaniels528$trifecta$io$kafka$KafkaMicroConsumer$$getTopicMetadata$1.apply(KafkaMicroConsumer.scala:689)
... 18 more
[info] application - GET /api/topics/details => 500 [42300.4 ms]
[info] application - GET /api/zookeeper/path?path=/ => 200 [237.1 ms]
[info] application - GET /api/zookeeper/info?path=/ => 200 [692.8 ms]
[info] application - GET /api/zookeeper/path?path=/ => 200 [242.3 ms]
[info] application - GET /api/zookeeper/info?path=/ => 200 [692.6 ms]

Support for partition and offset in KQL queries

Trifecta v0.22.0rc8b (0.9.0.1)

It would be nice to be able to query by partition and offset (the auto-generated fields added to each query row) in KQL. As I understand it, if you specify these fields in a query today, it searches for fields with those names in the message itself.
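For illustration, the requested capability might look something like this (hypothetical syntax; __partition and __offset are placeholder names for the auto-generated columns and are not supported in KQL today):

    select __partition, __offset, field1 from "topic-name" with default
    where __partition == 0 and __offset >= 1000
    limit 10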

unable to inspect brokers

What's the correct config.properties setting to connect to the brokers? It seems that even though the data is in the broker id nodes, it doesn't show up in the UI (see the note after the broker JSON below).

zk:/> kbrokers
Runtime error: KeeperErrorCode = NoNode for /brokers/ids

zk:/> ztree /kafka/brokers/ids

    + ----------------------------------------------- +
    | path                      creationTime          |
    + ----------------------------------------------- +
    | /kafka/brokers/ids        01/18/15 08:52:53 CST |
    | /kafka/brokers/ids/642    02/17/15 05:39:43 CST |
    | /kafka/brokers/ids/643    02/17/15 05:39:43 CST |
    | /kafka/brokers/ids/641    02/17/15 05:39:44 CST |
    + ----------------------------------------------- +

    zk:/>

{
"jmx_port": -1,
"timestamp": "1424216383325",
"host": "hostname.of.real.host",
"version": 1,
"port": 9092
}
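Given that the broker registrations live under /kafka/brokers/ids rather than /brokers/ids, a likely fix (an assumption based on the chrooted path shown above) is to point Trifecta at the /kafka prefix in config.properties:

    # tell Trifecta that Kafka's ZooKeeper data lives under the /kafka chroot
    trifecta.zookeeper.kafka.root.path = /kafka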

UI: show latest available message in a Kafka topic

It would be nice to show the latest available message in a Kafka topic by default when you click on a topic under the "Inspect" tab. Think: "tail -1 <topic>". Right now I think it picks the latest offset of one of the partitions (at random? always the first partition?), but that message might not be the latest one.

I don't know the internals of trifecta well enough to comment on how easy or difficult the implementation of this feature would be. Right now, we are using different tools to "tail -F" a Kafka topic (= continuous tail), and it would be nice to at least have "tail -1" functionality (combined with refreshing/rerunning that "tail -1" in the UI) to achieve a similar effect.

[kafka-avro]no content to map due to end-of-input at source line 1 column 1

Hi,
I'm getting the below error on the Trifecta UI:
no content to map due to end-of-input at source line 1 column 1

What I've done:
1> Pasted the value.avsc in the path -> .trifecta/decoders/value (where "value" is my topic)
2> Started the Trifecta UI
3> When I navigated to the "Observe" tab, I see an error (just a red flash message) telling me: no content to map due to end-of-input at source line 1 column 1

4> Then I clicked on the topic "value" and see that my entire message is displayed as "null" (before pasting the value.avsc, I could at least see the entire message in a messy format)

Please let me know if I'm missing anything?

Also, I do not see a way to query the topic by using the "key". I have both a key and a value in my "value" topic, and I have separate schemas for each of them. How can I place both schemas in the decoder subfolder?

Thanks!
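For reference, a sketch of the decoder layout described in the steps above (assuming one directory per topic under $HOME/.trifecta/decoders/; file names are illustrative):

    $ mkdir -p ~/.trifecta/decoders/value          # one subdirectory per topic
    $ cp value.avsc ~/.trifecta/decoders/value/    # Avro schema used to decode that topic's messages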

Inspect and Decoders tab sort topics differently

The "Inspect" tab sorts topic names lexicographically (lowercase before uppercase, e.g. "a" before "b" before "A" before "B"). The "Decoders" tab sorts differently, putting uppercase before lowercase ("A" before "B" before "a" before "b").

I think the two listings should be consistent, and personally I'd prefer sorting lowercase before uppercase, i.e. the current behavior of the "Inspect" tab.
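For reference, default string ordering (ASCII) puts uppercase letters before lowercase, which matches the Decoders tab, while a case-insensitive comparison is one common alternative. A minimal Scala sketch of the two orderings (illustrative only, not Trifecta's actual code):

    val topics = Seq("a", "B", "A", "b")
    topics.sorted                    // Seq(A, B, a, b)  -- ASCII order, uppercase first
    topics.sortBy(_.toLowerCase)     // Seq(a, A, B, b)  -- case-insensitive order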
