kafdrop's Introduction

Kafdrop – Kafka Web UI

Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. The tool displays information such as brokers, topics, partitions, consumers, and lets you view messages.

Overview Screenshot

This project is a reboot of Kafdrop 2.x, dragged kicking and screaming into the world of Java 17+, Kafka 2.x, Helm and Kubernetes. It's a lightweight application that runs on Spring Boot and is dead-easy to configure, supporting SASL and TLS-secured brokers.

Features

  • View Kafka brokers — topic and partition assignments, and controller status
  • View topics — partition count, replication status, and custom configuration
  • Browse messages — JSON, plain text, Avro and Protobuf encoding
  • View consumer groups — per-partition parked offsets, combined and per-partition lag
  • Create new topics
  • View ACLs
  • Support for Azure Event Hubs

Requirements

  • Java 17 or newer
  • Kafka (version 0.11.0 or newer) or Azure Event Hubs

Optional, additional integration:

  • Schema Registry

Getting Started

You can run the Kafdrop JAR directly, via Docker, or in Kubernetes.

Running from JAR

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=<host:port,host:port>,...

If unspecified, kafka.brokerConnect defaults to localhost:9092.

Note: As of Kafdrop 3.10.0, a ZooKeeper connection is no longer required. All necessary cluster information is retrieved via the Kafka admin API.

Open a browser and navigate to http://localhost:9000. The port can be overridden by adding the following config:

--server.port=<port> --management.server.port=<port>
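
For example, to serve both the UI and the actuator endpoints on port 9090 (an arbitrary choice of port):

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=localhost:9092 \
    --server.port=9090 --management.server.port=9090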

Optionally, configure a schema registry connection with:

--schemaregistry.connect=http://localhost:8081

If your schema registry connection also requires basic auth, add:

--schemaregistry.auth=username:password

Finally, a default message and key format (e.g. to deserialize Avro messages or keys) can optionally be configured as follows:

--message.format=AVRO
--message.keyFormat=DEFAULT

Valid format values are DEFAULT, AVRO and PROTOBUF. The format can also be selected per topic via a dropdown when viewing messages. If the key format is unspecified, the message format is used for the key as well.

Configure Protobuf message type

Option 1: Using Protobuf Descriptor

For Protobuf-encoded messages, the message definition can be compiled into a descriptor file. For Kafdrop to decode such messages, it needs access to the descriptor file(s); at runtime, the user can then select a descriptor and specify the name of one of the message types it provides.

To point Kafdrop at a folder containing Protobuf descriptor file(s) (.desc), use:

--protobufdesc.directory=/var/protobuf_desc
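
Descriptor files can be generated from your .proto sources with protoc; a minimal sketch, assuming a hypothetical message.proto:

protoc --include_imports \
    --descriptor_set_out=/var/protobuf_desc/message.desc \
    message.proto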

Option 2: Using Schema Registry

If no Protobuf descriptor file is supplied, Kafdrop will attempt to create the Protobuf deserializer using the schema registry instead.

Defaulting to Protobuf

If preferred, Protobuf can be made the default message format:

--message.format=PROTOBUF

Running with Docker

Images are hosted at hub.docker.com/r/obsidiandynamics/kafdrop.

Launch container in background:

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    obsidiandynamics/kafdrop

Launch container with some specific JVM options:

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e JVM_OPTS="-Xms32M -Xmx64M" \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    obsidiandynamics/kafdrop

Launch container in background with Protobuf definitions:

docker run -d --rm -v <path_to_protobuf_descriptor_files>:/var/protobuf_desc -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    -e CMD_ARGS="--message.format=PROTOBUF --protobufdesc.directory=/var/protobuf_desc" \
    obsidiandynamics/kafdrop

Then access the web UI at http://localhost:9000.

Hey there! We hope you really like Kafdrop! Please take a moment to star the repo or tweet about it.

Running in Kubernetes (using a Helm Chart)

Clone the repository (if necessary):

git clone https://github.com/obsidiandynamics/kafdrop && cd kafdrop

Apply the chart (the cmdArgs setting below is optional):

helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set server.servlet.contextPath="/" \
    --set cmdArgs="--message.format=AVRO --schemaregistry.connect=http://localhost:8080" \
    --set jvm.opts="-Xms32M -Xmx64M"

For all Helm configuration options, have a peek into chart/values.yaml.

Replace 3.x.x with the image tag of obsidiandynamics/kafdrop. Services will be bound on port 9000 by default (node port 30900).

Note: The context path must begin with a slash.

Proxy to the Kubernetes cluster:

kubectl proxy

Navigate to http://localhost:8001/api/v1/namespaces/default/services/http:kafdrop:9000/proxy.
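
Alternatively, kubectl port-forward can expose the service directly, bypassing the API-server proxy (assuming the service is named kafdrop in the default namespace):

kubectl port-forward svc/kafdrop 9000:9000

The UI is then available at http://localhost:9000.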

Protobuf support via the Helm chart:

To install with Protobuf support, the chart provides a deployment option, mountProtoDesc, that mounts the folder of descriptor files and passes the required CMD arguments. Example:

helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set server.servlet.contextPath="/" \
    --set mountProtoDesc.enabled=true \
    --set mountProtoDesc.hostPath="<path/to/desc/folder>" \
    --set jvm.opts="-Xms32M -Xmx64M"

Building

After cloning the repository, building is just a matter of running a standard Maven build:

$ mvn clean package

The following command will generate a Docker image:

mvn assembly:single docker:build
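
Once built, the image can be run like the published one; a sketch, assuming the build tags the image as obsidiandynamics/kafdrop with the project version:

docker run -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    obsidiandynamics/kafdrop:<version>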

Docker Compose

There is a docker-compose.yaml file that bundles a Kafka/ZooKeeper instance with Kafdrop:

cd docker-compose/kafka-kafdrop
docker-compose up
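
Alternatively, a Kafdrop service can be added to an existing Compose file alongside your broker. A minimal sketch, assuming the broker service is named kafka and exposes an internal listener on port 29092 (both assumptions):

services:
  kafdrop:
    image: obsidiandynamics/kafdrop
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka:29092   # assumed internal listener of your broker service
    depends_on:
      - kafka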

APIs

JSON endpoints

Starting with version 2.0.0, Kafdrop offers a set of Kafka APIs that mirror the existing HTML views. Any existing endpoint can be returned as JSON by simply setting the Accept: application/json header. Some endpoints are JSON only:

  • /topic: Returns a list of all topics.
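
For example, to fetch the topic list as JSON (assuming Kafdrop is running on localhost:9000):

curl -H "Accept: application/json" http://localhost:9000/topic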

OpenAPI Specification (OAS)

To help document the Kafka APIs, OpenAPI Specification (OAS) has been included. The OpenAPI Specification output is available by default at the following Kafdrop URL:

/v3/api-docs

It is also possible to access the Swagger UI (the HTML views) from the following URL:

/swagger-ui.html

This can be overridden with the following configuration:

springdoc.api-docs.path=/new/oas/path

You can disable OpenAPI Specification output with the following configuration:

springdoc.api-docs.enabled=false
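
For example, to pull the generated spec from a locally running instance (assuming the default path and port):

curl http://localhost:9000/v3/api-docs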

CORS Headers

Starting in version 2.0.0, Kafdrop sets CORS headers for all endpoints. You can control the CORS header values with the following configurations:

cors.allowOrigins (default is *)
cors.allowMethods (default is GET,POST,PUT,DELETE)
cors.maxAge (default is 3600)
cors.allowCredentials (default is true)
cors.allowHeaders (default is Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization)

You can also disable CORS entirely with the following configuration:

cors.enabled=false
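
These settings are ordinary application properties, so under Docker they can be passed via CMD_ARGS; a sketch, assuming you want to restrict origins to a hypothetical https://example.com:

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e CMD_ARGS="--cors.allowOrigins=https://example.com" \
    obsidiandynamics/kafdrop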

Topic Configuration

Topic deletion is enabled by default. If you don't want this feature, disable it with:

--topic.deleteEnabled=false

Topic creation is enabled by default. If you don't want this feature, disable it with:

--topic.createEnabled=false
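
The two flags can be combined to run an effectively read-only instance, e.g.:

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=<host:port,host:port> \
    --topic.deleteEnabled=false --topic.createEnabled=false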

Actuator

Health and info endpoints are available at the following path: /actuator

This can be overridden with the following configuration:

management.endpoints.web.base-path=<path>
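
For example, to check liveness against a local instance (assuming the default base path and port):

curl http://localhost:9000/actuator/health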

Guides

Connecting to a Secure Broker

Kafdrop supports TLS (SSL) and SASL connections for encryption and authentication. This can be configured by providing a combination of the following files (placed into the Kafka root directory):

  • kafka.truststore.jks: specifying the certificate for authenticating brokers, if TLS is enabled.
  • kafka.keystore.jks: specifying the private key to authenticate the client to the broker, if mutual TLS authentication is required.
  • kafka.properties: specifying the necessary configuration, including key/truststore passwords, cipher suites, enabled TLS protocol versions, username/password pairs, etc. When supplying the truststore and/or keystore files, the ssl.truststore.location and ssl.keystore.location properties will be assigned automatically; a sketch appears below.
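
For illustration, a kafka.properties for a mutual-TLS setup might look like the following. The property names are standard Kafka client settings; the passwords are placeholders, and the store locations are assigned automatically as noted above:

security.protocol=SSL
ssl.truststore.password=<truststore-password>
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>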

Using Docker

The three files above can be supplied to a Docker instance in base-64-encoded form via environment variables (the truststore and keystore variables are optional):

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES="$(cat kafka.properties | base64)" \
    -e KAFKA_TRUSTSTORE="$(cat kafka.truststore.jks | base64)" \
    -e KAFKA_KEYSTORE="$(cat kafka.keystore.jks | base64)" \
    obsidiandynamics/kafdrop

Rather than passing KAFKA_PROPERTIES as a base64-encoded string, you can also mount a pre-populated properties file into the container and point KAFKA_PROPERTIES_FILE at it (the truststore and keystore mounts and variables are optional):

cat << EOF > kafka.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="foo" password="bar"
EOF

docker run -d --rm -p 9000:9000 \
    -v $(pwd)/kafka.properties:/tmp/kafka.properties:ro \
    -v $(pwd)/kafka.truststore.jks:/tmp/kafka.truststore.jks:ro \
    -v $(pwd)/kafka.keystore.jks:/tmp/kafka.keystore.jks:ro \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES_FILE=/tmp/kafka.properties \
    -e KAFKA_TRUSTSTORE_FILE=/tmp/kafka.truststore.jks \
    -e KAFKA_KEYSTORE_FILE=/tmp/kafka.keystore.jks \
    obsidiandynamics/kafdrop

Environment Variables

Basic configuration:

  • KAFKA_BROKERCONNECT: Bootstrap list of Kafka host/port pairs. Defaults to localhost:9092.
  • KAFKA_PROPERTIES: Additional properties to configure the broker connection (base-64 encoded).
  • KAFKA_TRUSTSTORE: Certificate for broker authentication (base-64 encoded). Required for TLS/SSL.
  • KAFKA_KEYSTORE: Private key for mutual TLS authentication (base-64 encoded).
  • SERVER_SERVLET_CONTEXTPATH: The context path to serve requests on (must end with a /). Defaults to /.
  • SERVER_PORT: The web server port to listen on. Defaults to 9000.
  • MANAGEMENT_SERVER_PORT: The Spring Actuator server port to listen on. Defaults to 9000.
  • SCHEMAREGISTRY_CONNECT: The endpoint of the Schema Registry for Avro or Protobuf messages.
  • SCHEMAREGISTRY_AUTH: Optional basic auth credentials in the form username:password.
  • CMD_ARGS: Command line arguments to Kafdrop, e.g. --message.format, --protobufdesc.directory or --server.port.

Advanced configuration:

  • JVM_OPTS: JVM options, e.g. JVM_OPTS="-Xms16M -Xmx64M -Xss360K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify".
  • JMX_PORT: Port to use for JMX. No default; if unspecified, JMX will not be exposed.
  • HOST: The hostname to report for the RMI registry (used for JMX). Defaults to localhost.
  • KAFKA_PROPERTIES_FILE: Internal location where the Kafka properties file will be written (if KAFKA_PROPERTIES is set). Defaults to kafka.properties.
  • KAFKA_TRUSTSTORE_FILE: Internal location where the truststore file will be written (if KAFKA_TRUSTSTORE is set). Defaults to kafka.truststore.jks.
  • KAFKA_KEYSTORE_FILE: Internal location where the keystore file will be written (if KAFKA_KEYSTORE is set). Defaults to kafka.keystore.jks.
  • SSL_ENABLED: Enables HTTPS (SSL) for the Kafdrop server. Defaults to false.
  • SSL_KEY_STORE_TYPE: Type of SSL keystore. Defaults to PKCS12.
  • SSL_KEY_STORE: Path to the keystore file.
  • SSL_KEY_STORE_PASSWORD: Keystore password.
  • SSL_KEY_ALIAS: Key alias.

Using Helm

Like in the Docker example, supply the files in base-64 form:

helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set kafka.properties="$(cat kafka.properties | base64)" \
    --set kafka.truststore="$(cat kafka.truststore.jks | base64)" \
    --set kafka.keystore="$(cat kafka.keystore.jks | base64)"

Updating the Bootstrap theme

Edit the .scss files in the theme directory, then run theme/install.sh. This will overwrite src/main/resources/static/css/bootstrap.min.css. Then build as usual. (Requires npm.)

Securing the Kafdrop UI

Kafdrop doesn't (yet) natively implement an authentication mechanism to restrict user access. Here's a quick workaround using NGINX with Basic Auth. The instructions below are for macOS and Homebrew.

Requirements

  • NGINX: install using which nginx > /dev/null || brew install nginx
  • Apache HTTP utilities: which htpasswd > /dev/null || brew install httpd

Setup

Set the admin password (you will be prompted):

htpasswd -c /usr/local/etc/nginx/.htpasswd admin

Add a logout page in /usr/local/opt/nginx/html/401.html:

<!DOCTYPE html>
<p>Not authorized. <a href="<!--# echo var="scheme" -->://<!--# echo var="http_host" -->/">Login</a>.</p>

Use the following snippet for /usr/local/etc/nginx/nginx.conf:

worker_processes 4;
  
events {
  worker_connections 1024;
}

http {
  upstream kafdrop {
    server 127.0.0.1:9000;
    keepalive 64;
  }

  server {
    listen *:8080;
    server_name _;
    access_log /usr/local/var/log/nginx/nginx.access.log;
    error_log /usr/local/var/log/nginx/nginx.error.log;
    auth_basic "Restricted Area";
    auth_basic_user_file /usr/local/etc/nginx/.htpasswd;

    location / {
      proxy_pass http://kafdrop;
    }

    location /logout {
      return 401;
    }

    error_page 401 /errors/401.html;

    location /errors {
      auth_basic off;
      ssi        on;
      alias /usr/local/opt/nginx/html;
    }
  }
}

Run NGINX:

nginx

Or reload its configuration if already running:

nginx -s reload

To logout, browse to /logout.
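
To verify the proxy and credentials from the command line (assuming the admin user created earlier):

curl -i http://localhost:8080/                      # expect HTTP 401
curl -i -u admin:<password> http://localhost:8080/  # expect HTTP 200 from Kafdrop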

Hey there! We hope you really like Kafdrop! Please take a moment to star the repo or tweet about it.

Contributing Guidelines

See here.

Release workflow

To cut an official release, these are the steps:

  1. Commit a new version on master with the -SNAPSHOT suffix stripped (see pom.xml). Once the commit is merged, the CI will treat it as a release build and publish more artifacts than a regular (non-release/snapshot) build. One of those is a Docker Hub push to both the specific version and the "latest" tags. (A regular build doesn't update "latest".)

  2. You can then edit the release description in GitHub to describe what went into the release.

  3. After the release goes through successfully, prepare the repo for the next version by committing the next snapshot version on master: increment the minor version and re-add the -SNAPSHOT suffix.

kafdrop's People

Contributors

adefossez-zenika, adityajalkhare, andytson, bert-r, chetanmeh, conorr, davideicardi, dependabot[bot], dhayha, dkirrane, ekoutanov, fape, gebinic, janderssonse, jcoquerygithub, jonathan-mothership, kchibui, lcary, mcs, mehdihasan, nicklester, nsteps, randr97, satendra-sahu, shaaimin, simsibimsiwimsi, stijnvanbever, tunctaylan, vojkog, zihengcat

kafdrop's Issues

Running behind a reverse proxy

I get the below error when I run behind a reverse proxy at /kafdrop (Istio in my case).

I tried setting servlet.contextPath: /kafdrop in values.yml, but I get:

2019-11-08 17:01:04.349  INFO 15 [  XNIO-1 task-1] o.s.w.s.FrameworkServlet                 : Completed initialization in 8 ms
errorAtts: {timestamp=Fri Nov 08 17:01:27 GMT 2019, status=404, error=Not Found, message=Not Found, path=/kafdrop}
17:01:27/0     ERROR [XNIO-1 task-11]: Error executing FreeMarker template
FreeMarker template error:
The following has evaluated to null or missing:
==> error.trace  [in template "error.ftl" at line 10, column 3]

----
Tip: It's the step after the last dot that caused this error, not those before it.
----
Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)??
----

----
FTL stack trace ("~" means nesting-related):
	- Failed at: ${error.trace}  [in template "error.ftl" at line 10, column 1]
----

Java stack trace (for programmers):
----
freemarker.core.InvalidReferenceException: [... Exception message was already printed; see it above ...]
	at freemarker.core.InvalidReferenceException.getInstance(InvalidReferenceException.java:134)
	at freemarker.core.EvalUtil.coerceModelToTextualCommon(EvalUtil.java:467)
	at freemarker.core.EvalUtil.coerceModelToStringOrMarkup(EvalUtil.java:389)
	at freemarker.core.EvalUtil.coerceModelToStringOrMarkup(EvalUtil.java:358)
	at freemarker.core.DollarVariable.calculateInterpolatedStringOrMarkup(DollarVariable.java:100)
	at freemarker.core.DollarVariable.accept(DollarVariable.java:63)
	at freemarker.core.Environment.visit(Environment.java:330)
	at freemarker.core.Environment.visit(Environment.java:336)
	at freemarker.core.Environment.process(Environment.java:309)
	at freemarker.template.Template.process(Template.java:384)
	at org.springframework.web.servlet.view.freemarker.FreeMarkerView.processTemplate(FreeMarkerView.java:389)
	at org.springframework.web.servlet.view.freemarker.FreeMarkerView.doRender(FreeMarkerView.java:302)
	at org.springframework.web.servlet.view.freemarker.FreeMarkerView.renderMergedTemplateModel(FreeMarkerView.java:253)
	at org.springframework.web.servlet.view.AbstractTemplateView.renderMergedOutputModel(AbstractTemplateView.java:178)
	at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:316)
	at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1371)
	at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1117)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1056)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:645)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
	at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
	at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:81)
	at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
	at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
	at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
	at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:251)
	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:186)
	at io.undertow.servlet.spec.RequestDispatcherImpl.error(RequestDispatcherImpl.java:501)
	at io.undertow.servlet.spec.RequestDispatcherImpl.error(RequestDispatcherImpl.java:419)
	at io.undertow.servlet.spec.HttpServletResponseImpl.doErrorDispatch(HttpServletResponseImpl.java:196)
	at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:276)
	at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:78)
	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:133)
	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:130)
	at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
	at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:249)
	at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:78)
	at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:99)
	at io.undertow.server.Connectors.executeRootHandler(Connectors.java:376)
	at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:835)
2019-11-08 17:01:27.631 ERROR 15 [ XNIO-1 task-11] i.u.s.a.LoggingExceptionHandler          : UT005023: Exception handling request to /error

java.lang.RuntimeException: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is freemarker.core.InvalidReferenceException: The following has evaluated to null or missing:
==> error.trace  [in template "error.ftl" at line 10, column 3]

----
Tip: It's the step after the last dot that caused this error, not those before it.
----
Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)??
----

----
FTL stack trace ("~" means nesting-related):
	- Failed at: ${error.trace}  [in template "error.ftl" at line 10, column 1]
----
	at io.undertow.servlet.spec.HttpServletResponseImpl.doErrorDispatch(HttpServletResponseImpl.java:198)
	at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:276)
	at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:78)
	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:133)
	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:130)
	at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
	at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:249)
	at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:78)
	at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:99)
	at io.undertow.server.Connectors.executeRootHandler(Connectors.java:376)
	at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is freemarker.core.InvalidReferenceException: The following has evaluated to null or missing:
==> error.trace  [in template "error.ftl" at line 10, column 3]

----
Tip: It's the step after the last dot that caused this error, not those before it.
----
Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)??
----

----
FTL stack trace ("~" means nesting-related):
	- Failed at: ${error.trace}  [in template "error.ftl" at line 10, column 1]
----
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1013)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:645)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
	at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
	at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:81)
	at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
	at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
	at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
	at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:251)
	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:186)
	at io.undertow.servlet.spec.RequestDispatcherImpl.error(RequestDispatcherImpl.java:501)
	at io.undertow.servlet.spec.RequestDispatcherImpl.error(RequestDispatcherImpl.java:419)
	at io.undertow.servlet.spec.HttpServletResponseImpl.doErrorDispatch(HttpServletResponseImpl.java:196)
	... 14 more
Caused by: freemarker.core.InvalidReferenceException: The following has evaluated to null or missing:
==> error.trace  [in template "error.ftl" at line 10, column 3]

----
Tip: It's the step after the last dot that caused this error, not those before it.
----
Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)??
----

----
FTL stack trace ("~" means nesting-related):
	- Failed at: ${error.trace}  [in template "error.ftl" at line 10, column 1]
----
	at freemarker.core.InvalidReferenceException.getInstance(InvalidReferenceException.java:134)
	at freemarker.core.EvalUtil.coerceModelToTextualCommon(EvalUtil.java:467)
	at freemarker.core.EvalUtil.coerceModelToStringOrMarkup(EvalUtil.java:389)
	at freemarker.core.EvalUtil.coerceModelToStringOrMarkup(EvalUtil.java:358)
	at freemarker.core.DollarVariable.calculateInterpolatedStringOrMarkup(DollarVariable.java:100)
	at freemarker.core.DollarVariable.accept(DollarVariable.java:63)
	at freemarker.core.Environment.visit(Environment.java:330)
	at freemarker.core.Environment.visit(Environment.java:336)
	at freemarker.core.Environment.process(Environment.java:309)
	at freemarker.template.Template.process(Template.java:384)
	at org.springframework.web.servlet.view.freemarker.FreeMarkerView.processTemplate(FreeMarkerView.java:389)
	at org.springframework.web.servlet.view.freemarker.FreeMarkerView.doRender(FreeMarkerView.java:302)
	at org.springframework.web.servlet.view.freemarker.FreeMarkerView.renderMergedTemplateModel(FreeMarkerView.java:253)
	at org.springframework.web.servlet.view.AbstractTemplateView.renderMergedOutputModel(AbstractTemplateView.java:178)
	at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:316)
	at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1371)
	at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1117)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1056)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
	... 31 more

Trouble Accessing the dashboard

The Kafdrop Host:9090/ always gives me a Whitelabel error page.

  • logs:
2019-08-04 21:44:52.125  WARN 6 [  XNIO-1 task-3] k.c.NetworkClient$DefaultMetadataUpdater : [Consumer clientId=kafdrop-client, groupId=kafdrop-consumer-group] 50 partitions have leader brokers without a matching listener, including [__consumer_offsets-0, __consumer_offsets-10, __consumer_offsets-20, __consumer_offsets-40, __consumer_offsets-30, __consumer_offsets-39, __consumer_offsets-9, __consumer_offsets-11, __consumer_offsets-31, __consumer_offsets-13]
2019-08-04 21:44:52.228  WARN 6 [  XNIO-1 task-3] k.c.NetworkClient$DefaultMetadataUpdater : [Consumer clientId=kafdrop-client, groupId=kafdrop-consumer-group] 50 partitions have leader brokers without a matching listener, including [__consumer_offsets-0, __consumer_offsets-10, __consumer_offsets-20, __consumer_offsets-40, __consumer_offsets-30, __consumer_offsets-39, __consumer_offsets-9, __consumer_offsets-11, __consumer_offsets-31, __consumer_offsets-13]
2019-08-04 21:44:52.331  WARN 6 [  XNIO-1 task-3] k.c.NetworkClient$DefaultMetadataUpdater : [Consumer clientId=kafdrop-client, groupId=kafdrop-consumer-group] 50 partitions have leader brokers without a matching listener, including [__consumer_offsets-0, __consumer_offsets-10, __consumer_offsets-20, __consumer_offsets-40, __consumer_offsets-30, __consumer_offsets-39, __consumer_offsets-9, __consumer_offsets-11, __consumer_offsets-31, __consumer_offsets-13]
2019-08-04 21:44:52.433  WARN 6 [  XNIO-1 task-3] k.c.NetworkClient$DefaultMetadataUpdater : [Consumer clientId=kafdrop-client, groupId=kafdrop-consumer-group] 50 partitions have leader brokers without a matching listener, including [__consumer_offsets-0, __consumer_offsets-10, __consumer_offsets-20, __consumer_offsets-40, __consumer_offsets-30, __consumer_offsets-39, __consumer_offsets-9, __consumer_offsets-11, __consumer_offsets-31, __consumer_offsets-13]
2019-08-04 21:44:52.537  WARN 6 [  XNIO-1 task-3] k.c.NetworkClient$DefaultMetadataUpdater : [Consumer clientId=kafdrop-client, groupId=kafdrop-consumer-group] 50 partitions have leader brokers without a matching listener, including [__consumer_offsets-0, __consumer_offsets-10, __consumer_offsets-20, __consumer_offsets-40, __consumer_offsets-30, __consumer_offsets-39, __consumer_offsets-9, __consumer_offsets-11, __consumer_offsets-31, __consumer_offsets-13]
2019-08-04 21:44:52.639  WARN 6 [  XNIO-1 task-3] k.c.NetworkClient$DefaultMetadataUpdater : [Consumer clientId=kafdrop-client, groupId=kafdrop-consumer-group] 50 partitions have leader brokers without a matching listener, including [__consumer_offsets-0, __consumer_offsets-10, __consumer_offsets-20, __consumer_offsets-40, __consumer_offsets-30, __consumer_offsets-39, __consumer_offsets-9, __consumer_offsets-11, __consumer_offsets-31, __consumer_offsets-13]

I tried clearing my log.dir and restarting both Kafka and ZooKeeper.
I have successfully configured the networking part of the cluster with the Kafdrop instance.
These logs are not changing.
When I reset the log.dir directory on all my Kafka nodes, the Kafdrop logs change to:
remove broker 0
add broker 0
and the same for brokers 1 and 2.

Need broker address when starting Kafdrop via JAR?

Starting Kafdrop failed, and it indicated that the broker address can't be connected to.

2019-08-25 21:52:51.946 INFO 421 [ main] o.a.k.c.c.AbstractConfig : AdminClientConfig values:
bootstrap.servers = [localhost:9092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS

2019-08-25 21:52:51.965 INFO 421 [ main] o.a.k.c.u.AppInfoParser$AppInfo : Kafka version: 2.2.1
2019-08-25 21:52:51.965 INFO 421 [ main] o.a.k.c.u.AppInfoParser$AppInfo : Kafka commitId: 55783d3133a5a49a
2019-08-25 21:52:51.987 INFO 421 [ main] k.s.BuildInfo : Kafdrop version: 3.9.0-SNAPSHOT, build time: 2019-08-24T18:07:26.130Z
2019-08-25 21:52:51.994 WARN 421 [| adminclient-1] o.a.k.c.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2019-08-25 21:52:52.021 INFO 421 [ChildrenCache-1] k.s.CuratorKafkaMonitor : Topic confi

Fail to display messages on topics with transaction markers

Hi
I'm testing Kafdrop for the first time against Kafka 2.2. Everything seems to work, and I can easily display messages on topics that have not been written with Kafka transactions.
But on some of our topics, all messages are written transactionally (normally 2 messages on 2 different topics), so those topics contain the Kafka transaction markers described here: https://docs.google.com/document/d/11Jqy_GjUGtdXJK94XGsEIK7CP1SnQGdp2eF0wSw9ra8/edit#heading=h.mylukj7bg1rf
It seems to me that Kafdrop then fails to display ANY messages on these topics; at least the View Message screen remains empty. I can't see any error message in the log, though.
Can you tell me if this is just not currently supported (I might then have a look into it and try to come up with a PR)? Or am I doing something wrong?
Thanks, Joe

Remove dependency on ZooKeeper

By and large, the information we currently obtain from ZooKeeper can be had from the Kafka Admin API, sans a few attributes that are of little relevance (e.g. broker start time). There shouldn't be a compelling reason left to use ZK. Furthermore, some managed Kafka services disallow direct access to ZK for security reasons.

Once removed, also remove the ZooKeeper and Curator libraries.

Deprecate the zookeeper.connect property, printing a warning when it is used. (Eventually to be removed altogether.)

Unable to connect to local kafka broker

I have zookeeper and kafka running locally.
When I tried to start up Kafdrop and connect to it, it failed with the following message.

[AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

This is my docker-compose

  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - monitor-net
  kafka:
    build: .
    links:
      - zookeeper
    ports:
      - "9092:9092"
      - "7071:7071"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OPTS: -javaagent:/usr/app/jmx_prometheus_javaagent.jar=7071:/usr/app/prom-jmx-agent-config.yml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - monitor-net
  kafdrop:
    image: obsidiandynamics/kafdrop
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: localhost:9092
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication"
    depends_on:
      - kafka
    networks:
      - monitor-net

I am able to connect to the broker, fetch the current list of topics, and create a new topic using the scripts from Apache:

>bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
>bin/kafka-topics.sh --list --bootstrap-server localhost:9092

What am I missing?

Different host address for the Web UI.

Hi!

I've installed Docker container with Kafdrop on my remote Apache Kafka server.
How can I bind the Web UI to a different address other than 'localhost', say, the public IP of the server?
The following script is used to start Kafdrop as Docker container:

docker run -d --rm -p 9000:9000 \
    -e ZOOKEEPER_CONNECT=host:port,host:port \
    -e KAFKA_BROKERCONNECT=host:port,host:port \
    -e JVM_OPTS="-Xms32M -Xmx64M" \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    obsidiandynamics/kafdrop:latest

Thanks!

Docker container doesn't have an ENV to set the schema registry URL

docker registry description of available env variables

2019-09-07 11:19:43.034 ERROR 6 [ XNIO-1 task-10] i.u.s.a.LoggingExceptionHandler : UT005023: Exception handling request to /topic/trip/messages

org.springframework.web.util.NestedServletException: Request processing failed; nested exception is io.confluent.common.config.ConfigException: Invalid value null for configuration schema.registry.url: Expected a comma separated list.

JSON support for consumer data

Hello. So just want to say that we love this software. Very useful.

We just wanted to have consumer data in JSON.

When you make a call to /topic/ you get JSON, but it does not include the consumer group data table that you see in the HTML view:

Consumers

Group ID Combined Lag
**** 155632

Could this data be added to the JSON request for topic/?

Also, could the /consumer/ endpoint support JSON too? When sending Accept: application/json it still returns HTML rather than JSON.

Thanks

Cannot get SCRAM-SHA-256 authentication running

Hi,
I am using the latest Kafdrop Docker image (Kafdrop 3.20.0), and I have a Kafka cluster configured to use SCRAM-SHA-256 without SSL. For Kafdrop I created the following kafka.properties file:

security.protocol=SASL_PLAINTEXT
sasl.method=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required
username="kafdrop"
password="super-secret-password";

When I try to run the container, it complains about a missing serviceName. But as far as I understand, a serviceName is only needed for Kerberos.

Thanks, Roland

Startup logs:

Writing Kafka properties into kafka.properties
2020-01-20 15:35:42.719  INFO ${sys:PID} [           main] k.Kafdrop$EnvironmentSetupListener       : Initializing JAAS config
2020-01-20 15:35:42.731  INFO ${sys:PID} [           main] k.Kafdrop$EnvironmentSetupListener       : env: null .isSecured kafka: false
2020-01-20 15:35:42.731  INFO ${sys:PID} [           main] k.Kafdrop$EnvironmentSetupListener       : Env: null
2020-01-20 15:35:42.979  INFO 14 [           main] o.s.b.StartupInfoLogger                  : Starting application on 94a6e54d533b with PID 14 (started by root in /)
2020-01-20 15:35:42.981  INFO 14 [           main] o.s.b.SpringApplication                  : No active profile set, falling back to default profiles: default
2020-01-20 15:35:45.019  INFO 14 [           main] i.u.s.s.ServletContextImpl               : Initializing Spring embedded WebApplicationContext
2020-01-20 15:35:45.019  INFO 14 [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1993 ms
2020-01-20 15:35:45.628  INFO 14 [           main] k.c.KafkaConfiguration                   : Checking truststore file kafka.truststore.jks
2020-01-20 15:35:45.628  INFO 14 [           main] k.c.KafkaConfiguration                   : Checking keystore file kafka.keystore.jks
2020-01-20 15:35:45.628  INFO 14 [           main] k.c.KafkaConfiguration                   : Checking properties file kafka.properties
2020-01-20 15:35:45.629  INFO 14 [           main] k.c.KafkaConfiguration                   : Loading properties from kafka.properties
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.kafka.common.network.SaslChannelBuilder (file:/kafdrop-3.20.0/lib/kafka-clients-2.3.1.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.kafka.common.network.SaslChannelBuilder
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-01-20 15:35:45.710  WARN 14 [           main] o.s.c.s.AbstractApplicationContext       : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'aclController' defined in URL [jar:file:/kafdrop-3.20.0/kafdrop-3.20.0.jar!/BOOT-INF/classes!/kafdrop/controller/AclController.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'kafkaMonitorImpl' defined in URL [jar:file:/kafdrop-3.20.0/kafdrop-3.20.0.jar!/BOOT-INF/classes!/kafdrop/service/KafkaMonitorImpl.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaHighLevelConsumer': Invocation of init method failed; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
2020-01-20 15:35:45.728  INFO 14 [           main] ConditionEvaluationReportLoggingListener :

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-01-20 15:35:45.730 ERROR 14 [           main] o.s.b.SpringApplication                  : Application run failed

org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'aclController' defined in URL [jar:file:/kafdrop-3.20.0/kafdrop-3.20.0.jar!/BOOT-INF/classes!/kafdrop/controller/AclController.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'kafkaMonitorImpl' defined in URL [jar:file:/kafdrop-3.20.0/kafdrop-3.20.0.jar!/BOOT-INF/classes!/kafdrop/service/KafkaMonitorImpl.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaHighLevelConsumer': Invocation of init method failed; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769)
	at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:218)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1341)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1187)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:845)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
	at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:140)
	at kafdrop.Kafdrop.main(Kafdrop.java:53)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:51)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52)
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'kafkaMonitorImpl' defined in URL [jar:file:/kafdrop-3.20.0/kafdrop-3.20.0.jar!/BOOT-INF/classes!/kafdrop/service/KafkaMonitorImpl.class]: Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaHighLevelConsumer': Invocation of init method failed; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769)
	at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:218)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1341)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1187)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1251)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1171)
	at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857)
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760)
	... 26 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaHighLevelConsumer': Invocation of init method failed; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:139)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:414)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1770)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1251)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1171)
	at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857)
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760)
	... 40 more
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:827)
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664)
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:644)
	at kafdrop.service.KafkaHighLevelConsumer.initializeClient(KafkaHighLevelConsumer.java:47)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:363)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:307)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:136)
	... 53 more
Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
	at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:160)
	at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
	at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:67)
	at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:99)
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:741)
	... 63 more
Caused by: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka config
	at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:301)
	at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:92)
	at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:60)
	at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:104)
	at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:149)
	... 67 more

Allow setting the root context path

If I set the ingress path to /kafdrop, the UI doesn't load.

ingress:
  enabled: true
  path: /kafdrop

I believe this is because we'd also need to set the servlet context path for the Spring Boot app to

server.servlet.context-path=/kafdrop

Port selection not working

Even though I specify a port other than 9000, Kafdrop still listens on 9000.

ps ax | grep -i "kafdrop.jar" | egrep -v "grep" | awk '{print $1}'
1199

netstat -plant | grep 9000
tcp6       0      0 :::9000                 :::*                    LISTEN      1199/java

netstat -plant | grep 9010
tcp6       0      0 :::9010                 :::*                    LISTEN      1199/java

List of consumers is incomplete

Hello,

I've noticed that not all consumer groups are listed in Kafdrop. If I look at Kafka Manager for the same topic, I can see all consumers (including temporary unnamed ones).

Adding auto-refresh, statistic about messages, charts, and additional information about consumers

I'm using your project and I found it very user-friendly and simple to configure and run.
Unfortunately, it doesn't quite fit my monitoring requirements.

I believe that with a few improvements this tool could be awesome.

Some enhancements are:

  • auto-refresh at a given interval. This will help the user to monitor the progress;
  • statistic about messages such as the rate for each consumer;
  • some charts to display the trends;
  • additional information about consumers groups.

Avro deserializer looking in /topic instead of /subjects

Hello,

I'm getting the following error message when trying to use our Avro registry server
UT005023: Exception handling request to /topic/events.jobs.errors/messages

I can make curl requests against our schema registry and get valid JSON back, but the path is under /subjects/events/jobs/errors, not /topic/events.jobs.errors/messages.

curl -X GET -i -H "Content-Type: application/vnd.schemaregistry.v1+json" http://[machine]:[port]/subjects/events/jobs/errors
HTTP/1.1 200 OK
Server: gunicorn/20.0.0
Date: Thu, 19 Dec 2019 22:26:40 GMT
Connection: close
Content-Type: application/json
Content-Length: 70

{"specs": ["fpe", "memory", "segfault", "si"], "version"...}

Am I missing a configuration setting for Kafdrop? I did not configure our Avro installation, so I'm a bit hamstrung as to how it currently works.
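For what it's worth, the /topic/... path in the error is Kafdrop's own UI route, not a registry request. Assuming a Confluent-compatible registry, schemas are looked up by subject, which under the default naming strategy would be the topic name suffixed with -value; a hedged way to check what the registry actually exposes:

curl http://[machine]:[port]/subjects
curl http://[machine]:[port]/subjects/events.jobs.errors-value/versions/latest

If the registry doesn't serve the Confluent /subjects API (the gunicorn server and response shape above suggest it may not), Kafdrop's Avro deserializer won't be able to use it.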

Flexible SASL and SSL configuration with Docker support

There's currently an awkward way of configuring SASL by editing a local file, and there is no current way of configuring properties for enabling SSL (TLS) support: providing truststores, keystores, etc.

Ideally, we need a way of combining the above into a flexible configuration.

In addition, we need a way of easily configuring this when using Docker and Kubernetes/Helm.

Once done, we need to deprecate the old way of configuring SASL.
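As a sketch of what the flexible configuration could look like, the properties and store files could be base64-encoded on the host and passed as environment variables (variable names as used in other reports here; GNU base64 shown):

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES=$(base64 -w0 kafka.properties) \
    -e KAFKA_TRUSTSTORE=$(base64 -w0 kafka.truststore.jks) \
    -e KAFKA_KEYSTORE=$(base64 -w0 kafka.keystore.jks) \
    obsidiandynamics/kafdrop

The container would then decode these into local files and point the Kafka client at them.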

Create Topic & manage ACLs via kafdrop

I'm probably asking the obvious here, but are there any plans to support creating topics and managing ACL rules, or even ZooKeeper users (SCRAM)?

It seems to be the last missing piece in managing a kafka cluster :)

HTTPS for Kafdrop

Is there currently a way to connect to Kafdrop securely (using https instead of http)?
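Kafdrop runs on Spring Boot, so one avenue worth trying is the standard Spring Boot TLS properties; a sketch, untested against Kafdrop specifically, with placeholder keystore details:

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=<host:port,host:port> \
    --server.ssl.enabled=true \
    --server.ssl.key-store=/path/to/kafdrop.jks \
    --server.ssl.key-store-password=<password>

Alternatively, TLS can be terminated at a reverse proxy or ingress in front of Kafdrop.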

Usage in offline environments slow due to the use of google font in template

While it is usually a good and common idea to use fonts from Google via links that point to Google's servers, it is quite slow if those servers are not reachable from the Docker container: every page takes 30 seconds until the connection times out.

Maybe there is a way to include the font?

Will not attempt to authenticate using SASL (unknown error)

Deployed the kafdrop helm chart

Got this exception in logs when trying to connect to Confluent cp-helm-chart

I've tried both:

          zkConnect: kafka-cp-zookeeper:2888
          kafkaBrokerConnect: kafka-cp-kafka:9092
          zkConnect: kafka-cp-zookeeper-headless:2888
          kafkaBrokerConnect: kafka-cp-kafka-headless:9092

My Kafka cluster is not using SASL at the moment. Do I need to disable SASL in Kafdrop?

kubectl logs kafdrop-5c95cdddc5-l5nkq:

2019-06-28 09:54:13.962  INFO 6 [zookeeper:2888)] o.a.z.ClientCnxn$SendThread              : Opening socket connection to server kafka-cp-zookeeper/10.51.246.35:2888. Will not attempt to authenticate using SASL (unknown error)
2019-06-28 09:54:16.632 ERROR 6 [           main] o.a.c.ConnectionState                    : Connection timed out for connection string (kafka-cp-zookeeper:2888) and timeout (15000) / elapsed (5000)
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
	at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:197)
	at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88)
	at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:116)
	at org.apache.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:489)
	at org.apache.curator.framework.imps.ExistsBuilderImpl$3.call(ExistsBuilderImpl.java:226)
	at org.apache.curator.framework.imps.ExistsBuilderImpl$3.call(ExistsBuilderImpl.java:215)
	at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:108)
	at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForegroundStandard(ExistsBuilderImpl.java:212)
	at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:205)
	at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:168)
	at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:39)
	at org.apache.curator.framework.recipes.cache.NodeCache.start(NodeCache.java:172)
	at kafdrop.service.CuratorKafkaMonitor.start(CuratorKafkaMonitor.java:108)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:363)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:307)
	at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:136)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:414)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1770)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1248)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1168)
	at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857)
	at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760)
	at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:218)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1341)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1187)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:843)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
	at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:139)
	at kafdrop.Kafdrop.main(Kafdrop.java:48)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
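One detail that stands out in the config above: 2888 is ZooKeeper's quorum (peer-to-peer) port, whereas clients normally connect on 2181. A hedged retry against the standard client port:

          zkConnect: kafka-cp-zookeeper:2181
          kafkaBrokerConnect: kafka-cp-kafka:9092

The "Will not attempt to authenticate using SASL (unknown error)" line itself is informational; the failure here is the connection timeout, not SASL.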

LAG and HOST fields

Hi,

Can we integrate the LAG and HOST information for each partition? That way, we could easily see the blocking hosts.

AVRO Message JsonNode Not Found

Hi

Thanks for the excellent project.

Compiling from source, or using the bintray .jar file (any of the recent versions) in our own Docker image, results in "nested exception is java.lang.NoClassDefFoundError: org/codehaus/jackson/JsonNode" when trying to view AVRO messages using a Schema Registry.
When using the "DEFAULT" format, the messages show, but obviously serialised.

Trace as below;

errorAtts: {timestamp=Tue Oct 29 04:09:27 GMT 2019, status=500, error=Internal Server Error, message=Handler dispatch failed; nested exception is java.lang.NoClassDefFoundError: org/codehaus/jackson/JsonNode, path=/topic/md-ack-events/messages, trace=
java.lang.NoClassDefFoundError: org/codehaus/jackson/JsonNode
	at kafdrop.util.AvroMessageDeserializer.getDeserializer(AvroMessageDeserializer.java:28)
	at kafdrop.util.AvroMessageDeserializer.<init>(AvroMessageDeserializer.java:15)
	at kafdrop.controller.MessageController.getDeserializer(MessageController.java:190)
	at kafdrop.controller.MessageController.viewMessageForm(MessageController.java:125)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190)
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:892)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797)
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1039)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:645)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
	at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
	at kafdrop.config.CorsConfiguration$1.doFilter(CorsConfiguration.java:88)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:88)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:114)
	at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:104)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
	at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
	at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
	at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
	at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
	at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132)
	at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
	at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
	at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
	at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
	at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
	at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:269)
	at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:78)
	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:133)
	at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:130)
	at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
	at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
	at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:249)
	at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:78)
	at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:99)
	at io.undertow.server.Connectors.executeRootHandler(Connectors.java:376)
	at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:835)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.JsonNode
	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
	... 80 more
}

Thanks

TimeoutException: Timed out waiting for a node assignment

Hi,

I packaged and ran it locally, trying to connect to my organization's test cluster using the following standalone JAR invocation. Kindly help me set up the proper config. I tried the same command without the client JAAS auth argument, with the same result.

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
-jar target/kafdrop-3.18.0-SNAPSHOT.jar
--kafka.brokerConnect=host1:port1,host2:port2,host3:port3,host4:port4,host5:port5
-Djava.security.auth.login.config=~/Desktop/client_jaas.conf

org.springframework.web.util.NestedServletException: Request processing failed; nested exception is kafdrop.service.KafkaAdminClientException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1013)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:645)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
at kafdrop.config.CorsConfiguration$1.doFilter(CorsConfiguration.java:88)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:88)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:114)
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:104)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:269)
at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:78)
at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:133)
at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:130)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:249)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:78)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:99)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:376)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
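One likely culprit in the command above: JVM system properties must come before -jar; anything after the JAR path is passed to the application as a program argument, so the JAAS config is silently ignored. A corrected sketch (absolute path used, since ~ is not expanded inside a -D value):

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -Djava.security.auth.login.config=/home/<user>/Desktop/client_jaas.conf \
    -jar target/kafdrop-3.18.0-SNAPSHOT.jar \
    --kafka.brokerConnect=host1:port1,host2:port2,host3:port3,host4:port4,host5:port5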

Kubernetes - Connection to Zookeeper timeout

I started with the docker-compose example, which works, but then moved to Kubernetes, which leads to an error:

2019-07-02 09:48:48.442  INFO 9 [ter.local:2181)] o.a.z.ClientCnxn$SendThread              : Opening socket connection to server kafka-zoo-service.default.svc.cluster.local/10.101.40.207:2181. Will not attempt to authenticate using SASL (unknown error)
2019-07-02 09:48:53.305  WARN 9 [ChildrenCache-1] o.a.c.ConnectionState                    : Connection attempt unsuccessful after 17185 (greater than max timeout of 15000). Resetting connection and trying again with a new connection.
2019-07-02 09:48:53.309  WARN 9 [ter.local:2181)] o.a.z.ClientCnxn$SendThread              : Client session timed out, have not heard from server in 5868ms for sessionid 0x0
2019-07-02 09:48:53.415  INFO 9 [ChildrenCache-1] o.a.z.ZooKeeper                          : Session: 0x0 closed
2019-07-02 09:48:53.415  INFO 9 [ain-EventThread] o.a.z.ClientCnxn$EventThread             : EventThread shut down for session: 0x0
2019-07-02 09:48:53.416  INFO 9 [ChildrenCache-1] o.a.z.ZooKeeper                          : Initiating client connection, connectString=kafka-zoo-service.default.svc.cluster.local:2181 sessionTimeout=5000 watcher=org.apache.curator.ConnectionState@6f3f0ae
2019-07-02 09:48:53.417  INFO 9 [ChildrenCache-1] o.a.z.ClientCnxnSocket                   : jute.maxbuffer value is 4194304 Bytes
2019-07-02 09:48:53.417  INFO 9 [ChildrenCache-1] o.a.z.ClientCnxn                         : zookeeper.request.timeout value is 0. feature enabled=
2019-07-02 09:48:53.420  INFO 9 [ter.local:2181)] o.a.z.ClientCnxn$SendThread              : Opening socket connection to server kafka-zoo-service.default.svc.cluster.local/10.101.40.207:2181. Will not attempt to authenticate using SASL (unknown error)
2019-07-02 09:48:58.424  WARN 9 [ter.local:2181)] o.a.z.ClientCnxn$SendThread              : Client session timed out, have not heard from server in 5005ms for sessionid 0x0
2019-07-02 09:48:58.425  INFO 9 [ter.local:2181)] o.a.z.ClientCnxn$SendThread              : Client session timed out, have not heard from server in 5005ms for sessionid 0x0, closing socket connection and attempting reconnect

I want to change these timeout values:
https://github.com/obsidiandynamics/kafdrop/blob/master/src/main/java/kafdrop/config/CuratorConfiguration.java#L55

Is there a way to change these via environment variables, or do you have any other hints?

Build fails (CentOS 7)

java -version

openjdk version "11.0.4" 2019-07-16 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.4+11-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.4+11-LTS, mixed mode, sharing)

mvn clean package

[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.obsidiandynamics.kafdrop:kafdrop:jar:3.11.0-SNAPSHOT
[WARNING] 'dependencyManagement.dependencies.dependency.exclusions.exclusion.artifactId' for org.quartz-scheduler:quartz:jar with value '*' does not match a valid id pattern. @ org.springframework.boot:spring-boot-dependencies:2.1.5.RELEASE, /root/.m2/repository/org/springframework/boot/spring-boot-dependencies/2.1.5.RELEASE/spring-boot-dependencies-2.1.5.RELEASE.pom, line 2608, column 25
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building kafdrop 3.11.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ kafdrop ---
[INFO] Deleting /root/kafdrop/target
[INFO]
[INFO] --- maven-resources-plugin:2.7:copy-resources (prepare-dockerfile) @ kafdrop ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- spring-boot-maven-plugin:2.1.5.RELEASE:build-info (build-info) @ kafdrop ---
[INFO]
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ kafdrop ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 12 resources
[INFO] Copying 37 resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ kafdrop ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 42 source files to /root/kafdrop/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] javac: invalid target release: 11
Usage: javac <options> <source files>
use -help for a list of possible options

[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.770s
[INFO] Finished at: Sun Sep 29 17:50:59 CEST 2019
[INFO] Final Memory: 21M/178M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) on project kafdrop: Compilation failure
[ERROR] javac: invalid target release: 11
[ERROR] Usage: javac <options> <source files>
[ERROR] use -help for a list of possible options
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-1.el7_7.x86_64/jre/bin/java)
 + 2           java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.4.11-1.el7_7.x86_64/bin/java)

So 11 is active, while 8 is also on the machine.
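A likely cause: Maven picks up the JDK from JAVA_HOME rather than the alternatives-selected java binary, so javac can still be the JDK 8 one even when java resolves to 11. A sketch, with the path taken from the alternatives output above:

export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.4.11-1.el7_7.x86_64
mvn -version   # should now report Java 11
mvn clean package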

Configuration needed to connect to secured cluster

Hi all,

I want to connect to a Kafka cluster hosted by aiven.io; I have a truststore file, a keystore file, and their respective passwords.

I made a little docker-compose file and provided env variables with the BASE64-encoded files.

version: "3"

services:
  kafdrop:
    image: obsidiandynamics/kafdrop
    ports:
      - "9000:9000"
    environment: 
      KAFKA_BROKERCONNECT: "mybroker..aivencloud.com:24117"
      KAFKA_PROPERTIES: "c3Nblablablrcw=="
      KAFKA_TRUSTSTORE: "/u3blablabla=="
      KAFKA_KEYSTORE: "/u3+7blablabladyLfz/y"

I have these messages in my logs:

kafdrop_1  | 2019-10-10 12:24:25.663  WARN 13 [           main] o.a.k.c.c.AbstractConfig                 : The configuration 'ssl.truststore.location' was supplied but isn't a known config.
kafdrop_1  | 2019-10-10 12:24:25.663  WARN 13 [           main] o.a.k.c.c.AbstractConfig                 : The configuration 'ssl.keystore.password' was supplied but isn't a known config.
kafdrop_1  | 2019-10-10 12:24:25.664  WARN 13 [           main] o.a.k.c.c.AbstractConfig                 : The configuration 'ssl.keystore.location' was supplied but isn't a known config.
kafdrop_1  | 2019-10-10 12:24:25.664  WARN 13 [           main] o.a.k.c.c.AbstractConfig                 : The configuration 'ssl.truststore.password' was supplied but isn't a known config.

Note: my kafka.properties is very simple:

ssl.keystore.password=blablabla
ssl.truststore.password=blablabla

I don't know what I did wrong; can someone help me? Has anyone connected to a secured cluster using the Docker image and files like these?
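For what it's worth, the "supplied but isn't a known config" warnings are what the Kafka client logs for ssl.* settings that go unused, which typically happens when security.protocol is left at its PLAINTEXT default. A hedged kafka.properties sketch that adds the protocol (passwords kept as placeholders):

security.protocol=SSL
ssl.keystore.password=blablabla
ssl.truststore.password=blablabla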

Custom Config not showing on topics.

Hi,

When creating topics I am setting config values (segment.bytes, cleanup.policy, etc.), but these values do not appear in the configuration section for the topic, and the custom config column is set to "no". They would previously show when using the HomeAdvisor Kafdrop. Has something changed here?

Thanks

Looking to turn off any non-read operations. AKA, lock it down to GETs

Environment:
Kubernetes on CentOS 7, running version 3.18.1 of Kafdrop behind an NGINX ingress.

Love this tool, but I would like to lock down the non-read operations (thus far, that means creating topics). I basically have a server config rule (in the ingress file) to block that path, but is there a flag I can set to block those kinds of operations, given that you might expand in the future to allow more such functionality? Any advice on how to do so?
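For reference, a sketch of the kind of NGINX rule in question, rejecting everything except read requests (illustrative only; the upstream name is assumed):

location / {
    limit_except GET {
        deny all;
    }
    proxy_pass http://kafdrop:9000;
}

In NGINX, allowing GET in limit_except implicitly allows HEAD as well.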

Thanks a ton!

Usage of "Github Star" in Kafdrop web UI

I was surprised to see that Kafdrop attempts to connect to the public Internet. It seems to be accessing the GitHub Star widget via an HTTP call.

Is there a reason for keeping this as part of the web UI? To me it seems like a red flag. The nature of Kafdrop is to serve as an "admin UI" for Kafka, which means that for any serious production usage, it shouldn't be attempting to connect to the public Internet at all.

Please consider removing this functionality.

Usage of JMX in Kafdrop 3

I can see a jmx property in the source code. How is it used (considering the brokers are started with JMX enabled on some port)?
How can I take advantage of JMX metrics using Kafdrop?

Old version Kafka compatibility problem

Hey guys, does Kafdrop support the old Kafka version 0.10.1.0?

The requirements say Kafdrop supports Kafka (version 0.10.0 or newer). I tested with a compiled kafdrop-3.19.0-SNAPSHOT.jar, and it seems unsupported (an UnsupportedVersionException occurred)...

Is there any way to make Kafdrop compatible with old Kafka versions? Something like setting ZooKeeper configurations?

...
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is kafdrop.service.KafkaAdminClientException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnsupportedVersionException: The broker does not support DESCRIBE_CONFIGS
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1013)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:645)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:750)
	at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
	at kafdrop.config.CorsConfiguration$1.doFilter(CorsConfiguration.java:88)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:88)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:118)
	at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
	at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)
        ...
  • OS: CentOS 7 Linux Kernel 3.10.0-514.16.1.el7.x86_64
  • JDK: OpenJDK 13.0.1-b9

Error with SSL auth+encryption: Value of 'env' cannot be null if connecting to secured kafka.

I am trying to set up Kafdrop with SSL auth and encryption with the brokers.

My Kafka cluster (3 brokers, now listening for SSL and, for ZooKeeper, PLAINTEXT connections) is set up with their respective keystore & truststore files.

In Kafdrop, I've placed the same keystore & truststore files in the root dir.

kafka.properties:

security.protocol=SSL
ssl.endpoint.identification.algorithm=
ssl.truststore.location=/generic-server-truststore.jks
ssl.truststore.password=trustkafka
ssl.keystore.location=/kafka1-server-keystore.jks
ssl.keystore.password=kafkakey

application.yml:

kafka:
  brokerConnect: localhost:9082,localhost:9083,localhost:9084
  isSecured: true
  #saslMechanism: "SASL"
  securityProtocol: "SSL"
  truststoreLocation : "/generic-server-truststore.jks"
  propertiesFileLocation : "/kafka.properties"
  keystoreLocation : "/kafka1-server-keystore.jks"

But, on startup I get the error:
Value of 'env' cannot be null if connecting to secured kafka.

  1. Where is the kafka.env property specified?
  2. Do we have to specify it? If yes, then why?
  3. Why is SASL mandatory? Can I not use SSL for both auth and encryption?

Please let me know what I'm doing wrong here.

Incorrectly connecting to ZooKeeper at localhost

Hello,

I'm trying this out against one of our environments' Kafka clusters using Docker:
winpty docker run -it --rm -p 9002:9000 -e KAFKA_BROKERCONNECT=x.x.x.11:6667,x.x.x.12:6667 -e JVM_OPTS="-Xms32M -Xmx64M" obsidiandynamics/kafdrop:latest

I'm seeing it start then try connecting to Zookeeper at localhost in a constant loop:

2020-01-03 19:05:21.636 INFO 15 [ main] o.a.z.Environment : Client environment:os.memory.total=49MB
2020-01-03 19:05:21.647 INFO 15 [ main] o.a.z.ZooKeeper : Initiating client connection, connectString=localhost:2181 sessionTimeout=5000 watcher=org.apache.curator.ConnectionState@781711b7
2020-01-03 19:05:21.686 INFO 15 [ main] o.a.z.c.X509Util : Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2020-01-03 19:05:21.709 INFO 15 [ main] o.a.z.ClientCnxnSocket : jute.maxbuffer value is 4194304 Bytes
2020-01-03 19:05:21.738 INFO 15 [ main] o.a.z.ClientCnxn : zookeeper.request.timeout value is 0. feature enabled=
2020-01-03 19:05:21.801 INFO 15 [localhost:2181)] o.a.z.ClientCnxn$SendThread : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2020-01-03 19:05:21.834 INFO 15 [localhost:2181)] o.a.z.ClientCnxn$SendThread : Socket error occurred: localhost/127.0.0.1:2181: Connection refused
2020-01-03 19:05:22.170 INFO 15 [ main] k.s.BuildInfo : Kafdrop version: 3.9.0, build time: 2019-09-26T09:17:02.688Z

Is localhost a default for the ZooKeeper server someplace? I tried searching through the code, but I don't see where it might be hard-coded.
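Kafdrop 3.9.0 predates the removal of the ZooKeeper connection in 3.10.0, and the zookeeper.connect property appears to default to localhost:2181. A sketch that supplies it explicitly, assuming the usual property-to-environment-variable mapping (ZOOKEEPER_CONNECT is an assumed name):

winpty docker run -it --rm -p 9002:9000 \
    -e KAFKA_BROKERCONNECT=x.x.x.11:6667,x.x.x.12:6667 \
    -e ZOOKEEPER_CONNECT=<zk-host:2181> \
    -e JVM_OPTS="-Xms32M -Xmx64M" \
    obsidiandynamics/kafdrop:latest

Alternatively, upgrading to a 3.10.0+ image removes the ZooKeeper dependency altogether.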

Error connecting to node kafka-0.kafka.default.svc.cluster.local:9092 (id: 0 rack: null)

Hi

I am running the below command:
java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED -jar target/kafdrop-3.9.0-SNAPSHOT.jar --zookeeper.connect=: --kafka.brokerConnect=:

but getting the error
2019-09-07 20:55:41.795 WARN 16276 [ XNIO-1 task-2] o.a.k.c.NetworkClient : [Consumer clientId=kafdrop-client, groupId=kafdrop-consumer-group] Error connecting to node kafka-0.kafka.default.svc.cluster.local:9092 (id: 0 rack: null)

Can you help me with what I am missing here?

ACL empty

Hi, thanks for this awesome tool. Can you tell me what configuration I need in order to use the ACL page?
I look forward to a reply.

Best regards
Ismael
