
kafka-ui-charts's Introduction

kafka-ui-charts

UI For Apache Kafka Helm Charts

kafka-ui-charts's People

Contributors

daviddyball, haarolean, narekmat, nbouron, skooni, urupaud


kafka-ui-charts's Issues

Support for OpenShift routes in Helm Chart

Issue submitter TODO list

  • I've searched for already existing issues here
  • I'm running a supported version of the application, which is listed here, and the feature is not present there

Is your proposal related to a problem?

No response

Describe the feature you're interested in

The current Helm chart provides a way to deploy an Ingress resource to allow external communication with the Kafka-UI instance on an OpenShift platform. OpenShift also provides another object for exposing internal services to external users: a Route allows hosting an application at a public URL, in the same way as an Ingress object does.

The differences between the two objects are described at a high level here.

As OpenShift is widely used as a Kubernetes platform, it could be a good idea to let users expose the Kafka-UI service through an OpenShift Route. Some OpenShift users prefer Routes over Ingresses for exposing services to external users, and this enhancement would support them in that case.
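
For illustration, a Route that the chart could template might look like the following sketch (the Service name, hostname, and edge TLS termination are assumptions for the example, not part of the current chart):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: kafka-ui
spec:
  host: kafka-ui.apps.example.com   # illustrative hostname
  to:
    kind: Service
    name: kafka-ui                  # assumed Service name rendered by the chart
  port:
    targetPort: http
  tls:
    termination: edge               # TLS mode is an assumption; reencrypt/passthrough also exist
```

A route.enabled flag in values.yaml, mirroring the existing ingress.enabled flag, would be a natural way to gate such a template.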

Describe alternatives you've considered

No response

Version you're running

b0c367c v.0.6.2

Additional context

No response

Periodic random kafka-ui crashes.

Issue submitter TODO list

  • I've looked up my issue in the FAQ
  • I've searched for already existing issues here (legacy) and here
  • I've tried installing the latest charts and the issue still persists there
  • I'm running a supported version of the application & chart, which is listed here

Describe the bug (actual behavior)

Hello everyone. I'm using the latest version of the Kafka UI chart (0.7.5) on k8s 1.27. The application deployment seems to work fine, but from time to time it crashes inexplicably. After a while, it starts working again on its own. Restarting the pod doesn't help, and there are no errors in the logs either.
Here is an example of the logs from startup to crash:

12:22:43,654 |-INFO in ch.qos.logback.classic.LoggerContext[default] - This is logback-classic version 1.4.7
12:22:44,750 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
12:22:44,840 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.xml]
12:22:45,040 |-INFO in ch.qos.logback.classic.BasicConfigurator@15ff3e9e - Setting up default configuration.
12:23:04,351 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@5fdcaa40 - URL [jar:file:/kafka-ui-api.jar!/BOOT-INF/classes!/logback-spring.xml] is not of type file
12:23:08,847 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - Processing appender named [STDOUT]
12:23:08,940 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
12:23:09,946 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - This appender no longer admits a layout as a sub-component, set an encoder instead.
12:23:09,946 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - To ensure compatibility, wrapping your layout in LayoutWrappingEncoder.
12:23:09,946 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - See also http://logback.qos.ch/codes.html#layoutInsteadOfEncoder for details
12:23:09,948 |-INFO in ch.qos.logback.classic.model.processor.RootLoggerModelHandler - Setting level of ROOT logger to INFO
12:23:09,951 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@6dc17b83 - Propagating INFO level on Logger[ROOT] onto the JUL framework
12:23:10,040 |-INFO in ch.qos.logback.core.model.processor.AppenderRefModelHandler - Attaching appender named [STDOUT] to Logger[ROOT]
12:23:10,040 |-INFO in ch.qos.logback.core.model.processor.DefaultProcessor@5e0826e7 - End of configuration.
12:23:10,040 |-INFO in org.springframework.boot.logging.logback.SpringBootJoranConfigurator@32eff876 - Registering current configuration as safe fallback point

 _   _ ___    __             _                _          _  __      __ _
| | | |_ _|  / _|___ _ _    /_\  _ __ __ _ __| |_  ___  | |/ /__ _ / _| |_____
| |_| || |  |  _/ _ | '_|  / _ \| '_ / _` / _| ' \/ -_) | ' </ _` |  _| / / _`|
 \___/|___| |_| \___|_|   /_/ \_| .__\__,_\__|_||_\___| |_|\_\__,_|_| |_\_\__,|
                                 |_|

2023-12-18 12:23:13,854 INFO  [main] c.p.k.u.KafkaUiApplication: Starting KafkaUiApplication using Java 17.0.6 with PID 1 (/kafka-ui-api.jar started by kafkaui in /)
2023-12-18 12:23:13,943 DEBUG [main] c.p.k.u.KafkaUiApplication: Running with Spring Boot v3.0.6, Spring v6.0.8
2023-12-18 12:23:13,944 INFO  [main] c.p.k.u.KafkaUiApplication: No active profile set, falling back to 1 default profile: "default"

OR

12:21:43,648 |-INFO in ch.qos.logback.classic.LoggerContext[default] - This is logback-classic version 1.4.7
12:21:44,650 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
12:21:44,745 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.xml]
12:21:44,845 |-INFO in ch.qos.logback.classic.BasicConfigurator@15ff3e9e - Setting up default configuration.
12:22:03,148 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@5fdcaa40 - URL [jar:file:/kafka-ui-api.jar!/BOOT-INF/classes!/logback-spring.xml] is not of type file
12:22:05,146 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - Processing appender named [STDOUT]
12:22:05,147 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
12:22:05,651 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - This appender no longer admits a layout as a sub-component, set an encoder instead.
12:22:05,651 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - To ensure compatibility, wrapping your layout in LayoutWrappingEncoder.
12:22:05,651 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - See also http://logback.qos.ch/codes.html#layoutInsteadOfEncoder for details
12:22:05,652 |-INFO in ch.qos.logback.classic.model.processor.RootLoggerModelHandler - Setting level of ROOT logger to INFO
12:22:05,652 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@6dc17b83 - Propagating INFO level on Logger[ROOT] onto the JUL framework
12:22:05,654 |-INFO in ch.qos.logback.core.model.processor.AppenderRefModelHandler - Attaching appender named [STDOUT] to Logger[ROOT]
12:22:05,654 |-INFO in ch.qos.logback.core.model.processor.DefaultProcessor@5e0826e7 - End of configuration.
12:22:05,655 |-INFO in org.springframework.boot.logging.logback.SpringBootJoranConfigurator@32eff876 - Registering current configuration as safe fallback point

 _   _ ___    __             _                _          _  __      __ _
| | | |_ _|  / _|___ _ _    /_\  _ __ __ _ __| |_  ___  | |/ /__ _ / _| |_____
| |_| || |  |  _/ _ | '_|  / _ \| '_ / _` / _| ' \/ -_) | ' </ _` |  _| / / _`|
 \___/|___| |_| \___|_|   /_/ \_| .__\__,_\__|_||_\___| |_|\_\__,_|_| |_\_\__,|
                                 |_|

2023-12-18 12:22:08,949 INFO  [main] c.p.k.u.KafkaUiApplication: Starting KafkaUiApplication using Java 17.0.6 with PID 1 (/kafka-ui-api.jar started by kafkaui in /)
2023-12-18 12:22:08,958 DEBUG [main] c.p.k.u.KafkaUiApplication: Running with Spring Boot v3.0.6, Spring v6.0.8
2023-12-18 12:22:08,961 INFO  [main] c.p.k.u.KafkaUiApplication: No active profile set, falling back to 1 default profile: "default"
2023-12-18 12:24:06,941 DEBUG [main] c.p.k.u.s.SerdesInitializer: Configuring serdes for cluster kafka-prod-cluster

It's completely unclear what the cause is and how to resolve this. Any ideas?

Expected behavior

The application should keep running without these random crashes.

Your installation details

chart 0.7.5.
values.yaml:

existingSecret: "kafka-ui"

envs:
  secret: {}
  config:
    KAFKA_CLUSTERS_0_NAME: "kafka-dev-cluster"
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: "kafka-controller-0.kafka-controller-headless.dev.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.dev.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.dev.svc.cluster.local:9092"
    KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL: SASL_PLAINTEXT
    KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM: PLAIN
    KAFKA_CLUSTERS_0_SCHEMAREGISTRY: "http://schema-registry.dev.svc.cluster.local:8081"
    AUTH_TYPE: "LOGIN_FORM"

ingress:
  enabled: false

resources:
  limits:
    cpu: 200m
    memory: 512Mi
  requests:
    cpu: 200m
    memory: 256Mi 

Steps to reproduce

I don't know how to reproduce this. The configuration is fine, but sometimes the application starts crashing suddenly.

Screenshots

No response

Logs

Error with DEBUG logging:

2023-12-18 12:38:53,445 DEBUG [SpringApplicationShutdownHook] o.s.c.s.DefaultLifecycleProcessor: Bean 'webServerStartStop' completed its stop procedure
2023-12-18 12:38:53,445 DEBUG [SpringApplicationShutdownHook] o.s.c.s.DefaultLifecycleProcessor: Stopping beans in phase -2147483647
2023-12-18 12:38:53,446 DEBUG [SpringApplicationShutdownHook] o.s.c.s.DefaultLifecycleProcessor: Bean 'springBootLoggingLifecycle' completed its stop procedure
2023-12-18 12:38:53,447 DEBUG [SpringApplicationShutdownHook] o.s.s.c.ThreadPoolTaskExecutor: Shutting down ExecutorService 'applicationTaskExecutor'
2023-12-18 12:38:53,448 DEBUG [SpringApplicationShutdownHook] o.s.s.c.ThreadPoolTaskScheduler: Shutting down ExecutorService 'taskScheduler'
2023-12-18 12:38:53,451 DEBUG [SpringApplicationShutdownHook] o.s.j.e.MBeanExporter: Unregistering JMX-exposed beans on shutdown
2023-12-18 12:38:53,747 ERROR [scheduling-1] o.s.s.s.TaskUtils$LoggingErrorHandler: Unexpected error occurred in scheduled task
reactor.core.Exceptions$ReactiveException: java.lang.InterruptedException
	at reactor.core.Exceptions.propagate(Exceptions.java:408)
	at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:91)
	at reactor.core.publisher.Mono.block(Mono.java:1710)
	at com.provectus.kafka.ui.service.ClustersStatisticsScheduler.updateStatistics(ClustersStatisticsScheduler.java:30)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
	at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.InterruptedException: null
	at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1048)
	at java.base/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:230)
	at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:87)
	... 14 common frames omitted
2023-12-18 12:38:55,646 DEBUG [reactor-http-epoll-3] i.n.b.PoolThreadCache: Freed 14 thread-local buffer(s) from thread: reactor-http-epoll-3
2023-12-18 12:38:55,742 DEBUG [reactor-http-epoll-1] i.n.b.PoolThreadCache: Freed 1 thread-local buffer(s) from thread: reactor-http-epoll-1
2023-12-18 12:38:55,744 DEBUG [reactor-http-epoll-2] i.n.b.PoolThreadCache: Freed 5 thread-local buffer(s) from thread: reactor-http-epoll-2
2023-12-18 12:38:55,748 DEBUG [reactor-http-epoll-4] i.n.b.PoolThreadCache: Freed 1 thread-local buffer(s) from thread: reactor-http-epoll-4

Additional context

No response

kafka-ui doesn't work when the ingress path is set to a non-root path

Issue submitter TODO list

  • I've looked up my issue in the FAQ
  • I've searched for already existing issues here (legacy) and here
  • I've tried installing the latest charts and the issue still persists there
  • I'm running a supported version of the application & chart, which is listed here

Describe the bug (actual behavior)

We are using AWS EKS; our ingress controller is aws-load-balancer-controller. We are doing path-based fanout like mydomain.com/kafkaui; however, once I set the ingress path to a non-root path, I get 404 NOT_FOUND.

Working ingress configuration in values.yaml


ingress:
  enabled: true
  ingressClassName: "alb"
  path: /
  pathType: "Prefix"  
  host: mydomain.com

Non-working ingress configuration in values.yaml

ingress:
  enabled: true
  ingressClassName: "alb"
  path: /kafkaui
  pathType: "Prefix"  
  host: mydomain.com
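
One workaround worth trying (hedged: SERVER_SERVLET_CONTEXT_PATH comes from the kafka-ui documentation, not from this chart's values, and the path below is the example sub-path) is to make the application itself serve from the sub-path so it matches the ingress:

```yaml
ingress:
  enabled: true
  ingressClassName: "alb"
  path: /kafkaui
  pathType: "Prefix"
  host: mydomain.com

envs:
  config:
    # Assumption: align the app's base path with the ingress sub-path,
    # so the UI's assets and API are served under /kafkaui as well.
    SERVER_SERVLET_CONTEXT_PATH: /kafkaui
```

Without this, the app still serves everything from /, so requests rewritten to /kafkaui hit no handler and fall through to the 404 seen in the logs.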

Expected behavior

kafka-ui should work for non-root paths as well.

Your installation details

  • appVersion: "0.7.5"
  • Chart version: "0.7.5"
  • Command to install the chart:

helm install kafka-ui-dev kafka-ui/kafka-ui -f values.dev.yaml -n my-namespace --version 0.7.5

values.dev.yaml

yamlApplicationConfig:
  kafka:
    clusters:
      - name: yaml
        bootstrapServers:  serve-1:9092,b-2.server-2:9092
  auth:
    type: disabled
  management:
    health:
      ldap:
        enabled: false

service:
  type: NodePort
  port: 8080

ingress:
  enabled: true
  ingressClassName: "alb"
  annotations: 
    alb.ingress.kubernetes.io/certificate-arn: <cert-ARN>
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: <my-lb-gp>
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443' 
  path: /kafakui
  pathType: "Prefix"  
  host: mydomain.com

serviceAccount:
  create: true
envs:
  config: 
    DYNAMIC_CONFIG_ENABLED: 'true'

Steps to reproduce

Do helm install with custom path in ingress

Screenshots

No response

Logs

{"code":5000,"message":"404 NOT_FOUND","timestamp":1697496635894,"requestId":"8f4b35f6-289","fieldsErrors":null,"stackTrace":"org.springframework.web.server.ResponseStatusException: 404 NOT_FOUND\n\tat org.springframework.web.reactive.resource.ResourceWebHandler.lambda$handle$1(ResourceWebHandler.java:406)\n\tSuppressed: The stacktrace has been enhanced by Reactor, refer to additional information below: \nError has been observed at the following site(s):\n\t*__checkpoint ⇢ com.provectus.kafka.ui.config.CorsGlobalConfiguration$$Lambda$996/0x0000000801639bc0 [DefaultWebFilterChain]\n\t*__checkpoint ⇢ com.provectus.kafka.ui.config.CustomWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ com.provectus.kafka.ui.config.ReadOnlyModeFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ AuthorizationWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ ExceptionTranslationWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ LogoutWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ ServerRequestCacheWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ SecurityContextServerWebExchangeWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ ReactorContextWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ HttpHeaderWriterWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ ServerWebExchangeReactorContextWebFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ org.springframework.security.web.server.WebFilterChainProxy [DefaultWebFilterChain]\n\t*__checkpoint ⇢ org.springframework.web.filter.reactive.ServerHttpObservationFilter [DefaultWebFilterChain]\n\t*__checkpoint ⇢ HTTP GET \"/kafkaui\" [ExceptionHandlingWebHandler]\nOriginal Stack Trace:\n\t\tat org.springframework.web.reactive.resource.ResourceWebHandler.lambda$handle$1(ResourceWebHandler.java:406)\n\t\tat reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:44)\n\t\tat reactor.core.publisher.Mono.subscribe(Mono.java:4485)\n\t\tat 
reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onComplete(FluxSwitchIfEmpty.java:82)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onComplete(MonoFlatMap.java:189)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onComplete(MonoNext.java:102)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.onComplete(FluxConcatMapNoPrefetch.java:240)\n\t\tat reactor.core.publisher.FluxIterable$IterableSubscription.slowPath(FluxIterable.java:357)\n\t\tat reactor.core.publisher.FluxIterable$IterableSubscription.request(FluxIterable.java:294)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.request(FluxConcatMapNoPrefetch.java:336)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.request(MonoNext.java:108)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2341)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2215)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onSubscribe(MonoNext.java:70)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.onSubscribe(FluxConcatMapNoPrefetch.java:164)\n\t\tat reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:201)\n\t\tat reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:83)\n\t\tat reactor.core.publisher.Mono.subscribe(Mono.java:4485)\n\t\tat reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.subscribeNext(MonoIgnoreThen.java:263)\n\t\tat reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:51)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:165)\n\t\tat 
reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)\n\t\tat reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:82)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.innerNext(FluxConcatMapNoPrefetch.java:258)\n\t\tat reactor.core.publisher.FluxConcatMap$ConcatMapInner.onNext(FluxConcatMap.java:863)\n\t\tat reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)\n\t\tat reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2545)\n\t\tat reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:171)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.request(Operators.java:2305)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.request(FluxConcatMapNoPrefetch.java:338)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.request(MonoNext.java:108)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2341)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2215)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onSubscribe(MonoNext.java:70)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.onSubscribe(FluxConcatMapNoPrefetch.java:164)\n\t\tat reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:201)\n\t\tat reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:83)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat 
reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoDeferContextual.subscribe(MonoDeferContextual.java:55)\n\t\tat reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)\n\t\tat reactor.core.publisher.Mono.subscribe(Mono.java:4485)\n\t\tat reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onComplete(FluxSwitchIfEmpty.java:82)\n\t\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onComplete(MonoPeekTerminal.java:299)\n\t\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onComplete(MonoPeekTerminal.java:299)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:155)\n\t\tat reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)\n\t\tat reactor.core.publisher.FluxFilter$FilterSubscriber.onNext(FluxFilter.java:113)\n\t\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:180)\n\t\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503)\n\t\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:180)\n\t\tat reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onNext(FluxDefaultIfEmpty.java:122)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:82)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.innerNext(FluxConcatMapNoPrefetch.java:258)\n\t\tat 
reactor.core.publisher.FluxConcatMap$ConcatMapInner.onNext(FluxConcatMap.java:863)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)\n\t\tat reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)\n\t\tat reactor.core.publisher.FluxFilterFuseable$FilterFuseableSubscriber.onNext(FluxFilterFuseable.java:118)\n\t\tat reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2545)\n\t\tat reactor.core.publisher.FluxFilterFuseable$FilterFuseableSubscriber.request(FluxFilterFuseable.java:191)\n\t\tat reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:171)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.request(Operators.java:2305)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.request(FluxConcatMapNoPrefetch.java:338)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.request(MonoNext.java:108)\n\t\tat reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.request(FluxDefaultIfEmpty.java:98)\n\t\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.request(MonoPeekTerminal.java:139)\n\t\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.request(FluxPeekFuseable.java:437)\n\t\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.request(MonoPeekTerminal.java:139)\n\t\tat reactor.core.publisher.FluxFilter$FilterSubscriber.request(FluxFilter.java:186)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2341)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2215)\n\t\tat reactor.core.publisher.FluxFilter$FilterSubscriber.onSubscribe(FluxFilter.java:85)\n\t\tat 
reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onSubscribe(MonoPeekTerminal.java:152)\n\t\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onSubscribe(FluxPeekFuseable.java:471)\n\t\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onSubscribe(MonoPeekTerminal.java:152)\n\t\tat reactor.core.publisher.Operators$BaseFluxToMonoOperator.onSubscribe(Operators.java:2025)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onSubscribe(MonoNext.java:70)\n\t\tat reactor.core.publisher.FluxConcatMapNoPrefetch$FluxConcatMapNoPrefetchSubscriber.onSubscribe(FluxConcatMapNoPrefetch.java:164)\n\t\tat reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:201)\n\t\tat reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:83)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoDeferContextual.subscribe(MonoDeferContextual.java:55)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)\n\t\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)\n\t\tat reactor.core.publisher.Mono.subscribe(Mono.java:4485)\n\t\tat reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.subscribeNext(MonoIgnoreThen.java:263)\n\t\tat reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:51)\n\t\tat reactor.core.publisher.Mono.subscribe(Mono.java:4485)\n\t\tat reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onComplete(FluxSwitchIfEmpty.java:82)\n\t\tat reactor.core.publisher.FluxFilter$FilterSubscriber.onComplete(FluxFilter.java:166)\n\t\tat reactor.core.publisher.FluxPeekFuseable$PeekConditionalSubscriber.onComplete(FluxPeekFuseable.java:940)\n\t\tat 
reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onComplete(FluxSwitchIfEmpty.java:85)\n\t\tat reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2547)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2341)\n\t\tat reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2215)\n\t\tat reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)\n\t\tat reactor.core.publisher.Mono.subscribe(Mono.java:4485)\n\t\tat reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onComplete(FluxSwitchIfEmpty.java:82)\n\t\tat reactor.core.publisher.MonoNext$NextSubscriber.onComplete(MonoNext.java:102)\n\t\tat reactor.core.publisher.FluxFilter$FilterSubscriber.onComplete(FluxFilter.java:166)\n\t\tat reactor.core.publisher.FluxFlatMap$FlatMapMain.checkTerminated(FluxFlatMap.java:847)\n\t\tat reactor.core.publisher.FluxFlatMap$FlatMapMain.drainLoop(FluxFlatMap.java:609)\n\t\tat reactor.core.publisher.FluxFlatMap$FlatMapMain.drain(FluxFlatMap.java:589)\n\t\tat reactor.core.publisher.FluxFlatMap$FlatMapMain.onComplete(FluxFlatMap.java:466)\n\t\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onComplete(FluxPeekFuseable.java:277)\n\t\tat reactor.core.publisher.FluxIterable$IterableSubscription.slowPath(FluxIterable.java:357)\n\t\tat reactor.core.publisher.FluxIterable$IterableSubscription.request(FluxIterable.java:294)\n\t\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.request(FluxPeekFuseable.java:144)\n\t\tat reactor.core.publisher.FluxFlatMap$FlatMapMain.onSubscribe(FluxFlatMap.java:371)\n\t\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onSubscribe(FluxPeekFuseable.java:178)\n\t\tat reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:201)\n\t\tat reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:83)\n\t\tat 
reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\t\tat reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)\n\t\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:165)\n\t\tat reactor.core.publisher.Operators$BaseFluxToMonoOperator.completePossiblyEmpty(Operators.java:2071)\n\t\tat reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onComplete(FluxDefaultIfEmpty.java:134)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)\n\t\tat reactor.core.publisher.FluxFilter$FilterSubscriber.onComplete(FluxFilter.java:166)\n\t\tat reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:275)\n\t\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1840)\n\t\tat reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)\n\t\tat reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)\n\t\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:200)\n\t\tat reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)\n\t\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:180)\n\t\tat reactor.core.publisher.MonoPublishOn$PublishOnSubscriber.run(MonoPublishOn.java:181)\n\t\tat reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)\n\t\tat reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)\n\t\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\t\tat java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)\n\t\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\t\tat 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\t\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}

Additional context

No response

Helm chart: Add service.labels

Issue submitter TODO list

  • I've looked up my issue in the FAQ
  • I've searched for already existing issues here (legacy) and here
  • I've tried installing the latest charts and the issue still persists there
  • I'm running a supported version of the application & chart, which is listed here

Describe the bug (actual behavior)

Currently, there is no way to specify labels for the kafka-ui Service.
This can be handy when working with label selectors or further forwarding automation on the Service.

Expected behavior

A way to specify labels on the kubernetes service
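
A minimal sketch of how the chart's Service template could merge user-supplied labels (the service.labels key is the proposed addition; the kafka-ui.fullname and kafka-ui.labels helper names are assumptions about the chart's helpers):

```yaml
# templates/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: {{ include "kafka-ui.fullname" . }}
  labels:
    {{- include "kafka-ui.labels" . | nindent 4 }}
    {{- with .Values.service.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
```

The with/toYaml/nindent pattern keeps the rendered manifest valid when service.labels is empty and matches how most charts template optional annotations.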

Your installation details

Helm chart 0.7.2

Steps to reproduce

Install with Helm, or inspect the rendered Service manifest in the corresponding directory via helm template .

Screenshots

No response

Logs

No response

Additional context

No response

Config maps don't apply when using a different OS base image

Issue submitter TODO list

  • I've looked up my issue in the FAQ
  • I've searched for already existing issues here (legacy) and here
  • I've tried installing the latest charts and the issue still persists there
  • I'm running a supported version of the application & chart, which is listed here

Describe the bug (actual behavior)

(I've supplied the solution below, but I'm filling in the form anyway :-) )

I'm running a Debian-based zulu-openjdk-17 image because Alpine doesn't handle DNS properly in Kubernetes.

When I try to use the inline configmaps or configmaps for the config, the config doesn't apply; it no longer works for me. However, everything works as expected when using environment variables.

Expected behavior

Config to apply

Your installation details

I'm running kafka-ui 0.7.2.
I'm running Helm chart 0.7.6.
The application config is irrelevant because it's not being applied. With the fix below, my config applies and works.

Steps to reproduce

Install with the Helm chart and then use yamlApplicationConfig or yamlApplicationConfigConfigMap.

Screenshots

Not applicable

Logs

Not applicable

Additional context

The solution:

In deployment.yaml
Change

         - name: SPRING_CONFIG_ADDITIONAL-LOCATION

To

        - name: SPRING_CONFIG_ADDITIONALLOCATION

Reasoning:
bash does not support environment variable names containing a dash (-), so mappings should not contain one. Your documentation points to a tool that does the correct mapping, but the Helm chart doesn't apply it.
I believe Alpine Linux doesn't use bash, which masks the bug; that's probably why it works for everyone but me :)

Make liveness/readiness probes configurable

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

The liveness and readiness probes are hardcoded and not configurable.

Expected behavior

The probes should be configurable.
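
A sketch of how this could look in values.yaml. The probe keys are assumptions; the timings mirror the values currently hardcoded in deployment.yaml:

```yaml
# values.yaml -- hypothetical keys; defaults mirror the current hardcoded probes
livenessProbe:
  httpGet:
    path: /actuator/health
    port: http
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health
    port: http
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 10
```

The deployment template could then render these with toYaml instead of literal values.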

Your installation details

Helm chart version: 0.7.5

Steps to reproduce

Screenshots

No response

Logs

No response

Additional context

No response

SASL authentication

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

How do I provide the SASL protocol and authentication settings in values.yaml at the Helm chart level?

yamlApplicationConfig:
  kafka:
    clusters:
      - name: yaml
        bootstrapServers: kafka-service:9092
  spring:
    security:
      oauth2:
  auth:
    type: disabled
  management:
    health:
      ldap:
        enabled: false

I get the error below while trying to connect to a Kafka cluster that uses SASL authentication:

[2023-12-08 08:51:40,401] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /10.224.1.121 (channelId=10.224.1.221:9092-10.224.1.121:56258-290) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

Expected behavior

[2023-12-08 08:51:40,401] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /10.224.1.121 (channelId=10.224.1.221:9092-10.224.1.121:56258-290) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2023-12-08 08:51:41,206] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /10.224.1.121 (channelId=10.224.1.221:9092-10.224.1.121:56274-290) (Unexpected Kafka request of type METADATA during SASL handshake.)

Your installation details

An option to accept:
sasl_mechanism='PLAIN',
security_protocol='SASL_PLAINTEXT',
sasl_plain_username='username',
sasl_plain_password='password',
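
One way to express this in the chart's yamlApplicationConfig, using the standard Kafka client properties (cluster name, bootstrap address, and credentials below are placeholders):

```yaml
yamlApplicationConfig:
  kafka:
    clusters:
      - name: yaml
        bootstrapServers: kafka-service:9092
        properties:
          security.protocol: SASL_PLAINTEXT
          sasl.mechanism: PLAIN
          sasl.jaas.config: >-
            org.apache.kafka.common.security.plain.PlainLoginModule required
            username="username"
            password="password";
```

Equivalently, the same settings can be passed as KAFKA_CLUSTERS_0_PROPERTIES_* environment variables via envs.config.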

Steps to reproduce

used the latest kafka-ui helm chart

Screenshots

[2023-12-08 08:51:40,401] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /10.224.1.121 (channelId=10.224.1.221:9092-10.224.1.121:56258-290) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2023-12-08 08:51:41,206] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /10.224.1.121 (channelId=10.224.1.221:9092-10.224.1.121:56274-290) (Unexpected Kafka request of type METADATA during SASL handshake.)

Logs

[2023-12-08 08:51:40,401] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /10.224.1.121 (channelId=10.224.1.221:9092-10.224.1.121:56258-290) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
[2023-12-08 08:51:41,206] INFO [SocketServer listenerType=BROKER, nodeId=2] Failed authentication with /10.224.1.121 (channelId=10.224.1.221:9092-10.224.1.121:56274-290) (Unexpected Kafka request of type METADATA during SASL handshake.)

Additional context

connecting to existing kafka with sasl authentication

add service.type LoadBalancer

Issue submitter TODO list

  • I've searched for an already existing issues here
  • I'm running a supported version of the application which is listed here and the feature is not present there

Is your proposal related to a problem?

Upon reviewing the current chart (0.7.0), I noticed that it lacks support for specifying a load balancer IP when using the LoadBalancer service type. It would be very beneficial to add this: being able to set loadBalancerIP on LoadBalancer services would broaden our deployment options and make the networking configuration easier to fine-tune.

Describe the feature you're interested in

I expect it to render something like:

apiVersion: v1
kind: Service
metadata:
  name: kafka-ui-ext
spec:
  loadBalancerIP: 172.25.39.235
  ports:
  - name: tcp-ext
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/instance: kafka-ui
  type: LoadBalancer
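
A sketch of the values and template change that could produce this; the loadBalancerIP key is an assumption, not part of chart 0.7.0:

```yaml
# values.yaml -- hypothetical new key
service:
  type: LoadBalancer
  port: 80
  loadBalancerIP: 172.25.39.235

# templates/service.yaml -- only emit the field when set (template sketch)
#   {{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerIP }}
#   loadBalancerIP: {{ .Values.service.loadBalancerIP }}
#   {{- end }}
```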

Describe alternatives you've considered

No response

Version you're running

fdd9ad9

Additional context

No response

Network Policy Ingress no default

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

When the network policy is enabled for ingress filtering, no policy is generated unless I also set detailed custom rules.

Expected behavior

If unset, the custom rules should default to permitting access only to the service port. Users who want to control which hosts may connect will still need to write a more detailed policy.
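
A sketch of a sensible default the chart could render when networkPolicy.enabled is true and no custom ingress rules are given (names and labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kafka-ui
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kafka-ui
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 8080   # the kafka-ui container port
          protocol: TCP
```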

Your installation details

Helm chart 0.7.2

Steps to reproduce

Install with helm, setting .Values.networkPolicy.enabled=true

Screenshots

No response

Logs

No response

Additional context

The Kubernetes Dashboard has very nice default filters:
https://github.com/kubernetes/dashboard/blob/master/charts/helm-chart/kubernetes-dashboard/templates/security/networkpolicy.yaml

ConfigMap in version "v1" cannot be handled as a ConfigMap: json: cannot unmarshal bool into Go struct field ConfigMap.data of type string

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

Deploying the Helm chart fails to create the application with the following error:

 * ConfigMap in version "v1" cannot be handled as a ConfigMap: json: cannot unmarshal bool into Go struct field ConfigMap.data of type string

Expected behavior

Should deploy

Your installation details

  1. 0.7.2
  2. 0.7.2
resource "helm_release" "default" {
  name             = "kafka-ui"
  repository       = "https://provectus.github.io/kafka-ui-charts"
  chart            = "kafka-ui"
  version          = "0.7.2"
  namespace        = "kafka-ui"
  create_namespace = true

  set {
    name  = "envs.config.DYNAMIC_CONFIG_ENABLED"
    value = true
  }

  set {
    name  = "envs.config.AWS_ACCESS_KEY_ID"
    value = "<redacted>"
  }
  set {
    name  = "envs.config.AWS_SECRET_ACCESS_KEY"
    value = "<redacted>"
  }

  set {
    name  = "envs.config.KAFKA_CLUSTERS_0_NAME"
    value = "<redacted>"
  }

  set {
    name  = "envs.config.KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS"
    value = replace("<redacted>", ",", "\\,")
  }

  set {
    name  = "envs.config.KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL"
    value = "SASL_SSL"
  }

  set {
    name  = "envs.config.KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM"
    value = "AWS_MSK_IAM"
  }

  set {
    name  = "envs.config.KAFKA_CLUSTERS_0_PROPERTIES_SASL_CLIENT_CALLBACK_HANDLER_CLASS"
    value = "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
  }

  set {
    name  = "envs.config.KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG"
    value = "software.amazon.msk.auth.iam.IAMLoginModule required;"
  }

}
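
Helm parses an unquoted true as a boolean, while ConfigMap.data values must be strings; that is what produces the unmarshal error above. In plain values-file form, quoting the value is enough (a sketch; with the Terraform provider, setting type = "string" on the set block should achieve the same):

```yaml
# values.yaml equivalent of the set blocks above -- the quotes keep
# DYNAMIC_CONFIG_ENABLED a string, as ConfigMap.data requires
envs:
  config:
    DYNAMIC_CONFIG_ENABLED: "true"
```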

Steps to reproduce

terraform apply

Screenshots


Logs

No response

Additional context

No response

java.lang.IllegalStateException: Error while creating AdminClient for Cluster Default

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

I have been following the documentation for providing truststore.jks and keystore.jks so that the UI can connect to the Kafka cluster via SSL (https://docs.kafka-ui.provectus.io/configuration/helm-charts/configuration/ssl-example#create-secret). I followed exactly the same steps, but Kafka-UI fails to bring the application up in the pod and logs the error below in a loop.

Expected behavior

No response

Your installation details

  1. No UI appearing; exception error in a loop
  2. Helm chart version 0.7.6 / AppVersion 0.7.2
  3. Application config (generated Helm templates related to SSL)
# Source: kafka-ui/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
 name: ssl-secret
 namespace: strimzi-ui
type: Opaque
data:
  KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_PASSWORD: xxxxxxxx
  KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_PASSWORD: xxxxxxxx
---
# Source: kafka-ui/templates/configmap_fromValues.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-kafka-ui-fromvalues
  namespace: strimzi-ui
  labels:
    helm.sh/chart: kafka-ui-0.7.6
    app.kubernetes.io/name: kafka-ui
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "v0.7.2"
    app.kubernetes.io/managed-by: Helm
data:
  config.yml: |-
    auth:
      type: disabled
    kafka:
      clusters:
      - bootstrapServers: my-strimzi-kafka-bootstrap.strimzi.svc:9093
        name: strimzi
    management:
      health:
        ldap:
          enabled: false
---
# Source: kafka-ui/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-kafka-ui
  namespace: strimzi-ui
  labels:
    helm.sh/chart: kafka-ui-0.7.6
    app.kubernetes.io/name: kafka-ui
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "v0.7.2"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kafka-ui
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      annotations:
        checksum/config: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        checksum/configFromValues: a38cb5f6be79022948da18404f8f9864ad709911bcdd09f63bd23d136bfa2e49
        checksum/secret: 039e6066e12df180e8d5d6f869dfbc264c32e6447ea385105e7f777252231cdf
      labels:
        app.kubernetes.io/name: kafka-ui
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: release-name-kafka-ui
      securityContext:
        {}
      containers:
        - name: kafka-ui
          securityContext:
            {}
          image: docker.io/provectuslabs/kafka-ui:v0.7.2
          imagePullPolicy: IfNotPresent
          env:
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SSL_TRUSTSTORE_LOCATION
              value: /ssl/kafka.truststore.jks
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SSL_KEYSTORE_LOCATION
              value: /ssl/kafka.keystore.jks
            - name: SPRING_CONFIG_ADDITIONAL-LOCATION
              value: /kafka-ui/config.yml
          envFrom:
            - secretRef:
                name: ssl-secret    
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: http
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: http
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          resources:
            {}
          volumeMounts:
            - mountPath: /ssl
              name: config-volume
            - name: kafka-ui-yaml-conf
              mountPath: /kafka-ui/
      volumes:
        - configMap:
            name: ssl-files
          name: config-volume
        - name: kafka-ui-yaml-conf
          configMap: 
            name: release-name-kafka-ui-fromvalues
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: strimzi.io/kind
                operator: In
                values:
                - cluster-operator
            topologyKey: topology.kubernetes.io/zone
  4. No IaC code

Steps to reproduce

Follow the steps from the documentation (SSL Example section): https://docs.kafka-ui.provectus.io/configuration/helm-charts/configuration/ssl-example#create-secret

Screenshots


Logs

2024-04-21 10:03:38,638 DEBUG [parallel-2] c.p.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: Default
2024-04-21 10:03:38,639 ERROR [parallel-2] c.p.k.u.s.StatisticsService: Failed to collect cluster Default info
java.lang.IllegalStateException: Error while creating AdminClient for Cluster Default
        at com.provectus.kafka.ui.service.AdminClientServiceImpl.lambda$createAdminClient$5(AdminClientServiceImpl.java:56)
        at reactor.core.publisher.Mono.lambda$onErrorMap$28(Mono.java:3783)
        at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
        at reactor.core.publisher.Operators.error(Operators.java:198)
        at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:135)
        at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
        at reactor.core.publisher.Mono.subscribe(Mono.java:4480)
        at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onComplete(FluxSwitchIfEmpty.java:82)
        at reactor.core.publisher.Operators.complete(Operators.java:137)
        at reactor.core.publisher.MonoEmpty.subscribe(MonoEmpty.java:46)
        at reactor.core.publisher.Mono.subscribe(Mono.java:4495)
        at reactor.core.publisher.FluxFlatMap$FlatMapMain.onNext(FluxFlatMap.java:427)
        at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.runAsync(FluxPublishOn.java:440)
        at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.run(FluxPublishOn.java:527)
        at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)
        at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NullPointerException: null
        at java.base/java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011)
        at java.base/java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006)
        at java.base/java.util.Properties.put(Properties.java:1301)
        at com.provectus.kafka.ui.service.AdminClientServiceImpl.lambda$createAdminClient$2(AdminClientServiceImpl.java:47)
        at reactor.core.publisher.MonoSupplier.call(MonoSupplier.java:67)
        at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:127)
        ... 16 common frames omitted

Additional context

No response

kafka-ui repository 404

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

Update repository

helm repo up kafka-ui

Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "kafka-ui" chart repository (https://provectus.github.io/kafka-ui):
        failed to fetch https://provectus.github.io/kafka-ui/index.yaml : 404 Not Found
Update Complete. ⎈Happy Helming!⎈

Try to install

helm upgrade --install kafka-ui kafka-ui/kafka-ui

Release "kafka-ui" does not exist. Installing it now.
Error: failed to download "kafka-ui/kafka-ui"

Trying to re-add the repository

helm repo add kafka-ui2  https://provectus.github.io/kafka-ui

Error: looks like "https://provectus.github.io/kafka-ui" is not a valid chart repository or cannot be reached: failed to fetch https://provectus.github.io/kafka-ui/index.yaml : 404 Not Found

Expected behavior

Repository index.yaml should exist.

Your installation details

latest

Steps to reproduce

helm repo add kafka-ui2  https://provectus.github.io/kafka-ui

Screenshots

No response

Logs

No response

Additional context

No response

Does not work with Apache Kafka 2.1.1 and kafka-ui 0.5.0

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

2023-08-18 07:31:52,921 INFO [parallel-4] o.a.k.c.u.AppInfoParser: Kafka version: 3.3.1
2023-08-18 07:31:52,921 INFO [parallel-4] o.a.k.c.u.AppInfoParser: Kafka commitId: e23c59d00e687ff5
2023-08-18 07:31:52,921 INFO [parallel-4] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1692343912921
2023-08-18 07:31:52,927 DEBUG [parallel-1] c.p.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: beta-bigdata-influence-kafka
2023-08-18 07:31:52,928 ERROR [parallel-2] c.p.k.u.s.StatisticsService: Failed to collect cluster sh-beta-infra-business-kafka info
java.lang.IllegalStateException: Error while creating AdminClient for Cluster sh-beta-infra-business-kafka
at com.provectus.kafka.ui.service.AdminClientServiceImpl.lambda$createAdminClient$3(AdminClientServiceImpl.java:45)
at reactor.core.publisher.Mono.lambda$onErrorMap$31(Mono.java:3776)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
at reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onError(FluxHide.java:142)
at reactor.core.publisher.FluxMap$MapSubscriber.onError(FluxMap.java:134)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onError(Operators.java:2063)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)
at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.onError(MonoIgnoreThen.java:278)
at reactor.core.publisher.MonoPublishOn$PublishOnSubscriber.run(MonoPublishOn.java:187)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)

Expected behavior

No response

Your installation details

install by helm chart

kafka:
  clusters:
    - name: beta-infra-business-kafka
      bootstrapServers: *****.kafka.us-east-1.amazonaws.com:9092
    - name: sh-beta-infra-business-kafka
      bootstrapServers: 10.83.43.46:9092,10.83.42.41:9092,10.83.41.194:9092

Something is wrong with sh-beta-infra-business-kafka (Kafka version 2.1.1),
but the AWS MSK cluster works fine (MSK version 2.5.1).

Steps to reproduce

  1. Install kafka-ui via the Helm chart.
  2. Configure a Kafka cluster running version 2.1.1.

Screenshots

No response

Logs

No response

Additional context

No response

cdk8s support

  1. Convert existing helm chart to cdk8s
  2. Add workflow to publish as npm package
  3. Publish in construct hub

Helm: Support PDB and Topology Spread Constraints

Issue submitter TODO list

  • I've searched for an already existing issues here
  • I'm running a supported version of the application which is listed here and the feature is not present there

Is your proposal related to a problem?

No response

Describe the feature you're interested in

We want to enhance the availability, reliability, and resiliency of the Kafka-UI, especially during updates and unexpected situations. We would like to be able to set Pod Disruption Budget and Topology Spread Constraints in the Helm chart default values.
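
A sketch of the values keys this could introduce (all hypothetical; names follow common chart conventions):

```yaml
# values.yaml -- hypothetical keys
podDisruptionBudget:
  enabled: true
  minAvailable: 1

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: kafka-ui
```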

Describe alternatives you've considered

No response

Version you're running

fdd9ad9 - charts/kafka-ui-0.7.0

Additional context

No response

Update api version for HPA

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

The release kafka-ui-dev uses the obsolete API version autoscaling/v2beta1 for the HorizontalPodAutoscaler object. autoscaling/v2beta1 is removed in Kubernetes v1.25.0, so it needs to be replaced with autoscaling/v2.
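
For reference, an autoscaling/v2 manifest equivalent to the chart's current output could look like this (a sketch; the chart would template the names and target values):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-ui
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-ui
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization          # replaces v2beta1's targetCPUUtilizationPercentage
          averageUtilization: 80
```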

Expected behavior

Warning does not appear

Your installation details


apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kafka
  namespace: kafka
spec:
  releaseName: kafka-ui-dev
  interval: 1m
  timeout: 10m
  chart:
    spec:
      chart: kafka-ui
      version: "0.7.1"
      sourceRef:
        kind: HelmRepository
        name: provectus
        namespace: fluxcd
      interval: 1m
  values:
    existingSecret: kafka-ui-clientsecret
    image:
      tag: "9549f68d7edcb0022687c8155010ba3c5b2cddac"
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 80
    yamlApplicationConfig:
      kafka:
        clusters:
          - name: dev
            bootstrapServers: kafka-test.kafka.svc.cluster.local:9092
          - name: stage
            bootstrapServers: kafka-stage.kafka.svc.cluster.local:9092
          - name: prod
            bootstrapServers: kafka-main.kafka.svc.cluster.local:9092
      auth:
        type: OAUTH2
        oauth2:
          client:
            google:
              provider: google
              clientSecret: ${clientSecret}
              clientId: ${clientId}
              user-name-attribute: email
              custom-params:
                type: google
                allowedDomain:
      management:
        health:
          ldap:
            enabled: false
      rbac:
        roles:
          - name: "readonly"
            clusters:
              - dev
              - stage
              - prod
            subjects:
              - provider: oauth_google
                type: domain
                value:
            permissions:
              - resource: clusterconfig
                actions: ["view"]

              - resource: topic
                value: ".*"
                actions:
                  - VIEW
                  - MESSAGES_READ

              - resource: consumer
                value: ".*"
                actions: [view]

              - resource: schema
                value: ".*"
                actions: [view]

              - resource: connect
                value: ".*"
                actions: [view]

              - resource: acl
                actions: [view]
          - name: "admins"
            clusters:
              - dev
              - stage
              - prod
            subjects:
              - provider: oauth_google
                type: user
                value: ""
              - provider: oauth_google
                type: user
                value: ""
            permissions:
              - resource: applicationconfig
                actions: all

              - resource: clusterconfig
                actions: all

              - resource: topic
                value: ".*"
                actions: all

              - resource: consumer
                value: ".*"
                actions: all

              - resource: schema
                value: ".*"
                actions: all

              - resource: connect
                value: ".*"
                actions: all

              - resource: ksql
                actions: all

              - resource: acl
                actions: [view]
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
        nginx.ingress.kubernetes.io/configuration-snippet: |
          0.0.0.0/0;
          deny all;
      host: kafka-ui.dev
      tls:
        enabled: true
        secretName: kafka-ui.dev
    service:
      type: ClusterIP
    resources:
      requests:
        cpu: 200m
        memory: 500Mi
      limits:
        memory: 500Mi

Steps to reproduce

just apply hr on gke platform

Screenshots

No response

Logs

No response

Additional context

No response

cannot unmarshal bool into Go struct field EnvVar.spec.template.spec.containers.env.value of type string

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

An env value set via --set cannot be converted to a string.

Expected behavior

The installation should succeed.

Your installation details

$ helm install kafka-ui charts/kafka-ui -n kafka-ui --create-namespace \
--set service.type=NodePort \
--set yamlApplicationConfig.kafka.clusters[0].name=yaml \
--set yamlApplicationConfig.kafka.clusters[0].bootstrapServers=kafka.kafka:9092 \
--set env[0].name=DYNAMIC_CONFIG_ENABLED \
--set env[0].value="true"
Error: INSTALLATION FAILED: 1 error occurred:
        * Deployment in version "v1" cannot be handled as a Deployment: json: cannot unmarshal bool into Go struct field EnvVar.spec.template.spec.containers.env.value of type string

Steps to reproduce

helm install kafka-ui charts/kafka-ui -n kafka-ui --create-namespace \
--set service.type=NodePort \
--set yamlApplicationConfig.kafka.clusters[0].name=yaml \
--set yamlApplicationConfig.kafka.clusters[0].bootstrapServers=kafka.kafka:9092 \
--set env[0].name=DYNAMIC_CONFIG_ENABLED \
--set env[0].value="true"

but this works:

helm install kafka-ui charts/kafka-ui -n kafka-ui --create-namespace \
--set service.type=NodePort \
--set yamlApplicationConfig.kafka.clusters[0].name=yaml \
--set yamlApplicationConfig.kafka.clusters[0].bootstrapServers=kafka.kafka:9092 \
-f kafka-ui-values.yaml
$ cat kafka-ui-values.yaml
env:
- name: DYNAMIC_CONFIG_ENABLED 
  value: "true"

Should the template quote this value?

https://github.com/provectus/kafka-ui-charts/blob/6996eabf8be92a06870695087f09b76b95f56790/charts/kafka-ui/templates/deployment.yaml#L52C4-L55C23
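
A sketch of the fix, assuming the template iterates over .Values.env; piping the value through quote keeps booleans and numbers valid for the string-typed EnvVar.value field (the actual template at the linked line may differ):

```yaml
# templates/deployment.yaml (sketch)
env:
  {{- range .Values.env }}
  - name: {{ .name }}
    value: {{ .value | quote }}
  {{- end }}
```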

Screenshots

No response

Logs

No response

Additional context

No response

Implementing Glue Schema Registry using Helm

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

Hey guys,

I have Kafka UI already running in an EKS cluster and would like to add the Glue Schema Registry to the same Helm deployment. Is this supported? I saw there is some documentation using docker compose, but I couldn't find anything related to Helm.

Thanks.
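
One plausible way to wire the Glue serde through yamlApplicationConfig, mirroring the docker-compose example in the kafka-ui docs. The jar path, class name, and property names below are assumptions and should be verified against your kafka-ui version; the serde jar must also be mounted into the pod, and AWS credentials supplied via env:

```yaml
yamlApplicationConfig:
  kafka:
    clusters:
      - name: my-cluster                        # placeholder
        bootstrapServers: b-1.msk.example:9092  # placeholder
        serde:
          - name: GlueSchemaRegistry
            filePath: /glue-serde/kafkaui-glue-serde.jar   # jar mounted via volumeMounts (assumption)
            className: com.provectus.kafka.ui.serdes.glue.GlueSerde   # verify against docs
            properties:
              region: us-east-1        # placeholder
              registry: my-registry    # placeholder
```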

Expected behavior

No response

Your installation details

Steps to reproduce

Screenshots

No response

Logs

No response

Additional context

No response

UnknownHostException when resolving bootstrapServers

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for an already existing issues here (legacy) and here
  • I've tried installing latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

I get an UnknownHostException for the host from the bootstrapServers field,
but inside the container the host resolves correctly.

Expected behavior

The host from bootstrapServers should resolve.

Your installation details

  1. 56fa824
  2. version: 0.7.5, appVersion: v0.7.1
  3. config.yaml:
auth:
  type: disabled
kafka:
  clusters:
  - bootstrapServers: kafka-node3:9092
    name: prod-k8s-htz
  - bootstrapServers: infra-cluster-kafka-bootstrap:9092
    name: infra-kafka-logs
management:
  health:
    ldap:
      enabled: false

Steps to reproduce

I added two clusters to the config.
The first is an external host running Kafka.
The second is a Kafka cluster inside k8s.
The hosts resolve inside the container:

/ $ cat /etc/resolv.conf 
search infra.svc.cluster.local svc.cluster.local cluster.local example.com
nameserver 10.96.0.10
options ndots:5

/ $ getent hosts kafka-node3
10.70.1.23        kafka-node3.example.com  kafka-node3.example.com kafka-node3

/ $ getent hosts infra-cluster-kafka-bootstrap
10.100.237.17     infra-cluster-kafka-bootstrap.infra.svc.cluster.local  infra-cluster-kafka-bootstrap.infra.svc.cluster.local infra-cluster-kafka-bootstrap
/ $ 

But in the log I see, for the external server:

2024-02-06 22:40:59,380 WARN  [kafka-admin-client-thread | kafka-ui-admin-1707259237-3] o.a.k.c.NetworkClient: [AdminClient clientId=kafka-ui-admin-1707259237-3] Error connecting to node kafka-node3.example.com:9092 (id: 5 rack: null)
java.net.UnknownHostException: kafka-node3.example.com: Name does not resolve
	at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
	at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:933)
	at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1534)
	at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:852)
	at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1524)
	at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1381)
	at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1305)
	at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27)
	at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:510)
	at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:467)
	at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:173)
	at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:990)
	at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:301)
	at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:1143)
	at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1403)
	at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1346)
	at java.base/java.lang.Thread.run(Thread.java:833)

This looks like a bug.

As a workaround I can run two dedicated kafka-ui instances:

  1. one for the external Kafka, with ndots:1 and an FQDN in bootstrapServers;
  2. one for the in-cluster Kafka, with the default ndots:5.

But it is strange that the container's resolver returns IPs for these hosts while the application reports an error.
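
The symptom is consistent with search-list expansion driven by ndots:5 (the JVM resolver and getent can behave differently here). One possible fix is to lower ndots for the pod; the chart does not currently expose dnsConfig, so this is a hypothetical pod-spec fragment:

```yaml
# Pod spec fragment (not currently exposed by the chart)
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "1"
```

Alternatively, a trailing dot in the bootstrap address (kafka-node3.example.com.) makes the name absolute and skips the search list.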

Screenshots

No response

Logs

No response

Additional context

No response
