
strimzi-lab's Introduction

Strimzi

Run Apache Kafka on Kubernetes and OpenShift


Strimzi provides a way to run an Apache Kafka® cluster on Kubernetes or OpenShift in various deployment configurations. See our website for more details about the project.

Quick Starts

To get up and running quickly, check out our Quick Starts for Minikube, OKD (OpenShift Origin), and Kubernetes Kind.

Documentation

Documentation for the current main branch as well as all releases can be found on our website.

Roadmap

The roadmap of the Strimzi Operator project is maintained as a GitHub Project.

Getting help

If you encounter any issues while using Strimzi, you can get help using:

Strimzi Community Meetings

You can join our regular community meetings:

Resources:

Contributing

You can contribute by:

  • Raising any issues you find using Strimzi
  • Fixing issues by opening Pull Requests
  • Improving documentation
  • Talking about Strimzi

All bugs, tasks, and enhancements are tracked as GitHub issues. Issues that might be a good starting point for new contributors are marked with the "good-start" label.

The Dev guide describes how to build Strimzi. Before opening a PR, please make sure you understand how to test your changes; see the Test guide.

The Documentation Contributor Guide describes how to contribute to Strimzi documentation.

If you want to get in touch with us first before contributing, you can use:

License

Strimzi is licensed under the Apache License, Version 2.0

Container signatures

From the 0.38.0 release, Strimzi containers are signed using the cosign tool. Strimzi currently does not use keyless signing or the transparency log. To verify a container, copy the following public key into a file:

-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAET3OleLR7h0JqatY2KkECXhA9ZAkC
TRnbE23Wb5AzJPnpevvQ1QUEQQ5h/I4GobB7/jkGfqYkt6Ct5WOU2cc6HQ==
-----END PUBLIC KEY-----

And use it to verify the signature:

cosign verify --key strimzi.pub quay.io/strimzi/operator:latest --insecure-ignore-tlog=true
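The two steps above can be combined into a short script. This sketch only writes the key file and shows the verify command as a comment, since the cosign CLI must be installed separately and the verification needs network access to quay.io:

```shell
# Save the Strimzi public key into strimzi.pub
cat > strimzi.pub <<'EOF'
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAET3OleLR7h0JqatY2KkECXhA9ZAkC
TRnbE23Wb5AzJPnpevvQ1QUEQQ5h/I4GobB7/jkGfqYkt6Ct5WOU2cc6HQ==
-----END PUBLIC KEY-----
EOF

# Then verify the container signature (requires cosign and network access):
# cosign verify --key strimzi.pub quay.io/strimzi/operator:latest --insecure-ignore-tlog=true
```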

Software Bill of Materials (SBOM)

From the 0.38.0 release, Strimzi publishes the software bill of materials (SBOM) of our containers. The SBOMs are published as an archive with SPDX-JSON and Syft-Table formats signed using cosign. For releases, they are also pushed into the container registry. To verify the SBOM signatures, please use the Strimzi public key:

-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAET3OleLR7h0JqatY2KkECXhA9ZAkC
TRnbE23Wb5AzJPnpevvQ1QUEQQ5h/I4GobB7/jkGfqYkt6Ct5WOU2cc6HQ==
-----END PUBLIC KEY-----

You can use it to verify the signature of the SBOM files with the following command:

cosign verify-blob --key cosign.pub --bundle <SBOM-file>.bundle --insecure-ignore-tlog=true <SBOM-file>

Strimzi is a Cloud Native Computing Foundation incubating project.


strimzi-lab's People

Contributors

gunnarmorling, jpechane, matzew, mbogoevici, ppatierno, scholzj


strimzi-lab's Issues

Cluster and topics handling examples

In module 1 we should add some simple examples of:

  • changing the cluster configuration (e.g. adding a new node, changing a Kafka broker parameter, ...)
  • creating and updating a topic
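For the topic part, something like the following KafkaTopic manifest could serve as a starting point. This is a sketch: the resource names are placeholders and the exact apiVersion depends on the Strimzi release used in the lab.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                      # placeholder topic name
  labels:
    strimzi.io/cluster: my-cluster    # must match the Kafka cluster name
spec:
  partitions: 3
  replicas: 1
  config:
    retention.ms: 604800000           # edit and re-apply to update the topic
```

Creating and updating are both `kubectl apply -f topic.yaml`; the Topic Operator reconciles the change into Kafka.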

Issue with consumer-app: the dashboard does not load properly

The dashboard does not load (screenshot attached). The pod status shows "Back-off restarting failed container", 18 times in the last …

The logs of the pod look like this:

  2019-08-09 14:09:43 INFO ConsumerConfig:223 - ConsumerConfig values:
  auto.commit.interval.ms = 5000
  auto.offset.reset = earliest
  bootstrap.servers = [my-cluster-kafka-bootstrap:9092]
  check.crcs = true
  client.id =
  connections.max.idle.ms = 540000
  enable.auto.commit = true
  exclude.internal.topics = true
  fetch.max.bytes = 52428800
  fetch.max.wait.ms = 500
  fetch.min.bytes = 1
  group.id = consumer-app
  heartbeat.interval.ms = 3000
  interceptor.classes = null
  internal.leave.group.on.close = true
  isolation.level = read_uncommitted
  key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
  max.partition.fetch.bytes = 1048576
  max.poll.interval.ms = 300000
  max.poll.records = 500
  metadata.max.age.ms = 300000
  metric.reporters = []
  metrics.num.samples = 2
  metrics.recording.level = INFO
  metrics.sample.window.ms = 30000
  partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
  receive.buffer.bytes = 65536
  reconnect.backoff.max.ms = 1000
  reconnect.backoff.ms = 50
  request.timeout.ms = 305000
  retry.backoff.ms = 100
  sasl.jaas.config = null
  sasl.kerberos.kinit.cmd = /usr/bin/kinit
  sasl.kerberos.min.time.before.relogin = 60000
  sasl.kerberos.service.name = null
  sasl.kerberos.ticket.renew.jitter = 0.05
  sasl.kerberos.ticket.renew.window.factor = 0.8
  sasl.mechanism = GSSAPI
  security.protocol = PLAINTEXT
  send.buffer.bytes = 131072
  session.timeout.ms = 10000
  ssl.cipher.suites = null
  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
  ssl.endpoint.identification.algorithm = null
  ssl.key.password = null
  ssl.keymanager.algorithm = SunX509
  ssl.keystore.location = null
  ssl.keystore.password = null
  ssl.keystore.type = JKS
  ssl.protocol = TLS
  ssl.provider = null
  ssl.secure.random.implementation = null
  ssl.trustmanager.algorithm = PKIX
  ssl.truststore.location = null
  ssl.truststore.password = null
  ssl.truststore.type = JKS
  value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
   
  Aug 09, 2019 2:09:45 PM io.vertx.core.impl.BlockedThreadChecker
  WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 2895 ms, time limit is 2000
  Aug 09, 2019 2:09:46 PM io.vertx.core.impl.BlockedThreadChecker
  WARNING: Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 3895 ms, time limit is 2000

2019-08-09 14:10:37 INFO Consumer:80 - Received on topic=iot-temperature-max, partition=0, offset=1370, key=d-f5baf508-db86-49cb-8f3f-7d3bca454386, value=22

  | 2019-08-09 14:10:38 INFO Consumer:80 - Received on topic=iot-temperature-max, partition=0, offset=1371, key=d-f5baf508-db86-49cb-8f3f-7d3bca454386, value=22
  | 2019-08-09 14:10:39 INFO Consumer:80 - Received on topic=iot-temperature-max, partition=0, offset=1372, key=d-f5baf508-db86-49cb-8f3f-7d3bca454386, value=22

Build stream API app Docker image with RocksDB library

With the current openjdk:8-jre-alpine base image used to build the Streams API application, the RocksDB native library is not available, and the following exception is thrown while running the container on OpenShift:

Exception in thread "streams-temperature-f4cfb4e2-65f9-4bef-a952-8b117fa5f08a-StreamThread-1" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni783529562499980046.so: Error loading shared library libstdc++.so.6: No such file or directory (needed by /tmp/librocksdbjni783529562499980046.so)
	at java.lang.ClassLoader$NativeLibrary.load(Native Method)
	at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
	at java.lang.Runtime.load0(Runtime.java:809)
	at java.lang.System.load(System.java:1086)
	at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
	at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
	at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
	at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
	at org.rocksdb.Options.<clinit>(Options.java:25)
	at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:128)
	at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:40)
	at org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:89)
	at org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:81)
	at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:43)
	at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:34)
	at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:67)
	at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:33)
	at org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:100)
	at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:141)
	at org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:232)
	at org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:245)
	at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:153)
	at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:157)
	at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:36)
	at org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:96)
	at org.apache.kafka.streams.kstream.internals.KStreamWindowReduce$KStreamWindowReduceProcessor.process(KStreamWindowReduce.java:118)
	at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
	at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
	at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
	at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
	at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:216)
	at org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:403)
	at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:317)
	at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:942)
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:822)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
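A common fix for this on Alpine-based images is to install libstdc++, which RocksDB's JNI library links against and which the Alpine base image lacks. A minimal sketch of the Dockerfile change:

```dockerfile
FROM openjdk:8-jre-alpine
# RocksDB's native library (librocksdbjni*.so) needs libstdc++ at load time
RUN apk add --no-cache libstdc++
```

Alternatively, switching to a glibc-based base image (e.g. a Debian-slim JRE image) avoids the issue entirely.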

Support for SSL and SCRAM

Though it's old, it's a nice one. It seems only PLAINTEXT (port 9092) is supported; it would be great if SSL and SCRAM were supported as well.

THANKS 👍 in advance.

Add CRUD application running on MySQL as source for CDC events

I have an example CRUD application based on MySQL and running on WildFly which I use in demos.

We could use this as a source for ingesting CDC events with Debezium during the lab. Currently it's a "Hike Manager" application, but it's really just simple CRUD, so we could quite easily adapt it into something more related to the IoT scenario. Maybe something for managing device master data?

We could:

  • Create a Docker image with the CRUD app, e.g. based on the JBoss WildFly base image
  • Create a MySQL image with the correct binlog config (we even might use the Debezium example image for MySQL for that)
  • Let lab participants
    • set up instances of these images,
    • set up Kafka Connect with the Debezium MySql connector
    • deploy an instance of the connector, pointing to the database set up before
    • consume messages with the console consumer, seeing changes there as they alter data in the CRUD app
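The connector-deployment step above could be sketched with a config like the following, posted to the Kafka Connect REST API. All hostnames, credentials, and topic names here are placeholders, and the property names follow the Debezium MySQL connector of that era (e.g. `database.whitelist` rather than the later `database.include.list`):

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "my-cluster-kafka-bootstrap:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```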

That's the simplest demo I usually do with Debezium. Then, depending on how far we want to go, we could let the participants:

  • Set up another DB instance, e.g. Postgres
  • Set up an instance of the JDBC sink connector, pointing to the CDC topic set up before, and sinking events into Postgres
  • explore how the app keeps running even while Kafka Connect is down, and how changes show up in Postgres once it's back up

and the last step I do in my Debezium demos is to show how to consume CDC messages using WildFly Swarm and CDI and push them to another browser window using WebSockets. This might be too much for this lab, though, and lead too far away from the things we want to show.

@mbogoevici, @ppatierno, any thoughts on that? I think I'll prepare a PR with the CRUD app to be added to this repo in any case, as it seems a good first step.

Add S2I support for the iot-demo

As proposed by @mbogoevici, it could be useful to have S2I support in the iot-demo in order to build the device-app and stream-app Docker images in the OpenShift cluster directly from the GitHub repo.

AMQP KAFKA Connector

I am planning to pull data from Hono using a Strimzi Kafka (version 0.20.0) connector (the Camel AMQP source connector). I followed the steps below to read data from Hono.

I downloaded the Camel AMQP Kafka connector and the Qpid JMS jar files from the links below:

https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-amqp-kafka-connector/0.7.0/camel-amqp-kafka-connector-0.7.0-package.tar.gz
https://downloads.apache.org/qpid/jms/0.51.0/apache-qpid-jms-0.51.0-bin.tar.gz

After downloading and unpacking the above tarballs, I created a Docker image with the following Dockerfile and commands:

cat <<'EOF' > Dockerfile
FROM strimzi/kafka:0.20.1-kafka-2.6.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/camel
COPY ./camel-activemq-kafka-connector/* /opt/kafka/plugins/camel/
USER 1001
EOF

docker build -f ./Dockerfile -t localhost:5000/my-connector-amqp_new .
docker push localhost:5000/my-connector-amqp_new

Using the manifest below I created the KafkaConnect resource (using my local image created above):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  image: 10.128.0.6:5000/my-connector-amqp_new
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    config.storage.replication.factor: 1
    offset.storage.replication.factor: 1
    status.storage.replication.factor: 1

I was referring to the following link to create the KafkaConnector properties:

https://github.com/apache/camel-kafka-connector/blob/master/examples/CamelAmqpSourceConnector.properties

The values in this file are as follows:

name=CamelAmqpSourceConnector
topics=mytopic
tasks.max=1
connector.class=org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnector

camel.source.path.destinationType=queue
camel.source.path.destinationName=test-queue

camel.component.amqp.includeAmqpAnnotations=true
camel.component.amqp.connectionFactory=#class:org.apache.qpid.jms.JmsConnectionFactory
camel.component.amqp.connectionFactory.remoteURI=amqp://localhost:5672
camel.component.amqp.username=admin
camel.component.amqp.password=admin
camel.component.amqp.testConnectionOnStartup=true

I am using the configuration below. Could you suggest the correct value for this property?

camel.component.amqp.connectionFactory

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: camelamqpsourceconnector
  labels:
    strimzi.io/cluster: my-connect-cluster-new
spec:
  class: org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnector
  tasksMax: 1
  config:
    camel.component.amqp.includeAmqpAnnotations: true
    camel.component.amqp.connectionFactory: org.apache.qpid.jms.JmsConnectionFactory
    camel.component.amqp.connectionFactory.remoteURI: amqp://$( kubectl get service eclipse-hono-dispatch-router-ext --output="jsonpath={.spec.clusterIP}" -n hono):15671
    camel.component.amqp.username: consumer@HONO
    camel.component.amqp.password: verysecret
    camel.component.amqp.testConnectionOnStartup: true
    camel.source.kafka.topic: mytopic
    camel.source.path.destinationType: queue
    camel.source.path.destinationName: test-queue

I am getting the error messages below.

Could you help me with what values we need to give for the following properties?

camel.component.amqp.connectionFactory: org.apache.qpid.jms.JmsConnectionFactory
camel.component.amqp.connectionFactory.remoteURI: amqp://$( kubectl get service eclipse-hono-dispatch-router-ext --output="jsonpath={.spec.clusterIP}" -n hono):15671

Error message:

2020-12-26 12:50:54,474 INFO Creating task camelamqpsourceconnector-0 (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,479 INFO ConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = camelamqpsourceconnector
predicates = []
tasks.max = 1
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.ConnectorConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,480 INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = camelamqpsourceconnector
predicates = []
tasks.max = 1
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,481 INFO TaskConfig values:
task.class = class org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceTask
(org.apache.kafka.connect.runtime.TaskConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,481 INFO Instantiated task camelamqpsourceconnector-0 with version 0.7.0 of type org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceTask (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,482 INFO JsonConverterConfig values:
converter.type = key
decimal.format = BASE64
schemas.cache.size = 1000
schemas.enable = true
(org.apache.kafka.connect.json.JsonConverterConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,483 INFO Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task camelamqpsourceconnector-0 using the worker config (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,485 INFO JsonConverterConfig values:
converter.type = value
decimal.format = BASE64
schemas.cache.size = 1000
schemas.enable = true
(org.apache.kafka.connect.json.JsonConverterConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,486 INFO Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task camelamqpsourceconnector-0 using the worker config (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,486 INFO Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task camelamqpsourceconnector-0 using the worker config (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,492 INFO SourceConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = camelamqpsourceconnector
predicates = []
tasks.max = 1
topic.creation.groups = []
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.SourceConnectorConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,493 INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = null
name = camelamqpsourceconnector
predicates = []
tasks.max = 1
topic.creation.groups = []
transforms = []
value.converter = null
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,497 INFO Initializing: org.apache.kafka.connect.runtime.TransformationChain{} (org.apache.kafka.connect.runtime.Worker) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,500 INFO ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [my-cluster-kafka-bootstrap:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = connector-producer-camelamqpsourceconnector-0
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 2147483647
enable.idempotence = false
interceptor.classes = []
internal.auto.downgrade.txn.commit = false
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 9223372036854775807
max.in.flight.requests.per.connection = 1
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 2147483647
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
(org.apache.kafka.clients.producer.ProducerConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,517 WARN The configuration 'metrics.context.connect.group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,518 WARN The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,518 INFO Kafka version: 2.6.0 (org.apache.kafka.common.utils.AppInfoParser) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,518 INFO Kafka commitId: 62abe01bee039651 (org.apache.kafka.common.utils.AppInfoParser) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,518 INFO Kafka startTimeMs: 1608987054518 (org.apache.kafka.common.utils.AppInfoParser) [StartAndStopExecutor-connect-1-2]
2020-12-26 12:50:54,543 INFO [Worker clientId=connect-1, groupId=connect-cluster-new] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder) [DistributedHerder-connect-1-1]
2020-12-26 12:50:54,600 INFO Starting CamelSourceTask connector task (org.apache.camel.kafkaconnector.CamelSourceTask) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:54,655 INFO [Producer clientId=connector-producer-camelamqpsourceconnector-0] Cluster ID: AxvFw7iiSDCEUx-RwUy0gw (org.apache.kafka.clients.Metadata) [kafka-producer-network-thread | connector-producer-camelamqpsourceconnector-0]
2020-12-26 12:50:54,660 INFO CamelAmqpSourceConnectorConfig values:
camel.aggregation.size = 10
camel.aggregation.timeout = 500
camel.beans.aggregate = null
camel.component.amqp.acceptMessagesWhileStopping = false
camel.component.amqp.acknowledgementModeName = AUTO_ACKNOWLEDGE
camel.component.amqp.allowAutoWiredConnectionFactory = true
camel.component.amqp.allowAutoWiredDestinationResolver = true
camel.component.amqp.allowReplyManagerQuickStop = false
camel.component.amqp.allowSerializedHeaders = false
camel.component.amqp.artemisStreamingEnabled = true
camel.component.amqp.asyncConsumer = false
camel.component.amqp.asyncStartListener = false
camel.component.amqp.asyncStopListener = false
camel.component.amqp.autoStartup = true
camel.component.amqp.autowiredEnabled = true
camel.component.amqp.cacheLevel = null
camel.component.amqp.cacheLevelName = CACHE_AUTO
camel.component.amqp.clientId = null
camel.component.amqp.concurrentConsumers = 1
camel.component.amqp.configuration = null
camel.component.amqp.connectionFactory = org.apache.qpid.jms.JmsConnectionFactory
camel.component.amqp.consumerType = Default
camel.component.amqp.defaultTaskExecutorType = null
camel.component.amqp.destinationResolver = null
camel.component.amqp.disableReplyTo = false
camel.component.amqp.durableSubscriptionName = null
camel.component.amqp.eagerLoadingOfProperties = false
camel.component.amqp.eagerPoisonBody = Poison JMS message due to ${exception.message}
camel.component.amqp.errorHandler = null
camel.component.amqp.errorHandlerLogStackTrace = true
camel.component.amqp.errorHandlerLoggingLevel = WARN
camel.component.amqp.exceptionListener = null
camel.component.amqp.exposeListenerSession = false
camel.component.amqp.headerFilterStrategy = null
camel.component.amqp.idleConsumerLimit = 1
camel.component.amqp.idleTaskExecutionLimit = 1
camel.component.amqp.includeAllJMSXProperties = false
camel.component.amqp.includeAmqpAnnotations = true
camel.component.amqp.jmsKeyFormatStrategy = null
camel.component.amqp.jmsMessageType = null
camel.component.amqp.lazyCreateTransactionManager = true
camel.component.amqp.mapJmsMessage = true
camel.component.amqp.maxConcurrentConsumers = null
camel.component.amqp.maxMessagesPerTask = -1
camel.component.amqp.messageConverter = null
camel.component.amqp.messageCreatedStrategy = null
camel.component.amqp.messageIdEnabled = true
camel.component.amqp.messageListenerContainerFactory = null
camel.component.amqp.messageTimestampEnabled = true
camel.component.amqp.password = verysecret
camel.component.amqp.pubSubNoLocal = false
camel.component.amqp.queueBrowseStrategy = null
camel.component.amqp.receiveTimeout = 1000
camel.component.amqp.recoveryInterval = 5000
camel.component.amqp.replyTo = null
camel.component.amqp.replyToDeliveryPersistent = true
camel.component.amqp.replyToSameDestinationAllowed = false
camel.component.amqp.requestTimeoutCheckerInterval = 1000
camel.component.amqp.selector = null
camel.component.amqp.subscriptionDurable = false
camel.component.amqp.subscriptionName = null
camel.component.amqp.subscriptionShared = false
camel.component.amqp.taskExecutor = null
camel.component.amqp.testConnectionOnStartup = true
camel.component.amqp.transacted = false
camel.component.amqp.transactedInOut = false
camel.component.amqp.transactionManager = null
camel.component.amqp.transactionName = null
camel.component.amqp.transactionTimeout = -1
camel.component.amqp.transferException = false
camel.component.amqp.transferExchange = false
camel.component.amqp.useMessageIDAsCorrelationID = false
camel.component.amqp.username = consumer@HONO
camel.component.amqp.waitForProvisionCorrelationToBeUpdatedCounter = 50
camel.component.amqp.waitForProvisionCorrelationToBeUpdatedThreadSleepingTime = 100
camel.error.handler = default
camel.error.handler.max.redeliveries = 0
camel.error.handler.redelivery.delay = 1000
camel.idempotency.enabled = false
camel.idempotency.expression.header = null
camel.idempotency.expression.type = body
camel.idempotency.kafka.bootstrap.servers = localhost:9092
camel.idempotency.kafka.max.cache.size = 1000
camel.idempotency.kafka.poll.duration.ms = 100
camel.idempotency.kafka.topic = kafka_idempotent_repository
camel.idempotency.memory.dimension = 100
camel.idempotency.repository.type = memory
camel.remove.headers.pattern =
camel.source.camelMessageHeaderKey = null
camel.source.component = amqp
camel.source.contentLogLevel = OFF
camel.source.endpoint.acceptMessagesWhileStopping = false
camel.source.endpoint.acknowledgementModeName = AUTO_ACKNOWLEDGE
camel.source.endpoint.allowReplyManagerQuickStop = false
camel.source.endpoint.allowSerializedHeaders = false
camel.source.endpoint.artemisStreamingEnabled = true
camel.source.endpoint.asyncConsumer = false
camel.source.endpoint.asyncStartListener = false
camel.source.endpoint.asyncStopListener = false
camel.source.endpoint.autoStartup = true
camel.source.endpoint.cacheLevel = null
camel.source.endpoint.cacheLevelName = CACHE_AUTO
camel.source.endpoint.clientId = null
camel.source.endpoint.concurrentConsumers = 1
camel.source.endpoint.connectionFactory = null
camel.source.endpoint.consumerType = Default
camel.source.endpoint.defaultTaskExecutorType = null
camel.source.endpoint.destinationResolver = null
camel.source.endpoint.disableReplyTo = false
camel.source.endpoint.durableSubscriptionName = null
camel.source.endpoint.eagerLoadingOfProperties = false
camel.source.endpoint.eagerPoisonBody = Poison JMS message due to ${exception.message}
camel.source.endpoint.errorHandler = null
camel.source.endpoint.errorHandlerLogStackTrace = true
camel.source.endpoint.errorHandlerLoggingLevel = WARN
camel.source.endpoint.exceptionHandler = null
camel.source.endpoint.exceptionListener = null
camel.source.endpoint.exchangePattern = null
camel.source.endpoint.exposeListenerSession = false
camel.source.endpoint.headerFilterStrategy = null
camel.source.endpoint.idleConsumerLimit = 1
camel.source.endpoint.idleTaskExecutionLimit = 1
camel.source.endpoint.includeAllJMSXProperties = false
camel.source.endpoint.jmsKeyFormatStrategy = null
camel.source.endpoint.jmsMessageType = null
camel.source.endpoint.lazyCreateTransactionManager = true
camel.source.endpoint.mapJmsMessage = true
camel.source.endpoint.maxConcurrentConsumers = null
camel.source.endpoint.maxMessagesPerTask = -1
camel.source.endpoint.messageConverter = null
camel.source.endpoint.messageCreatedStrategy = null
camel.source.endpoint.messageIdEnabled = true
camel.source.endpoint.messageListenerContainerFactory = null
camel.source.endpoint.messageTimestampEnabled = true
camel.source.endpoint.password = null
camel.source.endpoint.pubSubNoLocal = false
camel.source.endpoint.receiveTimeout = 1000
camel.source.endpoint.recoveryInterval = 5000
camel.source.endpoint.replyTo = null
camel.source.endpoint.replyToDeliveryPersistent = true
camel.source.endpoint.replyToSameDestinationAllowed = false
camel.source.endpoint.requestTimeoutCheckerInterval = 1000
camel.source.endpoint.selector = null
camel.source.endpoint.subscriptionDurable = false
camel.source.endpoint.subscriptionName = null
camel.source.endpoint.subscriptionShared = false
camel.source.endpoint.synchronous = false
camel.source.endpoint.taskExecutor = null
camel.source.endpoint.testConnectionOnStartup = false
camel.source.endpoint.transacted = false
camel.source.endpoint.transactedInOut = false
camel.source.endpoint.transactionManager = null
camel.source.endpoint.transactionName = null
camel.source.endpoint.transactionTimeout = -1
camel.source.endpoint.transferException = false
camel.source.endpoint.transferExchange = false
camel.source.endpoint.useMessageIDAsCorrelationID = false
camel.source.endpoint.username = null
camel.source.endpoint.waitForProvisionCorrelationToBeUpdatedCounter = 50
camel.source.endpoint.waitForProvisionCorrelationToBeUpdatedThreadSleepingTime = 100
camel.source.marshal = null
camel.source.maxBatchPollSize = 1000
camel.source.maxPollDuration = 1000
camel.source.path.destinationName = test-queue
camel.source.path.destinationType = queue
camel.source.pollingConsumerBlockTimeout = 0
camel.source.pollingConsumerBlockWhenFull = true
camel.source.pollingConsumerQueueSize = 1000
camel.source.unmarshal = null
camel.source.url = null
topics = test
(org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnectorConfig) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:54,905 INFO Setting initial properties in Camel context: [{camel.source.path.destinationName=test-queue, connector.class=org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceConnector, camel.component.amqp.username=consumer@HONO, tasks.max=1, camel.component.amqp.connectionFactory=org.apache.qpid.jms.JmsConnectionFactory, camel.source.component=amqp, camel.component.amqp.testConnectionOnStartup=true, camel.component.amqp.connectionFactory.remoteURI=amqp://$( kubectl get service eclipse-hono-dispatch-router-ext --output="jsonpath={.spec.clusterIP}" -n hono):15671, camel.component.amqp.password=verysecret, camel.component.amqp.includeAmqpAnnotations=true, camel.source.kafka.topic=mytopic, camel.source.path.destinationType=queue, task.class=org.apache.camel.kafkaconnector.amqp.CamelAmqpSourceTask, name=camelamqpsourceconnector}] (org.apache.camel.kafkaconnector.utils.CamelKafkaConnectMain) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:54,985 INFO Using properties from: classpath:application.properties;optional=true (org.apache.camel.main.BaseMainSupport) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:55,175 INFO WorkerSourceTask{id=camelamqpsourceconnector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:55,175 INFO WorkerSourceTask{id=camelamqpsourceconnector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:55,176 ERROR WorkerSourceTask{id=camelamqpsourceconnector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask) [task-thread-camelamqpsourceconnector-0]
org.apache.kafka.connect.errors.ConnectException: Failed to create and start Camel context
at org.apache.camel.kafkaconnector.CamelSourceTask.start(CamelSourceTask.java:144)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalArgumentException: Cannot find getter method: connectionFactory on bean: class org.apache.camel.component.amqp.AMQPComponent when binding property: connectionFactory.remoteURI
at org.apache.camel.support.PropertyBindingSupport.doBuildPropertyOgnlPath(PropertyBindingSupport.java:282)
at org.apache.camel.support.PropertyBindingSupport.doBindProperties(PropertyBindingSupport.java:210)
at org.apache.camel.support.PropertyBindingSupport.access$100(PropertyBindingSupport.java:88)
at org.apache.camel.support.PropertyBindingSupport$Builder.bind(PropertyBindingSupport.java:1785)
at org.apache.camel.main.MainHelper.setPropertiesOnTarget(MainHelper.java:163)
at org.apache.camel.main.BaseMainSupport.autoConfigurationFromProperties(BaseMainSupport.java:1133)
at org.apache.camel.main.BaseMainSupport.autoconfigure(BaseMainSupport.java:424)
at org.apache.camel.main.BaseMainSupport.postProcessCamelContext(BaseMainSupport.java:472)
at org.apache.camel.main.SimpleMain.doInit(SimpleMain.java:32)
at org.apache.camel.support.service.BaseService.init(BaseService.java:83)
at org.apache.camel.support.service.BaseService.start(BaseService.java:111)
at org.apache.camel.kafkaconnector.CamelSourceTask.start(CamelSourceTask.java:141)
... 8 more
2020-12-26 12:50:55,176 ERROR WorkerSourceTask{id=camelamqpsourceconnector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:55,177 INFO Stopping CamelSourceTask connector task (org.apache.camel.kafkaconnector.CamelSourceTask) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:55,177 INFO CamelSourceTask connector task stopped (org.apache.camel.kafkaconnector.CamelSourceTask) [task-thread-camelamqpsourceconnector-0]
2020-12-26 12:50:55,177 INFO [Producer clientId=connector-producer-camelamqpsourceconnector-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer) [task-thread-camelamqpsourceconnector-0]
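Two things in the log above stand out. First, the startup properties (logged at 12:50:54,905) contain `camel.component.amqp.connectionFactory=org.apache.qpid.jms.JmsConnectionFactory` as a plain string, and the stack trace shows Camel then trying to resolve the nested path `connectionFactory.remoteURI` via a getter on `AMQPComponent`, which fails. Second, the `remoteURI` value still contains the literal, unexpanded shell command `$( kubectl get service ... )`; Kafka Connect does not evaluate shell substitutions, so that command must be expanded into a concrete address before the connector config is submitted. A possible workaround, sketched here under the assumption that Camel's `#class:` bean-creation prefix applies in this connector's property binding, is to let Camel instantiate the connection factory itself so the nested property is bound on that new bean rather than looked up on the component (`<dispatch-router-ip>` is a placeholder you must fill in):

```properties
# Hypothetical corrected connector properties (a sketch, not a verified fix).
# "#class:" asks Camel to instantiate JmsConnectionFactory and bind the nested
# remoteURI property on that instance, avoiding the failed getter lookup on
# AMQPComponent seen in the stack trace above.
camel.component.amqp.connectionFactory=#class:org.apache.qpid.jms.JmsConnectionFactory
# Must be a concrete address; Kafka Connect will not expand $( kubectl ... ).
camel.component.amqp.connectionFactory.remoteURI=amqp://<dispatch-router-ip>:15671
camel.component.amqp.username=consumer@HONO
camel.component.amqp.password=verysecret
camel.component.amqp.testConnectionOnStartup=true
```

Note that because `testConnectionOnStartup=true` is set, an unreachable or misconfigured `remoteURI` will also surface as a startup failure rather than at first message receipt.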
