
camel-kafka-connector-examples's Introduction

This is the camel-kafka-connector examples repository.

List of existing examples:

  • AMQP source and sink examples

  • ArangoDB sink example

  • AWS-S3 to JMS example

  • AWS2-IAM sink multiple examples

  • AWS2-lambda sink example

  • AWS2-Kinesis-firehose sink example

  • AWS2-Kinesis source and sink example

  • AWS2-KMS sink examples

  • AWS2-S3 source and sink examples

  • AWS2-SNS sink example

  • AWS2-SQS source and sink examples

  • Braintree sink example

  • Couchbase sink example

  • CQL source and sink examples

  • Cron source example

  • Docker source and sink examples

  • Exec sink example

  • File source and sink examples

  • FTP source and sink examples

  • FTPS source and sink examples

  • Git source multiple examples

  • Github source multiple examples

  • Google Calendar Stream source example

  • Google Mail Stream source example

  • Google PubSub source and sink examples

  • Google Sheets Stream source examples

  • Infinispan source and sink examples

  • JDBC sink example

  • MinIO source and sink examples

  • Nats source and sink examples

  • NSQ source and sink examples

  • PGEvent source example

  • Quartz source example

  • RabbitMQ source and sink examples

  • SCP sink example

  • SFTP source and sink examples

  • Slack source, sink and apicurio registry example

  • SSH source and sink examples

  • SQL source and sink examples

  • Telegram source and sink examples

  • Twitter Direct message source and sink examples

  • Twitter Search source example

  • Twitter Timeline source and sink examples

camel-kafka-connector-examples's People

Contributors

1984shekhar, asmigala, avano, luigidemasi, orpiske, oscerd, rgannu, tadayosi, valdar


camel-kafka-connector-examples's Issues

Split Examples into 0.11.x and 1.0.x

Since we have two LTS release trains now, we should have two folders supporting the two different releases (a sketch of the two styles follows the list):

  • 0.11.x with the old approach
  • 1.0.x with the kamelet approach (where we have Kamelets for connectors)
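For illustration, a minimal sketch of how the same kind of option is written in each style (patterns taken from the examples quoted later on this page; the exact option names depend on the connector):

# 0.11.x style: raw Camel endpoint options
camel.sink.endpoint.spreadsheetId=mySpreadsheetId

# 1.0.x style: Kamelet parameters
camel.kamelet.cassandra-sink.connectionHost=10.66.16.10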

Examples for websocket kafka connector (as a client)

Hello

I am trying to use the websocket source from here. My problem is that I am not getting any error: even if I provide a non-existent web socket endpoint like wss://stream.binance.com:9443/ws/randomusdt@kline_1m, the connector does not fail or throw any errors. It simply says "Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors." And I get the same message when I provide a genuine endpoint like wss://stream.binance.com:9443/ws/btcusdt@kline_1m.

Can someone please help or suggest something? I am completely stuck.

The version of the connector I am using is 0.6.1.

Camel AWS S3 source connector - Error org.reflections.ReflectionsException: Scanner SubTypesScanner was not configured

While running the Kafka Connect - AWS S3 Source Connector

connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties /opt/client/Kafka/kafka/config/camel-properties/docs/examples/CamelAwss3sourceSourceConnector.properties

[2022-11-16 13:37:19,764] INFO Loading plugin from: /opt/client/Kafka/kafka/config/camel-properties/camel-aws-s3-source-kafka-connector/kotlin-stdlib-common-1.3.20.jar (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:241)
[2022-11-16 13:37:19,765] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130)
org.reflections.ReflectionsException: Scanner SubTypesScanner was not configured
at org.reflections.Store.get(Store.java:39)
at org.reflections.Store.get(Store.java:61)
at org.reflections.Store.get(Store.java:46)
at org.reflections.Store.getAll(Store.java:93)
at org.reflections.Reflections.getSubTypesOf(Reflections.java:404)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:345)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:330)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:263)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:255)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:224)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:201)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:79)

Add HTTP Connector Example

I'm having a lot of trouble trying to figure out how to use the HTTP connector with an endpoint which needs a bearer token in the Authorization header. Please add some HTTP examples using auth tokens; a rough starting point is sketched below.
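A minimal sketch, assuming the camel-http sink connector from the catalog; the class and option names should be checked against your connector version, and the options shown only cover basic auth, so demonstrating a bearer token in the Authorization header is exactly what the requested example would need to add:

name=CamelHttpSinkConnector
connector.class=org.apache.camel.kafkaconnector.http.CamelHttpSinkConnector
tasks.max=1
topics=mytopic
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
camel.sink.path.httpUri=https://api.example.com/ingest
camel.sink.endpoint.httpMethod=POST
# Camel HTTP endpoint auth options (basic auth); a static bearer token would
# instead have to be injected as an Authorization header
camel.sink.endpoint.authMethod=Basic
camel.sink.endpoint.authUsername=myuser
camel.sink.endpoint.authPassword=mypassword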

Fhir Source example is using 2 properties not part of the catalog

I haven't found anything related. Should they be removed from the example, or are they missing from the catalog?

Should it be camel.source.endpoint.serverUrl?

Docker examples contain 3 parameters that are not part of the catalog

Example for minio with ByteArrayConverter

Can you please add an example for the minio connector with ByteArrayConverter?

I'm trying to move an image object from minio to kafka, but it looks like the minio connector is consuming the object as 'STRING' and ByteArrayConverter is not able to process it. (A sketch of a starting point follows.)
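A possible starting configuration, assuming the connector class follows the usual catalog naming and that the task hands the object content to Connect as bytes (property names are indicative, not verified against the catalog):

name=CamelMinioSourceConnector
connector.class=org.apache.camel.kafkaconnector.minio.CamelMinioSourceConnector
tasks.max=1
topics=minio.objects
key.converter=org.apache.kafka.connect.storage.StringConverter
# hand raw bytes through unchanged
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
camel.source.path.bucketName=mybucket
camel.source.endpoint.endpoint=http://minio.example.com:9000
camel.source.endpoint.accessKey=minioadmin
camel.source.endpoint.secretKey=minioadmin

If the task itself produces String payloads (as reported above), the converter alone will not fix it; the example would need to show how to keep the payload as bytes end to end.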

Kamelet with id cassandra-sink not found in locations: classpath:/kamelets

First, thank you so much for creating these excellent Camel libraries.

Second, this may not be the right place to put this question. I don't have an issue with the connector. My inexperience is the real issue.

Background: I want to sink a JSON message from a Kafka (open-source) topic to a Cassandra (open-source) table with Kafka Connect.

Kafka
Version: 2.13-3.0.0 (Scala: 2.13, Kafka: 3.0.0)
IP/Ports: 10.188.5.86:9092,10.188.5.86:9093,10.188.5.86:9094
Topic: pie (./bin/kafka-topics.sh --bootstrap-server 10.188.5.86:9092,10.188.5.86:9093,10.188.5.86:9094 --create --replication-factor 1 --partitions 1 --topic pie)

JSON
Simple Pie Data:
{"Type:":"Apple","Invented":1381}
{"Type:":"Pecan","Invented":1870}
{"Type:":"Cherry","Invented":1500}

Cassandra
Version:4.0.3
IP:10.66.16.10
Port:9042
User:cameldevloader
Pass:newpassword

USE dev;
CREATE TABLE pie(type varchar, invented double, PRIMARY KEY (type));
INSERT INTO pie (type, invented) VALUES ('Chess',1750);
SELECT * FROM pie;
yields: Chess 1750.0

Kafka Connect

Using this Camel connector:
https://camel.apache.org/camel-kafka-connector/1.0.x/reference/connectors/camel-cassandra-sink-kafka-sink-connector.html

Downloaded from:
https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-cassandra-sink-kafka-connector/1.0.0/camel-cassandra-sink-kafka-connector-1.0.0-package.tar.gz
copied the camel-cassandra-sink-kafka-connector-1.0.0-package.tar.gz to the Kafka server at location /root/software/camel-cassandra-sink-kafka-connector-1.0.0-package.tar.gz

on my Kafka server, in the /root/connectors/ directory, untarred:
~/connectors# tar -xvzf ../software/camel-cassandra-sink-kafka-connector-1.0.0-package.tar.gz

Updated /root/kafka/config/connect-standalone.properties
######################################################
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=10.188.5.86:9092,10.188.5.86:9093,10.188.5.86:9094

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true

offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
plugin.path=/root/connectors/
######################################################

Created (from docs/examples) /root/myconnectorproperties/CamelCassandraPieSink.properties
######################################################
## ---------------------------------------------------------------------------
## Licensed to the Apache Software Foundation (ASF) under one or more
## contributor license agreements. See the NOTICE file distributed with
## this work for additional information regarding copyright ownership.
## The ASF licenses this file to You under the Apache License, Version 2.0
## (the "License"); you may not use this file except in compliance with
## the License. You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
## ---------------------------------------------------------------------------

name=CamelCassandra-sinkSinkConnector
connector.class=org.apache.camel.kafkaconnector.cassandrasink.CamelCassandrasinkSinkConnector
tasks.max=1

# use the kafka converters that better suit your needs, these are just defaults:
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter

# comma separated topics to get messages from
topics=pie

# mandatory properties (for a complete properties list see the connector documentation):

# Hostname(s) cassandra server(s). Multiple hosts can be separated by comma. Example: localhost
camel.kamelet.cassandra-sink.connectionHost=10.66.16.10
# Port number of cassandra server(s) Example: 9042
camel.kamelet.cassandra-sink.connectionPort=9042
# user
camel.kamelet.cassandra-sink.username=cameldevloader
# password
camel.kamelet.cassandra-sink.password=newpassword
# Keyspace to use Example: customers
camel.kamelet.cassandra-sink.keyspace=dev
# The query to execute against the Cassandra cluster table
camel.kamelet.cassandra-sink.query=INSERT INTO pie (type, invented) VALUES (?,?)
######################################################

Connect command:
/kafka # ./bin/connect-standalone.sh /root/kafka/config/connect-standalone.properties /root/myconnectorproperties/CamelCassandraPieSink.properties

Error: Kamelet with id cassandra-sink not found in locations: classpath:/kamelets

Full log attached: pie_result.txt

Add CamelGooglesheetsSinkConnector example

I am using the Google Sheets sink connector of camel-kafka-connector version 0.11.x, deployed in Kubernetes via the KafkaConnector kind with the configuration below. I am following https://camel.apache.org/camel-kafka-connector/0.11.x/reference/connectors/camel-google-sheets-kafka-sink-connector.html to set it up but am facing an error, which is also posted below. Please add an example of how to use the Google Sheets sink in Kafka Connect. Any help would be appreciated.

kind: KafkaConnector
apiVersion: kafka.strimzi.io/v1beta2
metadata:
  name: google-sheet-sink
  labels:
    # must match connect cluster name
    strimzi.io/cluster: my-connect
spec:
  tasksMax: 1
  class: org.apache.camel.kafkaconnector.googlesheets.CamelGooglesheetsSinkConnector
  config:
    topics: my-topic
    camel.sink.path.apiName: data
    camel.sink.path.methodName: append
    camel.sink.endpoint.applicationName: kafka-connect
    camel.sink.endpoint.clientId: *****
    camel.sink.endpoint.clientSecret: *****
    camel.sink.endpoint.accessToken: *****
    camel.sink.endpoint.refreshToken: *****
    camel.sink.endpoint.spreadsheetId: *****
    camel.component.google-sheets-stream.range: A1:B2

Error:
ERROR Failed delivery for (MessageId: E91EB7DEADAF674-0000000000000000 on ExchangeId: E91EB7DEADAF674-0000000000000000). Exhausted after delivery attempt: 1 caught: org.apache.camel.RuntimeCamelException: Missing properties for append, need one or more from [values, range]

Message History (complete message history is disabled)

RouteId ProcessorId Processor Elapsed (ms)
[route10 ] [route10 ] [ ] [ 5]
...
[route10 ] [toD10 ] [google-sheets:data/append?accessToken=xxxxxx ] [ 0]

Stacktrace

(org.apache.camel.processor.errorhandler.DefaultErrorHandler) [task-thread-google-sheet-sink-0]
org.apache.camel.RuntimeCamelException: Missing properties for append, need one or more from [values, range]
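Worth noting: the configuration above sets the range with the camel.component.google-sheets-stream prefix, while this connector wraps the google-sheets component, so that property is most likely ignored. A possible fix, assuming the append method's values/range arguments map to sink endpoint options as the error message suggests (unverified):

camel.sink.endpoint.range=A1:B2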

Source Sftp connector

Hello

I am getting a message.max.bytes error when trying to read log files (plain text) of 300MB.
As far as I know, the connector moves the whole file in one record to Kafka.
Is there any option to split the file into mini-batches and have an atomic transaction, so that if the connector fails in the middle of processing, it reprocesses only from where it left off?

Regards

camel-tar.gzfile Not Found

In the aws2s3 example's README.adoc, the camel-zipfile artifact was updated to

...
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-tar.gzfile</artifactId>
      <version>3.4.2</version>
    </dependency>
...

Was this a mistake? It may have been an error when replacing the .zip target with .tar.gz. It appears to work with CKC version 0.9.0 and

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-zipfile</artifactId>
      <version>3.5.0</version>
    </dependency>

Minio-Connector example with deleteAfterRead, retrieve only once and consume large message size.

Hi,

I'm trying to replicate objects in minio from one datacenter to another via Kafka, using the minio source and sink connectors.

While replicating, I do not want the object to be deleted from the source minio (deleteAfterRead=false), and I also don't want the object to be retrieved again and again on every poll. Can we have an example configuration to achieve this? (A rough sketch follows.)

Also, if the object is 20MB in size, is there a way to split it into multiple 1MB messages before retrieval, or do I have to increase max.request.size to retrieve it?
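A rough source-side sketch, assuming the minio connector exposes the Camel minio endpoint options (class and option names are indicative, not verified against the catalog); preventing re-retrieval without deletion would additionally need something like an idempotent repository, which is exactly what the requested example should show:

connector.class=org.apache.camel.kafkaconnector.minio.CamelMinioSourceConnector
camel.source.path.bucketName=mybucket
# keep the object in the source bucket
camel.source.endpoint.deleteAfterRead=false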

Add Mongodb examples

Sample configuration for the MongoDB sink:

connector.class=org.apache.camel.kafkaconnector.mongodb.CamelMongodbSinkConnector
camel.sink.endpoint.operation=insert
camel.sink.endpoint.collection=testRecords
tasks.max=1
topics=org.apache.camel.kafkaconnector.mongodb.sink.CamelSinkMongoDBITCase
name=CamelMongoDBSinkConnector
value.converter=org.apache.kafka.connect.storage.StringConverter
camel.sink.path.connectionBean=mongo
camel.beans.mongo=#class:com.mongodb.client.MongoClients#create('mongodb://localhost:32772')
camel.sink.endpoint.database=testDB
key.converter=org.apache.kafka.connect.storage.StringConverter
An important bit for the MongoDB sink is to configure the camel.beans.mongo property correctly. In this case, it sets the bean to be created by calling the create method with the URL of the MongoDB instance (mongodb://localhost:32772). This should be configured according to your setup.
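For instance, pointing at a secured instance only changes the connection string (host and credentials here are placeholders):

camel.beans.mongo=#class:com.mongodb.client.MongoClients#create('mongodb://myuser:mypassword@mongo.example.com:27017')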

FHIR Source Connector config fails to resolve endpoint

I want to add a source connector for polling Bundles from a FHIR API. Therefore I put together the following config:

{
  "name": "my-fhir-source-connector",
  "config": {
      "connector.class": "org.apache.camel.kafkaconnector.fhir.CamelFhirSourceConnector",
      "tasks.max": 1,
      "topic": "fhir.input",
      "camel.source.path.apiName": "OPERATION",
      "camel.source.path.methodName": "$customOperation",
      "camel.source.endpoint.serverUrl": "https://url-of-fhir-api-com/fhir",
      "camel.source.endpoint.delay": 60000,
      "camel.source.endpoint.password": "password",
      "camel.source.endpoint.username": "user",
      "camel.component.fhir.fhirVersion": "R4"
  }
}

(This results in the error message listed at the end.)

The $customOperation returns a Collection Bundle.
Manually I am able to query this bundle by doing a POST on:
https://url-of-fhir-api-com/fhir/$customOperation
with a request body:

<Parameters xmlns="http://hl7.org/fhir">
    <parameter>
        <name value="domain" />
        <valueString value="customDomain" />
    </parameter>
</Parameters>
  1. But I see no possibility to add a request body via the connector config. I played around with camel.source.endpoint.inBody but could not get any result. How can I add the parameters/body?

  2. Another question is whether the endpoint URL is correct, because it seems the connector constructs the URL as
    https://url-of-fhir-api-com/fhir/operation/$customOperation
    instead of
    https://url-of-fhir-api-com/fhir/$customOperation
    Is this also a problem?

  3. Also, the log does not show the full URL it is accessing. Does it ignore the serverUrl property?

The error log:

org.apache.kafka.connect.errors.ConnectException: Failed to create and start Camel context
	at org.apache.camel.kafkaconnector.CamelSourceTask.start(CamelSourceTask.java:175)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.initializeAndStart(WorkerSourceTask.java:225)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route route1: Route(route1)[From[fhir:operation/$customOperation?dela... because of Failed to resolve endpoint: fhir://operation/$customOperation?delay=60000&password=xxxxxx due to: No matching method for operation/$customOperation, with arguments []
	at org.apache.camel.reifier.RouteReifier.createRoute(RouteReifier.java:80)
	at org.apache.camel.impl.DefaultModelReifierFactory.createRoute(DefaultModelReifierFactory.java:49)
	at org.apache.camel.impl.DefaultCamelContext.startRouteDefinitions(DefaultCamelContext.java:826)
	at org.apache.camel.impl.DefaultCamelContext.startRouteDefinitions(DefaultCamelContext.java:716)
	at org.apache.camel.impl.engine.AbstractCamelContext.doInit(AbstractCamelContext.java:2757)
	at org.apache.camel.support.service.BaseService.init(BaseService.java:83)
	at org.apache.camel.impl.engine.AbstractCamelContext.init(AbstractCamelContext.java:2475)
	at org.apache.camel.support.service.BaseService.start(BaseService.java:111)
	at org.apache.camel.impl.engine.AbstractCamelContext.start(AbstractCamelContext.java:2494)
	at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:245)
	at org.apache.camel.main.SimpleMain.doStart(SimpleMain.java:43)
	at org.apache.camel.support.service.BaseService.start(BaseService.java:119)
	at org.apache.camel.kafkaconnector.CamelSourceTask.start(CamelSourceTask.java:172)
	... 8 more
Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: fhir://operation/$customOperation?delay=60000&password=xxxxxx due to: No matching method for operation/$customOperation, with arguments []
	at org.apache.camel.impl.engine.AbstractCamelContext.doGetEndpoint(AbstractCamelContext.java:962)
	at org.apache.camel.impl.engine.AbstractCamelContext.getEndpoint(AbstractCamelContext.java:844)
	at org.apache.camel.support.CamelContextHelper.getMandatoryEndpoint(CamelContextHelper.java:58)
	at org.apache.camel.reifier.AbstractReifier.resolveEndpoint(AbstractReifier.java:177)
	at org.apache.camel.reifier.RouteReifier.doCreateRoute(RouteReifier.java:94)
	at org.apache.camel.reifier.RouteReifier.createRoute(RouteReifier.java:74)
	... 20 more
Caused by: java.lang.IllegalArgumentException: No matching method for operation/$customOperation, with arguments []
	at org.apache.camel.support.component.AbstractApiEndpoint.initState(AbstractApiEndpoint.java:172)
	at org.apache.camel.support.component.AbstractApiEndpoint.configureProperties(AbstractApiEndpoint.java:134)
	at org.apache.camel.support.DefaultComponent.setProperties(DefaultComponent.java:425)
	at org.apache.camel.support.component.AbstractApiComponent.createEndpoint(AbstractApiComponent.java:90)
	at org.apache.camel.support.DefaultComponent.createEndpoint(DefaultComponent.java:171)
	at org.apache.camel.impl.engine.AbstractCamelContext.doGetEndpoint(AbstractCamelContext.java:928)
	... 25 more

Thanks in advance for any help or hints

Provide minimal custom converters for some examples

For example, Camel Twitter connector examples get a lengthy tweet message format like this with the default org.apache.kafka.connect.storage.StringConverter.

StatusJSONImpl{createdAt=Thu Jan 21 19:40:53 UTC 2021, id=1352340411239559170, text='We released Apache Camel 3.7.1 (LTS) today, a new patch release with 29 fixes and improvements: https://t.co/osOTD4uY5k #apachecamel', source='TweetDeck', isTruncated=false, inReplyToStatusId=-1, inReplyToUserId=-1, isFavorited=false, isRetweeted=false, favoriteCount=37, inReplyToScreenName='null', geoLocation=null, place=null, retweetCount=18, isPossiblySensitive=false, lang='en', contributorsIDs=[], retweetedStatus=null, userMentionEntities=[], urlEntities=[URLEntityJSONImpl{url='https://t.co/osOTD4uY5k', expandedURL='https://camel.apache.org/blog/2021/01/RELEASE-3.7.1/', displayURL='camel.apache.org/blog/2021/01/R…'}], hashtagEntities=[HashtagEntityJSONImpl{text='apachecamel'}], mediaEntities=[], symbolEntities=[], currentUserRetweetId=-1, user=UserJSONImpl{id=1086624104466341888, name='Apache Camel', email='null', screenName='ApacheCamel', location='', description='Apache Camel™ is a versatile open-source integration framework based on Enterprise Integration Patterns from @theASF', isContributorsEnabled=false, profileImageUrl='http://pbs.twimg.com/profile_images/1090189047367192577/xWt1RFo6_normal.jpg', profileImageUrlHttps='https://pbs.twimg.com/profile_images/1090189047367192577/xWt1RFo6_normal.jpg', isDefaultProfileImage=false, url='https://t.co/cH7LVwWAdJ', isProtected=false, followersCount=1629, status=null, profileBackgroundColor='000000', profileTextColor='000000', profileLinkColor='FAB81E', profileSidebarFillColor='000000', profileSidebarBorderColor='000000', profileUseBackgroundImage=false, isDefaultProfile=false, showAllInlineMedia=false, friendsCount=0, createdAt=Sat Jan 19 13:59:09 UTC 2019, favouritesCount=30, utcOffset=-1, timeZone='null', profileBackgroundImageUrl='http://abs.twimg.com/images/themes/theme1/bg.png', profileBackgroundImageUrlHttps='https://abs.twimg.com/images/themes/theme1/bg.png', profileBackgroundTiled=false, lang='null', statusesCount=326, isGeoEnabled=false, isVerified=false, translator=false, listedCount=28, isFollowRequestSent=false, withheldInCountries=null}, withHeldInCountries=null, quotedStatusId=-1, quotedStatus=null}

It would be great for the examples (if appropriate) to provide a custom converter, so that it simplifies what the example output looks like and also demonstrates how the user can customise messages with CKC. A minimal sketch follows.
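A minimal sketch of such a converter, assuming the value reaches the converter as the raw status string shown above; the package, class name and regex are illustrative, not part of the repository:

// Hypothetical package for illustration only.
package org.apache.camel.kafkaconnector.examples;

import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.storage.Converter;

public class TweetTextConverter implements Converter {

    // Pulls the text='...' field out of the StatusJSONImpl dump.
    private static final Pattern TEXT = Pattern.compile("text='([^']*)'");

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // no configuration needed for this sketch
    }

    @Override
    public byte[] fromConnectData(String topic, Schema schema, Object value) {
        if (value == null) {
            return null;
        }
        // Keep only the tweet text; fall back to the full string if no match.
        Matcher m = TEXT.matcher(value.toString());
        String text = m.find() ? m.group(1) : value.toString();
        return text.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public SchemaAndValue toConnectData(String topic, byte[] value) {
        if (value == null) {
            return SchemaAndValue.NULL;
        }
        return new SchemaAndValue(Schema.OPTIONAL_STRING_SCHEMA,
                new String(value, StandardCharsets.UTF_8));
    }
}

Packaged onto the plugin path, it would then be selected with value.converter=org.apache.camel.kafkaconnector.examples.TweetTextConverter.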

Example how to export from Kafka to S3 and import back from S3 to Kafka both key and value

Hello,

I think you are doing a great job with all the kafka connectors that you are building!

Some time ago I played with the org.apache.camel.kafkaconnector.awss3.CamelAwss3SinkConnector connector and successfully managed to export records from my topic to S3.

I now want to be able to import the records from S3 back to kafka. That is fairly straightforward, but there is one tricky part: I want to keep the keys.

For example if I had a record with

key="key_UUID"
value="value_JSON"

I want to export both the key and the value, and then be able to create the same record with the same key and same value. Is this possible, and if so, can you give me an example configuration for the S3 Sink/Source?

Thanks
Mihail

Failed to run the github-source-events example

Hi team, I ran into an issue when running the github-source-events example. Can you help me with this?

How I run the example

I think I strictly followed github/github-source-events/README.adoc; the only difference is that I use Docker for the local Kafka environment, and the connect-standalone.sh script is executed inside the Kafka container.

The issue I run into

At the last step, when I run the command, it throws an error.
Command: {path}/connect-standalone.sh {path}/connect-standalone.properties {path}/CamelGithubSourceConnector.properties
Error: Failed to find any class that implements Connector and which name matches org.apache.camel.kafkaconnector.github.CamelGithubSourceConnector

Files from untarring github-extended-0.11.0-package.tar.gz

(screenshot omitted)

connect-standalone.properties

(screenshot omitted)

CamelGithubSourceConnector.properties

(screenshot omitted)

Detailed Error

[2021-11-03 10:25:41,496] ERROR Failed to create job for /milkchocolate/kafka-connector-properties/CamelGithubSourceConnector.properties (org.apache.kafka.connect.cli.ConnectStandalone:107)
[2021-11-03 10:25:41,497] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:117)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches org.apache.camel.kafkaconnector.github.CamelGithubSourceConnector, available connectors are: PluginDesc{klass=class org.apache.camel.kafkaconnector.CamelSinkConnector, name='org.apache.camel.kafkaconnector.CamelSinkConnector', version='0.11.0', encodedVersion=0.11.0, type=sink, typeName='sink', location='file:/milkchocolate/github-extended/target/github-extended/camel-kafka-connector-0.11.0.jar'}, PluginDesc{klass=class org.apache.camel.kafkaconnector.CamelSourceConnector, name='org.apache.camel.kafkaconnector.CamelSourceConnector', version='0.11.0', encodedVersion=0.11.0, type=source, typeName='source', location='file:/milkchocolate/github-extended/target/github-extended/camel-kafka-connector-0.11.0.jar'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSinkConnector, name='org.apache.kafka.connect.file.FileStreamSinkConnector', version='3.0.0', encodedVersion=3.0.0, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSourceConnector, name='org.apache.kafka.connect.file.FileStreamSourceConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorCheckpointConnector, name='org.apache.kafka.connect.mirror.MirrorCheckpointConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorHeartbeatConnector, name='org.apache.kafka.connect.mirror.MirrorHeartbeatConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorSourceConnector, name='org.apache.kafka.connect.mirror.MirrorSourceConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockConnector, name='org.apache.kafka.connect.tools.MockConnector', version='3.0.0', encodedVersion=3.0.0, type=connector, typeName='connector', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSinkConnector, name='org.apache.kafka.connect.tools.MockSinkConnector', version='3.0.0', encodedVersion=3.0.0, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSourceConnector, name='org.apache.kafka.connect.tools.MockSourceConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.SchemaSourceConnector, name='org.apache.kafka.connect.tools.SchemaSourceConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSinkConnector, name='org.apache.kafka.connect.tools.VerifiableSinkConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSourceConnector, name='org.apache.kafka.connect.tools.VerifiableSourceConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:115)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:99)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:114)
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches org.apache.camel.kafkaconnector.github.CamelGithubSourceConnector, available connectors are: PluginDesc{klass=class org.apache.camel.kafkaconnector.CamelSinkConnector, name='org.apache.camel.kafkaconnector.CamelSinkConnector', version='0.11.0', encodedVersion=0.11.0, type=sink, typeName='sink', location='file:/milkchocolate/github-extended/target/github-extended/camel-kafka-connector-0.11.0.jar'}, PluginDesc{klass=class org.apache.camel.kafkaconnector.CamelSourceConnector, name='org.apache.camel.kafkaconnector.CamelSourceConnector', version='0.11.0', encodedVersion=0.11.0, type=source, typeName='source', location='file:/milkchocolate/github-extended/target/github-extended/camel-kafka-connector-0.11.0.jar'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSinkConnector, name='org.apache.kafka.connect.file.FileStreamSinkConnector', version='3.0.0', encodedVersion=3.0.0, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSourceConnector, name='org.apache.kafka.connect.file.FileStreamSourceConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorCheckpointConnector, name='org.apache.kafka.connect.mirror.MirrorCheckpointConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorHeartbeatConnector, name='org.apache.kafka.connect.mirror.MirrorHeartbeatConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorSourceConnector, name='org.apache.kafka.connect.mirror.MirrorSourceConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockConnector, name='org.apache.kafka.connect.tools.MockConnector', version='3.0.0', encodedVersion=3.0.0, type=connector, typeName='connector', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSinkConnector, name='org.apache.kafka.connect.tools.MockSinkConnector', version='3.0.0', encodedVersion=3.0.0, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSourceConnector, name='org.apache.kafka.connect.tools.MockSourceConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.SchemaSourceConnector, name='org.apache.kafka.connect.tools.SchemaSourceConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSinkConnector, name='org.apache.kafka.connect.tools.VerifiableSinkConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSourceConnector, name='org.apache.kafka.connect.tools.VerifiableSourceConnector', version='3.0.0', encodedVersion=3.0.0, type=source, typeName='source', location='classpath'}
at org.apache.kafka.connect.runtime.isolation.Plugins.connectorClass(Plugins.java:200)
at org.apache.kafka.connect.runtime.isolation.Plugins.newConnector(Plugins.java:172)
at org.apache.kafka.connect.runtime.AbstractHerder.lambda$getConnector$4(AbstractHerder.java:653)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
at org.apache.kafka.connect.runtime.AbstractHerder.getConnector(AbstractHerder.java:653)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:426)
at org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$2(AbstractHerder.java:362)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)

camel-kafka-connector-examples

Open the SQL configuration file at $EXAMPLES/sql/sql-source/config/CamelSqlSourceConnector.properties

name=CamelSqlSourceConnector
connector.class=org.apache.camel.kafkaconnector.sql.CamelSqlSourceConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter

topics=mytopic

camel.component.sql.dataSource.user=postgres
camel.component.sql.dataSource.password=mysecretpassword
camel.component.sql.dataSource.serverName=172.17.0.2
camel.component.sql.dataSource=#class:org.postgresql.ds.PGSimpleDataSource

camel.source.path.query=select * from accounts

Please tell me how to select the database.

I tried to use

camel.component.sql.dataSource.serverName=172.17.0.2:5432/test

but it failed.
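A likely fix, assuming the camel.component.sql.dataSource.* entries are injected as bean properties on org.postgresql.ds.PGSimpleDataSource (which, like the user/password properties already set above, has serverName, portNumber, and databaseName setters):

camel.component.sql.dataSource.serverName=172.17.0.2
camel.component.sql.dataSource.portNumber=5432
camel.component.sql.dataSource.databaseName=test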
