devshawn / kafka-gitops
🚀 Manage Apache Kafka topics and generate ACLs through a desired state file.
Home Page: https://devshawn.github.io/kafka-gitops
License: Apache License 2.0
Hello @devshawn ,
I have created a state.yaml file from the existing system.
Expectation: I should see zero ("0") updates.
Output json from "plan" command:
"configs": {
"compression.type": "producer",
"message.downconversion.enable": "true",
"min.insync.replicas": "1",
"segment.jitter.ms": "0",
"cleanup.policy": "compact",
"flush.ms": "9223372036854775807",
"segment.bytes": "1073741824",
"retention.ms": "604800000",
"flush.messages": "9223372036854775807",
"message.format.version": "2.6-IV0",
"max.compaction.lag.ms": "9223372036854775807",
"file.delete.delay.ms": "60000",
"max.message.bytes": "1048588",
"min.compaction.lag.ms": "0",
"message.timestamp.type": "CreateTime",
"preallocate": "false",
"min.cleanable.dirty.ratio": "0.5",
"index.interval.bytes": "4096",
"unclean.leader.election.enable": "false",
"retention.bytes": "-1",
"delete.retention.ms": "86400000",
"segment.ms": "604800000",
"message.timestamp.difference.max.ms": "9223372036854775807",
"segment.index.bytes": "10485760"
}
},
"topicConfigPlans": [
{
"key": "compression.type",
"value": "producer",
"action": "ADD"
},
{
"key": "message.format.version",
"value": "2.6-IV0",
"action": "ADD"
},
{
"key": "max.compaction.lag.ms",
"value": "9223372036854775807",
"action": "ADD"
},
{
"key": "file.delete.delay.ms",
"value": "60000",
"action": "ADD"
},
{
"key": "max.message.bytes",
"value": "1048588",
"action": "ADD"
},
{
"key": "min.compaction.lag.ms",
"value": "0",
"action": "ADD"
},
{
"key": "message.timestamp.type",
"value": "CreateTime",
"action": "ADD"
},
{
"key": "message.downconversion.enable",
"value": "true",
"action": "ADD"
},
{
"key": "min.insync.replicas",
"value": "1",
"action": "ADD"
},
{
"key": "segment.jitter.ms",
"value": "0",
"action": "ADD"
},
{
"key": "preallocate",
"value": "false",
"action": "ADD"
},
{
"key": "min.cleanable.dirty.ratio",
"value": "0.5",
"action": "ADD"
},
{
"key": "index.interval.bytes",
"value": "4096",
"action": "ADD"
},
{
"key": "unclean.leader.election.enable",
"value": "false",
"action": "ADD"
},
{
"key": "retention.bytes",
"value": "-1",
"action": "ADD"
},
{
"key": "delete.retention.ms",
"value": "86400000",
"action": "ADD"
},
{
"key": "flush.ms",
"value": "9223372036854775807",
"action": "ADD"
},
{
"key": "segment.bytes",
"value": "1073741824",
"action": "ADD"
},
{
"key": "retention.ms",
"value": "604800000",
"action": "ADD"
},
{
"key": "segment.ms",
"value": "604800000",
"action": "ADD"
},
{
"key": "message.timestamp.difference.max.ms",
"value": "9223372036854775807",
"action": "ADD"
},
{
"key": "flush.messages",
"value": "9223372036854775807",
"action": "ADD"
},
{
"key": "segment.index.bytes",
"value": "10485760",
"action": "ADD"
}
]
}
],
"aclPlans": []
}
Hi there!
This is such a fantastic project, and it's going to be super useful for our use case. I was just wondering whether the standard Docker image has MSK IAM authentication support?
Looking at the AWS documentation, you can see that an extra class and a few extra configuration options are required. Is this currently supported by kafka-gitops? If not, would it be as simple as placing the MSK class on the classpath within the container and setting the required properties?
Required properties:
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
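If classpath extension is all that's needed, a custom image might work. This is only a sketch under assumptions: the aws-msk-iam-auth release jar name and the directory kafka-gitops picks up extra jars from are guesses, not verified against the project.

```dockerfile
# Sketch only (unverified): extend the published image with the MSK IAM jar.
# The jar version and the classpath directory below are assumptions.
FROM devshawn/kafka-gitops
ADD https://github.com/aws/aws-msk-iam-auth/releases/download/v1.1.1/aws-msk-iam-auth-1.1.1-all.jar /usr/share/java/
```

The four properties above would then be supplied through the tool's usual configuration mechanism, e.g. the KAFKA_-prefixed environment variables shown elsewhere in these issues.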
Thanks in advance!
Hi,
It's a great tool.
It would be nice to have the possibility to manage only Kafka topics, without ACLs.
When I tried to run plan against a state file containing only topic configuration, I got this error:
[ERROR] Error thrown when attempting to list Kafka ACLs:
org.apache.kafka.common.errors.SecurityDisabledException: No Authorizer is configured on the broker
[ERROR] An error has occurred during the planning process. No plan was created.
For some topics, we are facing an idempotency issue. The topic is created on the first run, but the second run fails with the error below.
I am not able to reproduce this issue on other Kafka clusters.
./kafka-gitops -v -f new.yml --no-delete apply
Executing apply...
03:39:06.846 [main] INFO com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader - Kafka Config: {bootstrap.servers=localhost:9092, client.id=kafka-gitops}
03:39:06.853 [main] INFO com.devshawn.kafka.gitops.service.ParserService - Parsing desired state file...
03:39:07.606 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic test-OWN-0 does not exist; it will be created.
Applying: [CREATE]
[ERROR] Error thrown when attempting to create a Kafka topic:
org.apache.kafka.common.errors.TopicExistsException: Topic 'test-OWN-0' already exists.
[ERROR] An error has occurred during the apply process.
[ERROR] The apply process has stopped in place. There is no rollback.
[ERROR] Fix the error, re-create a plan, and apply the new plan to continue.
Environment: Confluent Cloud
I was trying to create a state YAML that would not delete the existing configuration:
-state.yaml-
settings:
ccloud:
enabled: true
topics:
defaults:
replication: 1
files:
topics: topics.yaml
services: services.yaml
customServiceAcls:
app-test:
r-t-thg:
name: temp.test.hello-kafka
type: TOPIC
pattern: LITERAL
host: "*"
operation: READ
permission: ALLOW
-topics.yaml-
topics:
temp.test.hello-kafka:
partitions: 3
-services.yaml-
services:
app-test:
type: application
After passing validation, executing plan always shows it removing the existing ACL/permission. I assumed the service account should exist under services: and that the key under customServiceAcls: should be the service account name, as shown above; it's still not being picked up. I also tried defining a principal on the ACL entry to match the User:{id} of app-test, but the result is the same.
Please help me with an example of importing an existing setup.
Hi, I tried running the Docker image (through docker-compose) and received a NullPointerException with no explanation. I don't know whether this counts as a bug or just a missing error message. If I just fill in the name of the topic and rely on defaults for everything else, I get a NullPointerException.
Command:
$ docker-compose -f kafka-gitops.yml up
Recreating kafka-gitops ... done
Attaching to kafka-gitops
kafka-gitops | 14:32:51.731 [main] INFO com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader - Kafka Config: {bootstrap.servers=localhost:9092, client.id=kafka-gitops}
kafka-gitops | 14:32:51.735 [main] INFO com.devshawn.kafka.gitops.service.ParserService - Parsing desired state file...
kafka-gitops | java.lang.NullPointerException
kafka-gitops | at com.devshawn.kafka.gitops.service.ParserService.parseStateFile(ParserService.java:84)
kafka-gitops | at com.devshawn.kafka.gitops.service.ParserService.parseStateFile(ParserService.java:42)
kafka-gitops | at com.devshawn.kafka.gitops.StateManager.getAndValidateStateFile(StateManager.java:72)
kafka-gitops | at com.devshawn.kafka.gitops.cli.ValidateCommand.call(ValidateCommand.java:26)
kafka-gitops | at com.devshawn.kafka.gitops.cli.ValidateCommand.call(ValidateCommand.java:15)
kafka-gitops | at picocli.CommandLine.executeUserObject(CommandLine.java:1783)
kafka-gitops | at picocli.CommandLine.access$900(CommandLine.java:145)
kafka-gitops | at picocli.CommandLine$RunLast.handle(CommandLine.java:2141)
kafka-gitops | at picocli.CommandLine$RunLast.handle(CommandLine.java:2108)
kafka-gitops | at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1975)
kafka-gitops | at picocli.CommandLine.execute(CommandLine.java:1904)
kafka-gitops | at com.devshawn.kafka.gitops.MainCommand.main(MainCommand.java:69)
kafka-gitops exited with code 1
docker-compose:
version: '3'
services:
kafka-gitops:
image: devshawn/kafka-gitops
container_name: kafka-gitops
environment:
- KAFKA_BOOTSTRAP_SERVERS=localhost:9092
entrypoint: kafka-gitops -v -f /state-test.yml validate
volumes:
- ./state-test.yml:/state-test.yml
my desired state:
topics:
test:
settings:
topics:
defaults:
partitions: 1
replication: 1
configs:
cleanup.policy: delete
retention.ms: -1
blacklist:
prefixed:
- _confluent
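A possible workaround while the NPE is unexplained — purely a guess that the parser trips over the empty topic mapping — is to give the topic an explicit body rather than a bare key:

```yaml
# Sketch: declare at least one field per topic instead of a bare "test:" key,
# on the assumption that the NPE comes from the null topic body.
topics:
  test:
    partitions: 1
    replication: 1
```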
Currently I am planning on using this tool in an environment where authentication with ccloud is done through SSO. Our subscription is activated using Azure AD as our identity source. As such, login with XX_CCLOUD_EMAIL and XX_CCLOUD_PASSWORD is not possible. It is worth noting that I am currently using ccloud clusters that have internet broker endpoints.
To try to circumvent this, I went through the full sign-in process using ccloud login --no-browser --save. This validates my flow and eventually results in a stored, valid login. From here I can use ccloud to create topics and service accounts.
After setting the appropriate variables in my bash session, I noticed that account creation fails, suggesting that I am not logged into ccloud. Logs below:
kafka-gitops --verbose -f sit-state.yaml account
Creating service accounts...
08:54:28.915 [main] INFO com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader - Kafka Config: {zookeeper.connect=, bootstrap.servers=<REDACTED>.azure.confluent.cloud:9092, advertised.listeners=, client.id=kafka-gitops}
08:54:29.011 [main] INFO com.devshawn.kafka.gitops.service.ConfluentCloudService - Using ccloud executable at: ccloud
08:54:29.013 [main] INFO com.devshawn.kafka.gitops.service.ParserService - Parsing desired state file...
08:54:32.615 [main] INFO com.devshawn.kafka.gitops.service.ConfluentCloudService - Fetching service account list from Confluent Cloud via ccloud tool.
08:54:35.114 [main] INFO com.devshawn.kafka.gitops.service.ConfluentCloudService - No content to map due to end-of-input
at [Source: (String)""; line: 1, column: 0]
[ERROR] There was an error listing Confluent Cloud service accounts. Are you logged in?
I also validated that through ccloud I am able to list the service accounts; the list is currently empty for this new cluster.
ccloud service-account list
Id | Name | Description
+----+------+-------------+
This behavior is weird, and I'm not sure whether the reason is the constant changes in Confluent Cloud or something else.
We created topics via the web UI/CLI in the past and now intend to manage them with this tool; the topics were created with defaults and no specific configs. So I intend to add those topics to this tool. Here is a sample:
settings:
ccloud:
enabled: true
topics:
defaults:
# seems to be required here (it gives an error if removed), but it's not required/shown by the Confluent Cloud CLI
replication: 1
blacklist:
prefixed:
- _confluent
topics:
temp.test.hello-kafka:
partitions: 3
# omitting some topic names from this issue
The topic above was created, I think, with the web UI; some others were created with the CLI two days ago.
The topics created with the CLI two days ago show no change in the plan, but the one above shows config changes:
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka exists, it will not be created.
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] cleanup.policy
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] max.compaction.lag.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] max.message.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] min.compaction.lag.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] message.timestamp.type
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] min.insync.replicas
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] segment.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] retention.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] segment.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] message.timestamp.difference.max.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] retention.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] delete.retention.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka exists, it will not be created.
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] cleanup.policy
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] max.compaction.lag.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] max.message.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] min.compaction.lag.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] message.timestamp.type
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] min.insync.replicas
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] segment.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] retention.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] segment.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] message.timestamp.difference.max.ms
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] retention.bytes
14:05:11.117 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] delete.retention.ms
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update
The following actions will be performed:
~ [TOPIC] temp.test.hello-kafka
~ configs:
- cleanup.policy
- max.compaction.lag.ms
- max.message.bytes
- min.compaction.lag.ms
- message.timestamp.type
- min.insync.replicas
- segment.bytes
- retention.ms
- segment.ms
- message.timestamp.difference.max.ms
- retention.bytes
- delete.retention.ms
~ [TOPIC] temp.stage.hello-kafka
~ configs:
- cleanup.policy
- max.compaction.lag.ms
- max.message.bytes
- min.compaction.lag.ms
- message.timestamp.type
- min.insync.replicas
- segment.bytes
- retention.ms
- segment.ms
- message.timestamp.difference.max.ms
- retention.bytes
- delete.retention.ms
When I execute apply, I get these errors:
14:08:58.853 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka exists, it will not be created.
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] cleanup.policy
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] max.compaction.lag.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] max.message.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] min.compaction.lag.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] message.timestamp.type
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] min.insync.replicas
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] segment.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] retention.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] segment.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] message.timestamp.difference.max.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] retention.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.test.hello-kafka | [REMOVE] delete.retention.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka exists, it will not be created.
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] cleanup.policy
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] max.compaction.lag.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] max.message.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] min.compaction.lag.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] message.timestamp.type
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] min.insync.replicas
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] segment.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] retention.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] segment.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] message.timestamp.difference.max.ms
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] retention.bytes
14:08:58.854 [main] INFO com.devshawn.kafka.gitops.manager.PlanManager - [PLAN] Topic temp.stage.hello-kafka | [REMOVE] delete.retention.ms
Applying: [UPDATE]
~ [TOPIC] temp.test.hello-kafka
~ configs:
- max.compaction.lag.ms
- max.message.bytes
- min.compaction.lag.ms
- message.timestamp.type
- min.insync.replicas
- segment.bytes
- retention.ms
- segment.ms
- message.timestamp.difference.max.ms
- retention.bytes
- delete.retention.ms
[ERROR] Error thrown when attempting to update a Kafka topic config:
org.apache.kafka.common.errors.InvalidRequestException: Invalid config value for resource ConfigResource(type=TOPIC, name='redacted_ temp.test.hello-kafka'): null
[ERROR] An error has occurred during the apply process.
[ERROR] The apply process has stopped in place. There is no rollback.
[ERROR] Fix the error, re-create a plan, and apply the new plan to continue.
I have no choice but to fill in these values from the cloud.
It would be nice to have:
Currently, group.ids are set based on the service name. We'd like to allow developers to set custom group IDs if needed.
Hi,
I'm considering using this on our cluster, which uses mTLS certificates for authentication. Is it possible to pass a truststore and keystore when running the kafka-gitops CLI?
Thanks,
I have a Kafka cluster with more than 100 topics. State appears to be fetched from Kafka serially, so a plan takes 3+ minutes.
We run a multi tenant system. In general environments are the same except for the name of the tenant. To keep hosting overhead down we'd like to have all our tenants share one Kafka cluster. Where we keep the tenants apart using ACLs.
To do this now with kafka-gitops, we would need a lot of repetition in our state file, and thus a lot of messy copy-paste-adapting every time we need to connect a new tenant to our system, or when we want to introduce a new application service for all our tenants.
In Terraform you can have variables and such, and generate resources for each item in a list. We've set it up so that we can just add the Tenant to a list and from there on Terraform creates all the resources needed for that tenant.
Are there any plans to add this kind of templating to Kafka-Gitops as well?
Maybe even just a way to share a custom ACL between multiple services or users without having to repeat the whole ACL for each and every one. That would already make the state file a whole lot more concise for us.
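Until templating exists in kafka-gitops itself, a small script can generate the state file from a tenant list, much like the Terraform loop described above. This is only a sketch; the tenant names and per-tenant topic layout are made up for illustration.

```python
# Sketch: render a kafka-gitops "topics:" section from a tenant list.
# Tenant names and the per-tenant topic layout here are illustrative only.
TENANTS = ["acme", "globex"]
TOPIC_DEFAULTS = {"partitions": 3, "replication": 3}

def render_state(tenants):
    """Emit a YAML 'topics:' block with one events topic per tenant."""
    lines = ["topics:"]
    for tenant in tenants:
        lines.append(f"  {tenant}.events:")
        for key, value in TOPIC_DEFAULTS.items():
            lines.append(f"    {key}: {value}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # Write the rendered section to stdout; redirect into topics.yaml in CI.
    print(render_state(TENANTS))
```

Adding a tenant then becomes a one-line change to the list, and the generated file is committed (or piped) as the desired state.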
I'm defining permissions for the schema registry, but after applying, none of the permissions listed under customUserAcls are present in the cluster according to kafka-acls.sh --list.
This is the customUserAcls section of my state.yml file:
customUserAcls:
everyone:
read-heartbeat:
name: heartbeat
principal: User:*
type: TOPIC
pattern: LITERAL
host: "*"
operation: READ
permission: ALLOW
describe-heartbeat:
name: heartbeat
principal: User:*
type: TOPIC
pattern: LITERAL
host: "*"
operation: DESCRIBE
permission: ALLOW
# https://docs.confluent.io/current/schema-registry/security/index.html#authorizing-access-to-the-schemas-topic
schema-registry:
describe-cluster:
name: kafka-cluster
principal: User:schema-registry
type: CLUSTER
pattern: LITERAL
host: "*"
operation: DESCRIBE
permission: ALLOW
create-cluster:
name: kafka-cluster
principal: User:schema-registry
type: CLUSTER
pattern: LITERAL
host: "*"
operation: CREATE
permission: ALLOW
describe-schemas:
name: _schemas
principal: User:schema-registry
type: TOPIC
pattern: LITERAL
host: "*"
operation: DESCRIBE
permission: ALLOW
describeconfigs-schemas:
name: _schemas
principal: User:schema-registry
type: TOPIC
pattern: LITERAL
host: "*"
operation: DESCRIBE_CONFIGS
permission: ALLOW
describe-consumer-offsets:
name: __consumer_offsets
principal: User:schema-registry
type: TOPIC
pattern: LITERAL
host: "*"
operation: DESCRIBE
permission: ALLOW
Any idea what's going on?
Hi,
Right now there is nothing like this for schema management. It might be useful to also allow declaring which topics/subjects use which schemas:
config:
#Where the schemas reside
schemaDir: /tmp/output_dir/
schema:
registry:
url: http://localhost:8081
# username: test
# password: test
schemas:
- relativeLocation: Personnel.json
# This is equivalent to Confluent schema references: https://docs.confluent.io/platform/current/schema-registry/develop/api.html#post--subjects-(string-%20subject)-versions
references:
- name: com.person.json
subject: person
version: -1
# If left blank, it will auto submit to personnel-value
subjects:
- personnel-raw-value
- personnel-refined-value
- personnel-x-value
This would allow the topic state, and schema state to be managed by one tool/file.
java -jar kafka-schema-gitops-1.0-SNAPSHOT-jar-with-dependencies.jar -i <input_yaml> validate # or execute
I feel the documentation is a little light around the usage of kafka-gitops.
Can someone help answer the following questions?
Is it possible to increase the number of partitions on an existing topic? I tried it and it detected no changes.
Is it possible to decrease the number of partitions safely? what would happen if the topic contained records?
Is it possible to rename a topic? What is the default behaviour here? Seems like a dangerous operation since it deletes the old topic and creates the new one.
I created a new release to apply to a new environment after a previous release had been successfully applied to the dev environment. But the deploy/apply to dev (which had no changes) failed with:
2020-08-13T22:37:28.9431787Z Executing apply...
2020-08-13T22:37:28.9436938Z
2020-08-13T22:37:30.2278932Z com.devshawn.kafka.gitops.exception.PlanIsUpToDateException: The current desired state file matches the actual state of the cluster.
2020-08-13T22:37:30.2285551Z at com.devshawn.kafka.gitops.manager.PlanManager.validatePlanHasChanges(PlanManager.java:179)
2020-08-13T22:37:30.2287816Z at com.devshawn.kafka.gitops.StateManager.apply(StateManager.java:93)
2020-08-13T22:37:30.2291777Z at com.devshawn.kafka.gitops.cli.ApplyCommand.call(ApplyCommand.java:35)
2020-08-13T22:37:30.2292939Z at com.devshawn.kafka.gitops.cli.ApplyCommand.call(ApplyCommand.java:19)
2020-08-13T22:37:30.2294005Z at picocli.CommandLine.executeUserObject(CommandLine.java:1783)
2020-08-13T22:37:30.2295589Z at picocli.CommandLine.access$900(CommandLine.java:145)
2020-08-13T22:37:30.2296457Z at picocli.CommandLine$RunLast.handle(CommandLine.java:2141)
2020-08-13T22:37:30.2297266Z at picocli.CommandLine$RunLast.handle(CommandLine.java:2108)
2020-08-13T22:37:30.2298094Z at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1975)
2020-08-13T22:37:30.2298922Z at picocli.CommandLine.execute(CommandLine.java:1904)
2020-08-13T22:37:30.2299804Z at com.devshawn.kafka.gitops.MainCommand.main(MainCommand.java:69)
2020-08-13T22:37:30.6357738Z
2020-08-13T22:37:30.6415279Z ##[error]Bash exited with code '1'.
Given that this tool is intended to be used with automation to apply desired state configuration, if no changes are required to make the current state match the desired state that should be represented as a success to the controlling automation framework.
I can imagine why one might want to fail if there were no changes and you expected there to be changes, but imo that should be an optional flag, not the default.
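Until the default changes, CI can work around this by inspecting the failure. The sketch below treats the "plan is up to date" exception as success; the command is parameterized so the logic is testable, but in CI it would be the real kafka-gitops invocation (an assumption, not an official recipe).

```python
# Sketch: CI wrapper that treats kafka-gitops' "plan is up to date" failure
# as success, based on the exception name seen in the logs above.
import subprocess
import sys

UP_TO_DATE_MARKER = "PlanIsUpToDateException"

def run_apply(cmd):
    """Run the apply command; return 0 on success or when merely up to date."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    output = result.stdout + result.stderr
    if result.returncode != 0 and UP_TO_DATE_MARKER in output:
        # No changes were needed; report success to the CI framework.
        return 0
    return result.returncode

# In CI, the real call would be something like:
#   sys.exit(run_apply(["kafka-gitops", "-f", "state.yaml", "apply"]))
```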
The replication factor of topics within a cluster is often static (e.g. almost all are set to 3).
We should support setting a default and making the replication field optional.
Provide the ability to configure confluent.placement.constraints instead of just the number of replicas, as described here:
https://docs.confluent.io/current/multi-dc-deployments/multi-region.html
The main use case for us is to be able to set up observers in a different data centre for disaster recovery purposes.
Hi,
We are using a self-hosted Kafka cluster with GSSAPI (Kerberos) authentication.
Can I add this feature and submit a PR?
Given the following statefile:
#Orders
topics:
orders:
partitions: 5
replication: 3
Desired state:
When I update partitions to 6 I expect the tool to increase the number of partitions.
Actual State:
Executing apply...
[SUCCESS] There are no necessary changes; the actual state matches the desired state.
Kafka Version - 2.1.1-cp1
Kafka-gitops version - 0.2.7
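Since kafka-gitops 0.2.7 does not appear to detect partition-count changes, one out-of-band workaround is the standard Kafka CLI. This assumes a reasonably recent CLI distribution (older kafka-topics.sh versions used --zookeeper rather than --bootstrap-server):

```shell
# Workaround sketch: increase partitions with the standard Kafka CLI,
# then keep the new count in the kafka-gitops state file.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --alter --topic orders --partitions 6
```

Note that Kafka can only increase a topic's partition count, never decrease it.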
As far as I understand, users and ACLs are in two separate sections:
users:
my-test-user:
principal: User:my-test-user
customUserAcls:
my-test-user:
read-all-kafka:
name: kafka.
type: TOPIC
pattern: PREFIXED
host: "*"
operation: READ
permission: ALLOW
Why are they separate? What about:
users:
my-test-user:
principal: User:my-test-user
acls:
read-all-kafka:
name: kafka.
type: TOPIC
pattern: PREFIXED
host: "*"
operation: READ
permission: ALLOW
Or to be able to share ACLs groups among several users (some kind of RBAC):
users:
my-test-user:
principal: User:my-test-user
roles:
- my-test-role
my-other-user:
principal: User:my-other-user
roles:
- my-test-role
customRoles:
my-test-role:
read-all-kafka:
name: kafka.
type: TOPIC
pattern: PREFIXED
host: "*"
operation: READ
permission: ALLOW
In my state.yaml file, I have the following:
settings:
files:
topics: topics.yaml
services: services.yaml
users: users.yaml
topics:
blacklist:
prefixed:
- __amazon
I have the topics.yaml, services.yaml, and users.yaml files in the same location as the state.yaml file. When I run the validate and plan commands, kafka-gitops ignores the users.yaml file and attempts to delete all my ACLs. If I put the users: block directly in the state.yaml file below settings:, it recognizes all the ACLs. Am I doing something wrong with the users.yaml file?
I'm using the latest kafka-gitops binary. Below is the content of my users.yaml file:
users:
scramadmin:
principal: User:scramadmin
customUserAcls:
scramadmin:
admin-topic:
name: ""
type: TOPIC
pattern: LITERAL
host: ""
operation: ALL
permission: ALLOW
thanks,
Currently, kafka connect topics follow a specific format for ACL creation. We'd like the ability to specify custom naming strategies.
I am trying to set up kafka-gitops for my project. We have four Kafka brokers. Once I set the environment variable export KAFKA_BOOTSTRAP_SERVERS=xxxx.xxxx.xxx.xxx.example.org:9092 on my jump box and then execute plan, I get the error below.
Generating execution plan...
11:03:51.023 [main] INFO com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader - Kafka Config: {bootstrap.servers=xxxx.xxxx.xxx.xxx.example.org:9092, client.id=kafka-gitops}
11:03:51.027 [main] INFO com.devshawn.kafka.gitops.service.ConfluentCloudService - Using ccloud executable at: ccloud
11:03:51.028 [main] INFO com.devshawn.kafka.gitops.service.ParserService - Parsing desired state file...
[ERROR] Error thrown when attempting to list Kafka topics:
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[ERROR] An error has occurred during the planning process. No plan was created.
Hi,
We're using Kafka as a shared cluster, and as such we have multiple projects with multiple teams working in parallel. We are planning on having one repo per project/system, each with its own state and plans, but during our initial testing we realised that one project wipes the config of another project, since we don't have an overall state file.
Is that the intended behaviour?
Is there a way to prevent that?
Hey, I want to use this, but I need to update kafka-clients to 2.8.0 in the build path.
How can I rebuild the jar using Gradle? Or could we get a new release? :)
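A local rebuild would look roughly like the following, assuming the repository's standard Gradle wrapper (the task name and output path are not verified against this repo):

```shell
# Sketch: bump the kafka-clients version in the Gradle build file, then rebuild.
git clone https://github.com/devshawn/kafka-gitops.git
cd kafka-gitops
# edit build.gradle: set the kafka-clients dependency version to 2.8.0
./gradlew build        # built artifacts should land under build/libs/
```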
This is a lot like #51 , but for a different segment of configuration management.
I'm in a situation where it would be nice to only manage ACLs and not topics. We have a handful of tools that generate topics as needed (ksqldb, flink, some in-house applications), and we're in the middle of setting up ACLs for some of our other topics.
Problem is, any time we leave these configurations out of our state file, kafka-gitops tries to delete our topics.
I saw that #54 got merged, and I wrote #80, which basically copies that approach for --skip-topics. I'm wondering if I should instead investigate automatically skipping plan segments when their keys are left out of the state file.
For instance, the following will delete all topics:
topics: []
services:
example-service:
type: application
produces:
- example-topic
But this one will leave any existing topics alone:
services:
example-service:
type: application
produces:
- example-topic
So, given that we have existing clusters, is it even possible to use the tool just to create topics?
After setting KAFKA_SASL_JAAS_USERNAME, KAFKA_SASL_JAAS_PASSWORD, and KAFKA_BOOTSTRAP_SERVERS, running kafka-gitops validate as well as kafka-gitops plan results in:
java.lang.NullPointerException
at com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader.handleAuthentication(KafkaGitopsConfigLoader.java:62)
at com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader.setConfig(KafkaGitopsConfigLoader.java:41)
at com.devshawn.kafka.gitops.config.KafkaGitopsConfigLoader.load(KafkaGitopsConfigLoader.java:18)
at com.devshawn.kafka.gitops.StateManager.<init>(StateManager.java:63)
at com.devshawn.kafka.gitops.cli.PlanCommand.call(PlanCommand.java:37)
at com.devshawn.kafka.gitops.cli.PlanCommand.call(PlanCommand.java:19)
at picocli.CommandLine.executeUserObject(CommandLine.java:1783)
at picocli.CommandLine.access$900(CommandLine.java:145)
at picocli.CommandLine$RunLast.handle(CommandLine.java:2141)
at picocli.CommandLine$RunLast.handle(CommandLine.java:2108)
at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1975)
at picocli.CommandLine.execute(CommandLine.java:1904)
at com.devshawn.kafka.gitops.MainCommand.main(MainCommand.java:76)
I tried it with both 0.2.14 and 0.2.15.
First of all, thank you for creating this wonderful piece of kit! It is ridiculously useful.
We're using SASL-SCRAM login on our Kafka, but looking at the code this issue would be applicable to all JAAS usernames and passwords.
Our passwords are randomly generated and occasionally contain double quote characters ("). The config for the gitops user looks somewhat like this:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="my-gitops-user" password="XXXXXXXXX"YYYYY";
Then running 'kafka-gitops plan' gives this error:
[ERROR] Error thrown when creating Kafka admin client:
Value not specified for key 'YYYYY' in JAAS config
I had a look at the code. And it looks easy enough to fix. I'll submit a PR for this later tonight if that's ok with you.
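The fix could be as simple as escaping backslashes and double quotes before interpolating credentials into the sasl.jaas.config string (the standard JAAS parser accepts backslash-escaped quotes inside quoted values). A minimal sketch — the class and method names here are hypothetical, not kafka-gitops code:

```java
public class JaasEscape {

    // Hypothetical helper: escape backslashes and double quotes so the value
    // can be safely embedded in a quoted JAAS config string.
    static String escape(String value) {
        return value.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    // Build a SCRAM JAAS config line from raw credentials.
    static String jaasConfig(String username, String password) {
        return "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"" + escape(username) + "\" "
                + "password=\"" + escape(password) + "\";";
    }

    public static void main(String[] args) {
        // A password containing a double quote no longer breaks the config string.
        System.out.println(jaasConfig("my-gitops-user", "XXXXXXXXX\"YYYYY"));
    }
}
```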
As many users already have a cluster with topics/ACLs defined, we should provide better documentation for using kafka-gitops against an existing Kafka cluster. This includes improving the documentation around the --no-delete flag and the topic blacklist setting.
Use case: running the tool in plan mode (with a minimal state file) against an existing Kafka cluster with some ACLs/topics already configured currently shows all the ACLs/topics that would be removed. While it is already possible to copy/paste this output and, with a little find/replace and other basic edits, arrive at a new state file that would keep these resources, it would be nice if there were an output or import mode to generate an initial state.yaml from the current ACL configuration and topic setup.
I can imagine that there are other potential users out there, with a running Kafka setup shaped by historical, manual changes, who would like to switch to a GitOps approach. In my case it was a rather fresh cluster, so replicating the current state manually into a state.yaml file was rather easy.
I think that the "output" proposal would be a rather fast fix?
Reading Kafka connection config from environment variables is alright when using Docker.
But when running kafka-gitops without Docker (makes me sad), setting environment variables may reveal passwords to anyone able to read /proc/<pid>/environ.
Being able to provide an admin.properties file containing Admin client settings would be very useful.
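A sketch of what such a file could contain, using standard Kafka Admin client property names (the broker addresses and credentials below are placeholders; whether kafka-gitops would pass every property through is an assumption):

```properties
# admin.properties — standard Kafka Admin client settings (sketch)
bootstrap.servers=broker-1:9092,broker-2:9092
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="my-gitops-user" \
  password="my-secret";
```

The tool could then read this file via a flag or a path setting instead of pulling secrets from the process environment.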
Hi,
Having a Kafka cluster already in place is there a way to generate the current state file of the cluster?
Thanks.
Dear all,
@devshawn, I would just like some information about the project status, as we have not heard from you for nearly two months now.
I think that you have done a really good job, and kafka-gitops seems to be interesting to many people, including me. I personally integrated it into my company's processes to manage our Kafka clusters, and I would really like to see some features integrated, like schema management, etc. Of course, I will collaborate as I already did for #52.
Wouldn't it be a good time to open the maintainership to more than one person? I can perfectly understand that it is really time-consuming to maintain this kind of project alone, but if a small group could be built, I'm pretty sure everything would be easier.
What do you think?
Best,
Jerome
Given that Confluent is planning to sunset ccloud and has merged its functionality into the latest version of the confluent CLI (https://docs.confluent.io/ccloud-cli/current/migrate.html), are there plans to migrate this tool to leverage the confluent CLI behind the scenes instead?
Hi!
Is there any way to set up kafka-gitops to work against a Kafka cluster inside an AWS EKS cluster?
Right now I am creating a Docker image with the Kafka broker + kafka-gitops and running it on EKS, but that means that every time I want to execute a kafka-gitops operation I need to connect inside the EKS cluster.
Is there any option to run kafka-gitops from outside the EKS cluster?
Thanks so much!
Hey,
It'd be nice to be able to put customServiceAcls into a services.yaml file to keep things together.
Example:
settings:
  ccloud:
    enabled: true
  files:
    topics: topics.yaml
    services: services.yaml
Placing the customServiceAcls block into the services.yaml file doesn't work: the ACLs are not read and need to be placed into the root state.yaml, which makes things a bit messy and unorganized. I'd like to be able to place the custom ACLs related to my services where those services are defined.
Thanks
Hi,
I'm currently looking at locking down the cluster as much as possible, and as such I was wondering what the minimal permissions for the kafka-gitops user would be.
Could we have an example of that in the docs?
Hi there,
I've added some topic configs to my state file, and when running a plan, it's trying to add the configs that already exist?
Context:
The topic is pre-existing and was created using the ccloud CLI tool. Some configs were passed at the time of topic creation using --config. I'm looking to 'import' the topic into the state file.
Examples:
State.yaml
topics:
  test_partitions:
    partitions: 128
    replication: 3
    configs:
      cleanup.policy: delete
      retention.ms: 604800000
ccloud kafka topic describe (removed a bunch of output)
Configuration
Name | Value
+-----------------------------------------+---------------------+
cleanup.policy | delete
compression.type | producer
retention.ms | 604800000
...
Plan:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update
The following actions will be performed:
Topics: 0 to create, 1 to update, 0 to delete.
~ [TOPIC] test_partitions
~ configs:
+ cleanup.policy: delete
+ retention.ms: 604800000
ACLs: 0 to create, 0 to update, 0 to delete.
Plan: 0 to create, 1 to update, 0 to delete.
My assumption (expected outcome) was that the plan would say "There are no necessary changes; the actual state matches the desired state." and not need to modify anything.
If I remove the additional configs from the state file, it comes back as no necessary changes. Since we created these topics with specific configs, I'd like to have the state file show those configs.
Thanks
I just installed kafka-gitops via brew, and I received the following output:
Warning: Calling bottle :unneeded is deprecated! There is no replacement.
Please report this issue to the devshawn/kafka-gitops tap (not Homebrew/brew or Homebrew/core):
/usr/local/Homebrew/Library/Taps/devshawn/homebrew-kafka-gitops/kafka-gitops.rb:8
The installation was successful, but the warning was printed to the console 3 times.
I'm using kafka-gitops for managing access across multiple clusters in different environments, and finding that it would be nice to have a way to deploy some state to certain environments and not others. For example, my testing/user-dev cluster might have topics for in-development projects or extra topics for testing different application configurations that I would never want to reach prod, but I still want to manage via desired-state configuration.
Some ideas:
Some dissenting thoughts:
What do you think?
Keys in customUserAcls are silently ignored when there is no equivalent user defined. Maybe the validate step could check for that.
Hello,
In my state.yaml file, I have this block of sample code:
customUserAcls:
  my-test-user:
    read-all-kafka:
      name: data-
      type: TOPIC
      pattern: PREFIXED
      host: "*"
      operation: READ
      permission: ALLOW
In the output of the 'plan' command, the ACL doesn't show up as being added. Additionally, should I use 'read-all-kafka' if the user only needs to read from one topic?
thanks,
Are there any plans to support PREFIXED consumer groups (service type: application)?
For now I have implemented this in my fork, though since I'm not familiar with the code base and only need this for the application service type, I didn't touch files related to other service types.
I have generated a current state.yaml for my existing cluster.
I changed the permission of a couple of users to DENY, so I was expecting the plan command to show only those updates. Instead, it includes those records in the plan JSON and marks everything else for REMOVE.
Kindly advise at the earliest.
This is currently preventing me from deploying onto our server so that we can include this in our process permanently.
Thanks,
Jay.
When a service account needs several entries for different operations on the same topic/group/resource, there is repetition. Could it be possible to specify multiple operations in a single entry?
customServiceAcls:
  dev_mgmt-test:
    d-cg-thg:
      name: test.hello-group
      type: GROUP
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE,DESCRIBE_CONFIGS,READ
      permission: ALLOW
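For comparison, a sketch of how the same ACLs have to be written today, one entry per operation (the entry key names here are made up for illustration):

```yaml
customServiceAcls:
  dev_mgmt-test:
    d-cg-thg-describe:
      name: test.hello-group
      type: GROUP
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE
      permission: ALLOW
    d-cg-thg-describe-configs:
      name: test.hello-group
      type: GROUP
      pattern: LITERAL
      host: "*"
      operation: DESCRIBE_CONFIGS
      permission: ALLOW
    d-cg-thg-read:
      name: test.hello-group
      type: GROUP
      pattern: LITERAL
      host: "*"
      operation: READ
      permission: ALLOW
```

A comma-separated operation field (or a YAML list) would collapse those three blocks into one.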
Hi!
While using apply, I found that I misspelled the resource_pattern field as PREFIX when it should have been PREFIXED, and got this error:
java.lang.IllegalArgumentException: No enum constant org.apache.kafka.common.resource.PatternType.PREFIX
...
I did run the validate command beforehand, but it did not catch this error. I think validate should check enum fields to make sure they are valid before declaring that the state file is ready.
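A validate step could attempt the enum conversion up front and report a friendly error rather than letting apply fail later. A minimal sketch of the idea — the enum below is a local mirror of org.apache.kafka.common.resource.PatternType's constants so the example is self-contained, and the method name is hypothetical:

```java
public class EnumCheck {

    // Local mirror of org.apache.kafka.common.resource.PatternType values,
    // defined here only so this sketch compiles standalone.
    enum PatternType { UNKNOWN, ANY, MATCH, LITERAL, PREFIXED }

    // Return null when the raw string is a valid pattern type, otherwise a
    // human-readable error that validate could print before declaring success.
    static String validatePattern(String raw) {
        try {
            PatternType.valueOf(raw.toUpperCase());
            return null;
        } catch (IllegalArgumentException e) {
            return "Invalid pattern type '" + raw + "'; expected one of LITERAL, PREFIXED, ...";
        }
    }

    public static void main(String[] args) {
        System.out.println(validatePattern("PREFIXED")); // valid: prints null
        System.out.println(validatePattern("PREFIX"));   // invalid: prints the error message
    }
}
```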