terraform-aws-msk-apache-kafka-cluster's Introduction

Terraform module to provision Amazon Managed Streaming for Apache Kafka

Note: this module is intended for use with an existing VPC. To create a new VPC, use terraform-aws-vpc module.

NOTE: Release 0.8.0 contains breaking changes that will result in the destruction of your existing MSK cluster. To preserve the original cluster, follow the instructions in the 0.7.x to 0.8.x+ migration path.

Tip

👽 Use Atmos with Terraform

Cloud Posse uses atmos to easily orchestrate multiple environments using Terraform.
It works with GitHub Actions, Atlantis, or Spacelift.

Watch demo of using Atmos with Terraform
Example of running atmos to manage infrastructure from our Quick Start tutorial.

Usage

Here's how to invoke this example module in your projects

module "kafka" {
  source = "cloudposse/msk-apache-kafka-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  kafka_version        = "3.3.2"
  namespace            = "eg"
  stage                = "prod"
  name                 = "app"
  vpc_id               = "vpc-XXXXXXXX"
  subnet_ids           = ["subnet-XXXXXXXXX", "subnet-YYYYYYYY"]
  broker_per_zone      = 2
  broker_instance_type = "kafka.m5.large"

  # A list of IDs of Security Groups to associate the created resource with, in addition to the created security group
  associated_security_group_ids = ["sg-XXXXXXXXX", "sg-YYYYYYYY"]
  
  # A list of IDs of Security Groups to allow access to the cluster
  allowed_security_group_ids = ["sg-XXXXXXXXX", "sg-YYYYYYYY"]
}

Important

In Cloud Posse's examples, we avoid pinning modules to specific versions to prevent discrepancies between the documentation and the latest released versions. However, for your own projects, we strongly advise pinning each module to the exact version you're using. This practice ensures the stability of your infrastructure. Additionally, we recommend implementing a systematic approach for updating versions to avoid unexpected changes.
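
For illustration, a minimal sketch of pinning the module to an exact release (the version number shown is only a placeholder, not a tested recommendation):

module "kafka" {
  source  = "cloudposse/msk-apache-kafka-cluster/aws"
  # Pin to the exact release you have validated, and bump it deliberately
  # (e.g. via Renovate or Dependabot) rather than floating to "latest".
  version = "2.3.0"

  # ... same inputs as in the Usage example above ...
}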

Examples

See the examples/complete directory in this repository for a working example of using this module.

Makefile Targets

Available targets:

  help                                Help screen
  help/all                            Display help for all targets
  help/short                          This help short screen
  lint                                Lint terraform code

Requirements

Name Version
terraform >= 1.0.0
aws >= 4.0

Providers

Name Version
aws >= 4.0

Modules

Name Source Version
hostname cloudposse/route53-cluster-hostname/aws 0.13.0
security_group cloudposse/security-group/aws 2.2.0
this cloudposse/label/null 0.25.0

Resources

Name Type
aws_appautoscaling_policy.default resource
aws_appautoscaling_target.default resource
aws_msk_cluster.default resource
aws_msk_configuration.config resource
aws_msk_scram_secret_association.default resource
aws_msk_broker_nodes.default data source

Inputs

Name Description Type Default Required
additional_security_group_rules A list of Security Group rule objects to add to the created security group, in addition to the ones
this module normally creates. (To suppress the module's rules, set create_security_group to false
and supply your own security group(s) via associated_security_group_ids.)
The keys and values of the objects are fully compatible with the aws_security_group_rule resource, except
for security_group_id which will be ignored, and the optional "key" which, if provided, must be unique and known at "plan" time.
For more info see https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule
and https://github.com/cloudposse/terraform-aws-security-group.
list(any) [] no
additional_tag_map Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
map(string) {} no
allow_all_egress If true, the created security group will allow egress on all ports and protocols to all IP addresses.
If this is false and no egress rules are otherwise specified, then no egress will be allowed.
bool true no
allowed_cidr_blocks A list of IPv4 CIDRs to allow access to the security group created by this module.
The length of this list must be known at "plan" time.
list(string) [] no
allowed_security_group_ids A list of IDs of Security Groups to allow access to the security group created by this module.
The length of this list must be known at "plan" time.
list(string) [] no
associated_security_group_ids A list of IDs of Security Groups to associate the created resource with, in addition to the created security group.
These security groups will not be modified and, if create_security_group is false, must have rules providing the desired access.
list(string) [] no
attributes ID element. Additional attributes (e.g. workers or cluster) to add to id,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the delimiter
and treated as a single ID element.
list(string) [] no
autoscaling_enabled To automatically expand your cluster's storage in response to increased usage, you can enable this. More info bool true no
broker_dns_records_count This variable specifies how many DNS records to create for the broker endpoints in the DNS zone provided in the zone_id variable.
This corresponds to the total number of broker endpoints created by the module.
Calculate this number by multiplying the broker_per_zone variable by the subnet count.
This variable is necessary to prevent the Terraform error:
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created.
number 0 no
broker_instance_type The instance type to use for the Kafka brokers string n/a yes
broker_per_zone Number of Kafka brokers per zone number 1 no
broker_volume_size The size in GiB of the EBS volume for the data drive on each broker node number 1000 no
certificate_authority_arns List of ACM Certificate Authority Amazon Resource Names (ARNs) to be used for TLS client authentication list(string) [] no
client_allow_unauthenticated Enable unauthenticated access bool false no
client_broker Encryption setting for data in transit between clients and brokers. Valid values: TLS, TLS_PLAINTEXT, and PLAINTEXT string "TLS" no
client_sasl_iam_enabled Enable client authentication via IAM policies. Cannot be set to true at the same time as client_tls_auth_enabled bool false no
client_sasl_scram_enabled Enable SCRAM client authentication via AWS Secrets Manager. Cannot be set to true at the same time as client_tls_auth_enabled bool false no
client_sasl_scram_secret_association_arns List of AWS Secrets Manager secret ARNs for SCRAM authentication list(string) [] no
client_sasl_scram_secret_association_enabled Enable the list of AWS Secrets Manager secret ARNs for SCRAM authentication bool true no
client_tls_auth_enabled Set true to enable the Client TLS Authentication bool false no
cloudwatch_logs_enabled Indicates whether you want to enable or disable streaming broker logs to Cloudwatch Logs bool false no
cloudwatch_logs_log_group Name of the Cloudwatch Log Group to deliver logs to string null no
context Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as null to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
any
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
no
create_security_group Set true to create and configure a new security group. If false, associated_security_group_ids must be provided. bool true no
custom_broker_dns_name Custom Route53 DNS hostname for MSK brokers. Use %%ID%% key to specify brokers index in the hostname. Example: kafka-broker%%ID%%.example.com string null no
delimiter Delimiter to be used between ID elements.
Defaults to - (hyphen). Set to "" to use no delimiter at all.
string null no
descriptor_formats Describe additional descriptors to be output in the descriptors output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
{<br> format = string<br> labels = list(string)<br>}
(Type is any so the map values can later be enhanced to provide additional options.)
format is a Terraform format string to be passed to the format() function.
labels is a list of labels, in order, to pass to format() function.
Label values will be normalized before being passed to format() so they will be
identical to how they appear in id.
Default is {} (descriptors output will be empty).
any {} no
enabled Set to false to prevent the module from creating any resources bool null no
encryption_at_rest_kms_key_arn You may specify a KMS key short ID or ARN (it will always output an ARN) to use for encrypting your data at rest string "" no
encryption_in_cluster Whether data communication among broker nodes is encrypted bool true no
enhanced_monitoring Specify the desired enhanced MSK CloudWatch monitoring level. Valid values: DEFAULT, PER_BROKER, and PER_TOPIC_PER_BROKER string "DEFAULT" no
environment ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' string null no
firehose_delivery_stream Name of the Kinesis Data Firehose delivery stream to deliver logs to string "" no
firehose_logs_enabled Indicates whether you want to enable or disable streaming broker logs to Kinesis Data Firehose bool false no
id_length_limit Limit id to this many characters (minimum 6).
Set to 0 for unlimited length.
Set to null to keep the existing setting, which defaults to 0.
Does not affect id_full.
number null no
inline_rules_enabled NOT RECOMMENDED. Create rules "inline" instead of as separate aws_security_group_rule resources.
See #20046 for one of several issues with inline rules.
See this post for details on the difference between inline rules and rule resources.
bool false no
jmx_exporter_enabled Set true to enable the JMX Exporter bool false no
kafka_version The desired Kafka software version.
Refer to https://docs.aws.amazon.com/msk/latest/developerguide/supported-kafka-versions.html for more details
string n/a yes
label_key_case Controls the letter case of the tags keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the tags input.
Possible values: lower, title, upper.
Default value: title.
string null no
label_order The order in which the labels (ID elements) appear in the id.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
list(string) null no
label_value_case Controls the letter case of ID elements (labels) as included in id,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the tags input.
Possible values: lower, title, upper and none (no transformation).
Set this to title and set delimiter to "" to yield Pascal Case IDs.
Default value: lower.
string null no
labels_as_tags Set of labels (ID elements) to include as tags in the tags output.
Default is to include all labels.
Tags with empty values will not be included in the tags output.
Set to [] to suppress all generated tags.
Notes:
The value of the name tag, if included, will be the id, not the name.
Unlike other null-label inputs, the initial setting of labels_as_tags cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
set(string)
[
"default"
]
no
name ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a tag.
The "name" tag is set to the full id string. There is no tag with the value of the name input.
string null no
namespace ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique string null no
node_exporter_enabled Set true to enable the Node Exporter bool false no
preserve_security_group_id When false and security_group_create_before_destroy is true, changes to security group rules
cause a new security group to be created with the new rules, and the existing security group is then
replaced with the new one, eliminating any service interruption.
When true or when changing the value (from false to true or from true to false),
existing security group rules will be deleted before new ones are created, resulting in a service interruption,
but preserving the security group itself.
NOTE: Setting this to true does not guarantee the security group will never be replaced,
it only keeps changes to the security group rules from triggering a replacement.
See the terraform-aws-security-group README for further discussion.
bool false no
properties Contents of the server.properties file. Supported properties are documented in the MSK Developer Guide map(string) {} no
public_access_enabled Enable public access to MSK cluster (given that all of the requirements are met) bool false no
regex_replace_chars Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
string null no
s3_logs_bucket Name of the S3 bucket to deliver logs to string "" no
s3_logs_enabled Indicates whether you want to enable or disable streaming broker logs to S3 bool false no
s3_logs_prefix Prefix to append to the S3 folder name logs are delivered to string "" no
security_group_create_before_destroy Set true to enable terraform create_before_destroy behavior on the created security group.
We only recommend setting this false if you are importing an existing security group
that you do not want replaced and therefore need full control over its name.
Note that changing this value will always cause the security group to be replaced.
bool true no
security_group_create_timeout How long to wait for the security group to be created. string "10m" no
security_group_delete_timeout How long to retry on DependencyViolation errors during security group deletion from
lingering ENIs left by certain AWS services such as Elastic Load Balancing.
string "15m" no
security_group_description The description to assign to the created Security Group.
Warning: Changing the description causes the security group to be replaced.
string "Managed by Terraform" no
security_group_name The name to assign to the created security group. Must be unique within the VPC.
If not provided, will be derived from the null-label.context passed in.
If create_before_destroy is true, will be used as a name prefix.
list(string) [] no
security_group_rule_description The description to place on each security group rule. The %s will be replaced with the protocol name string "Allow inbound %s traffic" no
stage ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' string null no
storage_autoscaling_disable_scale_in If the value is true, scale in is disabled and the target tracking policy won't remove capacity from the scalable resource bool false no
storage_autoscaling_max_capacity Maximum size the autoscaling policy can scale storage. Defaults to broker_volume_size number null no
storage_autoscaling_target_value Percentage of storage used to trigger autoscaled storage increase number 60 no
subnet_ids Subnet IDs for Client Broker list(string) n/a yes
tags Additional tags (e.g. {'BusinessUnit': 'XYZ'}).
Neither the tag keys nor the tag values will be modified by this module.
map(string) {} no
tenant ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for string null no
vpc_id The ID of the VPC where the Security Group will be created. string n/a yes
zone_id Route53 DNS Zone ID for MSK broker hostnames string null no

Outputs

Name Description
bootstrap_brokers Comma separated list of one or more hostname:port pairs of Kafka brokers suitable to bootstrap connectivity to the Kafka cluster
bootstrap_brokers_public_sasl_iam Comma separated list of one or more DNS names (or IP addresses) and SASL IAM port pairs for public access to the Kafka cluster using SASL/IAM
bootstrap_brokers_public_sasl_scram Comma separated list of one or more DNS names (or IP addresses) and SASL SCRAM port pairs for public access to the Kafka cluster using SASL/SCRAM
bootstrap_brokers_public_tls Comma separated list of one or more DNS names (or IP addresses) and TLS port pairs for public access to the Kafka cluster using TLS
bootstrap_brokers_sasl_iam Comma separated list of one or more DNS names (or IP addresses) and SASL IAM port pairs for access to the Kafka cluster using SASL/IAM
bootstrap_brokers_sasl_scram Comma separated list of one or more DNS names (or IP addresses) and SASL SCRAM port pairs for access to the Kafka cluster using SASL/SCRAM
bootstrap_brokers_tls Comma separated list of one or more DNS names (or IP addresses) and TLS port pairs for access to the Kafka cluster using TLS
broker_endpoints List of broker endpoints
cluster_arn Amazon Resource Name (ARN) of the MSK cluster
cluster_name MSK Cluster name
config_arn Amazon Resource Name (ARN) of the MSK configuration
current_version Current version of the MSK Cluster
hostnames List of MSK Cluster broker DNS hostnames
latest_revision Latest revision of the MSK configuration
security_group_arn The ARN of the created security group
security_group_id The ID of the created security group
security_group_name n/a
storage_mode Storage mode for supported storage tiers
zookeeper_connect_string Comma separated list of one or more hostname:port pairs to connect to the Apache Zookeeper cluster
zookeeper_connect_string_tls Comma separated list of one or more hostname:port pairs to connect to the Apache Zookeeper cluster via TLS
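
As a quick illustration of consuming these outputs, a root module that calls this module as module "kafka" (as in the Usage example above) could re-export the TLS bootstrap brokers. This is a sketch, not part of the module itself:

output "kafka_bootstrap_brokers_tls" {
  description = "TLS bootstrap brokers of the MSK cluster"
  value       = module.kafka.bootstrap_brokers_tls
}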

Related Projects

Check out these related projects.

References

For additional context, refer to some of these links.

  • Terraform Standard Module Structure - HashiCorp's standard module structure is a file and directory layout we recommend for reusable modules distributed in separate repositories.
  • Terraform Module Requirements - HashiCorp's guidance on all the requirements for publishing a module. Meeting the requirements for publishing a module is extremely easy.
  • Terraform random_integer Resource - The resource random_integer generates random values from a given range, described by the min and max attributes of a given resource.
  • Terraform Version Pinning - The required_version setting can be used to constrain which versions of the Terraform CLI can be used with your configuration

Tip

Use Terraform Reference Architectures for AWS

Use Cloud Posse's ready-to-go terraform architecture blueprints for AWS to get up and running quickly.

✅ We build it together with your team.
✅ Your team owns everything.
✅ 100% Open Source and backed by fanatical support.

Request Quote

📚 Learn More

Cloud Posse is the leading DevOps Accelerator for funded startups and enterprises.

Your team can operate like a pro today.

Ensure that your team succeeds by using Cloud Posse's proven process and turnkey blueprints. Plus, we stick around until you succeed.

Day-0: Your Foundation for Success

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Deployment Strategy. Adopt a proven deployment strategy with GitHub Actions, enabling automated, repeatable, and reliable software releases.
  • Site Reliability Engineering. Gain total visibility into your applications and services with Datadog, ensuring high availability and performance.
  • Security Baseline. Establish a secure environment from the start, with built-in governance, accountability, and comprehensive audit logs, safeguarding your operations.
  • GitOps. Empower your team to manage infrastructure changes confidently and efficiently through Pull Requests, leveraging the full power of GitHub Actions.

Request Quote

Day-2: Your Operational Mastery

  • Training. Equip your team with the knowledge and skills to confidently manage the infrastructure, ensuring long-term success and self-sufficiency.
  • Support. Benefit from a seamless communication over Slack with our experts, ensuring you have the support you need, whenever you need it.
  • Troubleshooting. Access expert assistance to quickly resolve any operational challenges, minimizing downtime and maintaining business continuity.
  • Code Reviews. Enhance your team's code quality with our expert feedback, fostering continuous improvement and collaboration.
  • Bug Fixes. Rely on our team to troubleshoot and resolve any issues, ensuring your systems run smoothly.
  • Migration Assistance. Accelerate your migration process with our dedicated support, minimizing disruption and speeding up time-to-value.
  • Customer Workshops. Engage with our team in weekly workshops, gaining insights and strategies to continuously improve and innovate.

Request Quote

✨ Contributing

This project is under active development, and we encourage contributions from our community.

Many thanks to our outstanding contributors:

For ๐Ÿ› bug reports & feature requests, please use the issue tracker.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Review our Code of Conduct and Contributor Guidelines.
  2. Fork the repo on GitHub
  3. Clone the project to your own machine
  4. Commit changes to your own branch
  5. Push your work back up to your fork
  6. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

🌎 Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

📰 Newsletter

Sign up for our newsletter and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know. Dropped straight into your Inbox every week, and usually a 5-minute read.

📆 Office Hours

Join us every Wednesday via Zoom for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a live Q&A that you can't find anywhere else. It's FREE for everyone!

License

License

Preamble to the Apache License, Version 2.0

Complete license is available in the LICENSE file.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.

Copyrights

Copyright © 2020-2024 Cloud Posse, LLC

terraform-aws-msk-apache-kafka-cluster's Issues

Allow conditional DNS record creation

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

As a user, I'd like the ability to conditionally create the host records from this module.

Expected Behavior

When var.zone_id is null, the hostname records should not be created.

Use Case

My organization has a different convention for DNS record naming, so the module-created DNS records are unused.

Describe Ideal Solution

Add an additional clause to the hostname module to be disabled if zone_id is not specified.
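
A minimal sketch of what that guard could look like, assuming the internal hostname module keeps its current count-based wiring (illustrative only, not the module's actual code):

module "hostname" {
  source  = "cloudposse/route53-cluster-hostname/aws"
  version = "0.13.0"
  # Hypothetical guard: create no records when no zone is supplied
  count   = local.enabled && var.zone_id != null ? var.broker_dns_records_count : 0

  zone_id = var.zone_id
  # ... remaining arguments unchanged ...
}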

input variable `security_groups` is not being utilized

Kafka brokers only use the module-created default security group. The security_groups variable that is passed in is not being used.

main.tf

resource "aws_msk_cluster" "default" {
  count                  = module.this.enabled ? 1 : 0
  cluster_name           = module.this.id
  kafka_version          = var.kafka_version
  number_of_broker_nodes = var.number_of_broker_nodes
  enhanced_monitoring    = var.enhanced_monitoring

  broker_node_group_info {
    instance_type   = var.broker_instance_type
    ebs_volume_size = var.broker_volume_size
    client_subnets  = var.subnet_ids
    security_groups = aws_security_group.default.*.id
  }
}

The line security_groups = aws_security_group.default.*.id does not include the security_groups variable. Maybe replace it with security_groups = concat(aws_security_group.default.*.id, var.security_groups).
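
A minimal sketch of that suggested change, assuming a var.security_groups input of type list(string) exists (hypothetical; not the module's actual code):

broker_node_group_info {
  instance_type   = var.broker_instance_type
  ebs_volume_size = var.broker_volume_size
  client_subnets  = var.subnet_ids
  # Hypothetical fix: append caller-supplied security groups to the module-created one
  security_groups = concat(aws_security_group.default.*.id, var.security_groups)
}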

Rolling update cluster when instance type is changed

Describe the Feature

MSK cluster gets completely recreated (destroyed and then created) when instance type is changed.

Expected Behavior

I would expect the cluster to be rolling-updated, without downtime (assuming the cluster has the proper size: 3 brokers, 3 AZs, partitions replicated).
AWS supports this feature, and it works with both the CLI and the console.
https://aws.amazon.com/about-aws/whats-new/2021/01/amazon-msk-now-supports-the-ability-to-change-the-size-or-family/?nc1=h_ls

Alternatives Considered

Edit brokers in AWS console. But that is something I would like to avoid.

Invalid index error when changing `number_of_broker_nodes` variable

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

Invalid index error when changing the number_of_broker_nodes variable from 2 to 4. (The number of AZs is 2 instead of 3 like the example, but I'm guessing this is not the underlying problem.)

Error: Invalid index

  on .terraform/modules/msk_cluster/main.tf line 152, in module "hostname":
 152:   records = [split(":", local.bootstrap_brokers_combined_list[count.index])[0]]
    |----------------
    | count.index is 2
    | local.bootstrap_brokers_combined_list is list of string with 2 elements

The given key does not identify an element in this collection value.


Error: Invalid index

  on .terraform/modules/msk_cluster/main.tf line 152, in module "hostname":
 152:   records = [split(":", local.bootstrap_brokers_combined_list[count.index])[0]]
    |----------------
    | count.index is 3
    | local.bootstrap_brokers_combined_list is list of string with 2 elements

The given key does not identify an element in this collection value.

Releasing state lock. This may take a few moments...

Expected Behavior

The plan/apply should succeed and additional nodes added to the internal "hostname" module.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Have a config similar to the complete example but with 2 AZs instead of 3. Set number_of_broker_nodes to 2.
  2. terraform apply
  3. Change number_of_broker_nodes from 2 to 4. Run plan.
  4. terraform plan
  5. See error

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment (please complete the following information):

Terraform v0.14.7
+ provider registry.terraform.io/hashicorp/aws v3.33.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.0.3
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

Additional Context

Add any other context about the problem here.

Invalid required aws provider version

Describe the Bug

This output use the attribute storage_mode:

output "storage_mode" {
  value       = one(aws_msk_cluster.default[*].storage_mode)
  description = "Storage mode for supported storage tiers"
}

The problem is that this attribute was only introduced in AWS provider version 4.40.0, but versions.tf (and the README) accept versions >= 4.0:

required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0"
    }
  }

When a project sets the AWS provider version to 4.39.0 or less, Terraform throws:

Error: Unsupported attribute
│
│   on .terraform/modules/kafka/outputs.tf line 12, in output "storage_mode":
│   12:   value       = one(aws_msk_cluster.default[*].storage_mode)
│
│ This object does not have an attribute named "storage_mode".

Expected Behavior

Set the correct required version for the AWS provider so Terraform can handle compatibility.
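
A minimal sketch of the suggested constraint in versions.tf, assuming 4.40.0 is the first provider release that supports storage_mode as stated above:

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # storage_mode requires a provider release that supports it
      version = ">= 4.40.0"
    }
  }
}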

Steps to Reproduce

Create a file main.tf and put this:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.35.0"
    }
  }
}

module "kafka" {
  source = "cloudposse/msk-apache-kafka-cluster/aws"

  kafka_version        = "3.3.2"
  namespace            = "eg"
  stage                = "prod"
  name                 = "app"
  vpc_id               = "vpc-XXXXXXXX"
  subnet_ids           = ["subnet-XXXXXXXXX", "subnet-YYYYYYYY"]
  broker_per_zone      = 2
  broker_instance_type = "kafka.m5.large"

  # A list of IDs of Security Groups to associate the created resource with, in addition to the created security group
  associated_security_group_ids = ["sg-XXXXXXXXX", "sg-YYYYYYYY"]
  
  # A list of IDs of Security Groups to allow access to the cluster
  allowed_security_group_ids = ["sg-XXXXXXXXX", "sg-YYYYYYYY"]
}

run:

$ terraform init
$ terraform plan

Screenshots

No response

Environment

  • Terraform: 1.5.2
  • AWS Provider: 4.35.0
  • Module version: 2.3.0

Additional Context

No response

Outputs of broker lists creating perpetual diff on MSK clusters with more than 1 broker per AZ

Describe the Bug

This occurs on versions 1.2.0 / 1.3.0 / 1.4.0. Newer versions introduce breaking changes, so we have yet to upgrade.

Using outputs such as module.msk-apache-kafka-cluster.bootstrap_brokers_tls, we are seeing a perpetual diff on TF runs on MSK clusters with more than 1 broker per AZ.

~ bootstrap_brokers_tls_list = [
- "b-1.msk...:9094",
"b-2.msk...:9094",
- "b-3.msk...:9094",
+ "b-5.msk...:9094",
+ "b-6.msk...:9094",

It seems that every TF run randomly selects 3 brokers for this output (and other similar broker list outputs), which creates a diff on each TF run.

Is this expected behaviour? It creates cascading diffs in environments using this output. Is there a workaround to stop this perpetual diff from happening?

Expected Behavior

Ideally, I assume we would want all brokers to appear in the output, or at least the first 3 brokers, so we would not have a perpetual diff.

Steps to Reproduce

Create a cluster with more than 3 brokers (3 AZs and more than 1 broker per AZ), and use this output.

Screenshots

No response

Environment

No response

Additional Context

No response

Error Connecting to Service Discovery Domains

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

When connecting to the MSK cluster via the service discovery Route53 records created by this module (via cloudposse/route53-cluster-hostname/aws in this module) rather than the AWS-managed Route53 records, the following error occurs on the client side:

 Caused by: KafkaJSConnectionError: Connection error: Hostname/IP does not match certificate's altnames: Host: msk-sandbox-broker-3.sandbox.mgmt.REDACTED.net. is not in the cert's altnames: DNS:*.REDACTED-mgmt-uw2-sandbox-m.REDACTED.c3.kafka.us-west-2.amazonaws.com

Expected Behavior

A clear and concise description of what you expected to happen.

Connecting to the module's service discovery records should be possible.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Initiate a Kafka client to the hostname output of this module
  2. See error

Screenshots

If applicable, add screenshots or logs to help explain your problem.

N/A

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: N/A (not dependent on OS)
  • Version 0.8.4

Additional Context

It's possible that setting advertised.listeners might resolve this issue.

If this is the case, then advertised.listeners with the value of module.hostname.*.hostname (plus whatever formatting is necessary for the Kafka configuration property) should be merged into server_properties here:

resource "aws_msk_configuration" "config" {
count = local.enabled ? 1 : 0
kafka_versions = [var.kafka_version]
name = module.this.id
description = "Manages an Amazon Managed Streaming for Kafka configuration"
server_properties = join("\n", [for k in keys(var.properties) : format("%s = %s", k, var.properties[k])])
}

IAM-based client authentication

Describe the Feature

AWS have released IAM-based SASL security for MSK. It'd be great if we could configure support for this via the Cloudposse module.

Expected Behavior

Users should be able to specify IAM-based SASL security for clients as an alternative to SASL-SCRAM.

Use Case

We're currently looking at deploying an MSK cluster and would like the ability to configure client access via IAM instead of via SCRAM credentials in Secrets Manager.

Describe Ideal Solution

An option in the Cloudposse module to enable IAM-based SASL security that calls CreateCluster with the correct SASL options to enable IAM-based security.

Alternatives Considered

SASL-SCRAM is an existing client auth option but there's currently no way to update a SCRAM cluster to use IAM-based client security. IAM-based client security would add value for users that have a unified IAM-based security architecture.

Additional Context

Happy to raise a PR for this although I am (like everyone) tight for time and not a tf expert.

Version 2.4.0 tries to downscale storage back to original value

Describe the Bug

We have a cluster created with

  broker_volume_size            = 1000
...
  storage_autoscaling_max_capacity = 4000

and storage has autoscaled to 2662.

When upgrading to module version 2.4.0, the Terraform plan includes the following:

broker_node_group_info {
  storage_info {
    ebs_storage_info {
      volume_size : 2662 -> 1000
    }
  }
}

This looks like it will downscale the storage back to the original size. When upgrading to version 2.3.1 we don't see this behavior.

Expected Behavior

TF plan should not include a change to volume_size just by upgrading module version.

Steps to Reproduce

I haven't attempted to reproduce it outside of our existing cluster, but I think this would work: Using a module version 2.3.1 or less, use storage_autoscaling_max_capacity greater than the broker_volume_size, create the cluster, then get storage to autoscale. Then upgrade the module to 2.4.0.

Screenshots

No response

Environment

  • Terraform v1.3.2 (Terraform Cloud)
  • upgrading module version from 0.8.4 to 2.4.0 causes the issue, but upgrading from 0.8.4 to 2.3.1 doesn't

Additional Context

No response

Only create `aws_msk_configuration` if `parameters` are supplied

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

Only create aws_msk_configuration if parameters are supplied; otherwise an empty configuration is created and associated with the MSK cluster.

resource "aws_msk_configuration" "config" {
count = local.enabled ? 1 : 0
kafka_versions = [var.kafka_version]
name = module.this.id
description = "Manages an Amazon Managed Streaming for Kafka configuration"
server_properties = join("\n", [for k in keys(var.properties) : format("%s = %s", k, var.properties[k])])
}

configuration_info is optional for MSK clusters, so a configuration doesn't need to be created:

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/msk_cluster#configuration_info
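
A minimal sketch of the suggested guard (illustrative only, not the module's actual code):

resource "aws_msk_configuration" "config" {
  # Hypothetical condition: skip the configuration entirely when no properties are supplied
  count             = local.enabled && length(var.properties) > 0 ? 1 : 0
  kafka_versions    = [var.kafka_version]
  name              = module.this.id
  description       = "Manages an Amazon Managed Streaming for Kafka configuration"
  server_properties = join("\n", [for k in keys(var.properties) : format("%s = %s", k, var.properties[k])])
}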

MSK cluster to get recreated when sasl is reverted to null

Describe the Bug

In a certain case the MSK config has a tls block with enabled: false whilst also having SASL/SCRAM enabled.
E.g. output from aws kafka list-clusters:

            "ClientAuthentication": {
                "Sasl": {
                    "Scram": {
                        "Enabled": true
                    },
                    "Iam": {
                        "Enabled": true
                    }
                },
                "Tls": {
                    "CertificateAuthorityArnList": [],
                    "Enabled": false
                },
                "Unauthenticated": {
                    "Enabled": false
                }
            },

When running a terraform plan it wants to recreate the cluster

   module.kafka.aws_msk_cluster.default[0] must be replaced
-/+ resource "aws_msk_cluster" "default" {
      ~ client_authentication {
          - tls { # forces replacement
              - certificate_authority_arns = [] -> null
            }
            # (1 unchanged block hidden)
        }

As a workaround I added ignore_changes to the aws_msk_cluster resource:

  lifecycle {
    ignore_changes = [
      # Ignore changes to ebs_volume_size in favor of autoscaling policy
      broker_node_group_info[0].ebs_volume_size,
      client_authentication[0].tls
    ]
  }

Expected Behavior

I expect this not to replace the cluster, as there is really no change.

Steps to Reproduce

See the above description.
I believe that if you enable unauthenticated access via the console and then disable it, the cluster can end up with the extra info in the config.

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: OSX
  • Version 12.1 monterey

Additional Context

Add any other context about the problem here.

example needs to upgrade dynamic-subnets to 0.39.8, otherwise it fails with: Call to function "map" failed: the "map" function was deprecated...

$ terraform validate
╷
│ Error: Error in function call
│
│   on .terraform/modules/subnets/private.tf line 8, in module "private_label":
│    8:     map(var.subnet_type_tag_key, format(var.subnet_type_tag_value_format, "private"))
│     ├────────────────
│     │ var.subnet_type_tag_key is a string, known only after apply
│     │ var.subnet_type_tag_value_format is a string, known only after apply
│
│ Call to function "map" failed: the "map" function was deprecated in Terraform v0.12 and is no longer available; use tomap({ ... }) syntax to write a literal map.
╵
╷
│ Error: Error in function call
│
│   on .terraform/modules/subnets/public.tf line 8, in module "public_label":
│    8:     map(var.subnet_type_tag_key, format(var.subnet_type_tag_value_format, "public"))
│     ├────────────────
│     │ var.subnet_type_tag_key is a string, known only after apply
│     │ var.subnet_type_tag_value_format is a string, known only after apply
│
│ Call to function "map" failed: the "map" function was deprecated in Terraform v0.12 and is no longer available; use tomap({ ... }) syntax to write a literal map.
╵
module "dynamic-subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "0.39.8"
  # insert the 16 required variables here
}

Client TLS auth and SASL auth cannot be enabled at the same time

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

Currently for AWS MSK, the AWS console allows enabling unauthenticated access, IAM-based authentication, and SASL-based auth, and we can choose all three at the same time, thus exposing endpoints at 9094, 9096 and 9098.
When I use v0.8.1 of this module and pass the following flags:
client_sasl_iam_enabled = true
client_sasl_scram_enabled = true
client_tls_auth_enabled = true

I get an error
Error: Conflicting configuration arguments

on .terraform/modules/kafka/main.tf line 104, in resource "aws_msk_cluster" "default":
104: resource "aws_msk_cluster" "default" {

"client_authentication.0.sasl": conflicts with client_authentication.0.tls

Expected Behavior

The module should allow all the three flags to be set as is allowed from the AWS console.

Steps to Reproduce

Steps to reproduce the behavior:

  1. use the terraform module to create the msk cluster with all the three flags set to true.
  2. The module will throw an error.
  3. Go to aws console and enable all the 3 flags for a cluster and it will work.

Environment (please complete the following information):

we are using terraform version v0.14.8, terragrunt v0.28.14 and aws provider version 3.67.0

Error: creating Security Group: InvalidVpcID.NotFound: The vpc ID does not exist

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

When applying the terraform module, the module doesn't find or recognize the VPC ID, even though it's there

Expected Behavior

Security group is created

Steps to Reproduce

Steps to reproduce the behavior:
I've created a company module specifically from it with this code:

module "msk-apache-kafka-cluster" {
  source  = "cloudposse/msk-apache-kafka-cluster/aws"
  version = "1.1.1"
  
  name = var.name
  vpc_id = var.vpc_id
  subnet_ids = var.subnet_ids
  kafka_version = var.kafka_version
  associated_security_group_ids = var.associated_security_group_ids
  broker_instance_type = var.broker_instance_type
  broker_per_zone = var.broker_per_zone
}
variables.tf
variable "name" {
  type = string
}

variable "vpc_id" {
  type = string
}

variable "subnet_ids" {
  type = list(string)
}

variable "kafka_version" {
  type = string
}

variable "associated_security_group_ids" {
  type = list(string)
}

variable "broker_instance_type" {
  type = string
}

variable "broker_per_zone" {
  type = number
}

As I am using terragrunt to invoke the tf module, my terragrunt.hcl looks like this:

terraform {
  source = "[email protected]:myplace/terraform-modules.git//snowplow?ref=snowplow/v1.1.0"
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
    name = "myplace-snowplow-test"
    vpc_id = dependency.vpc.outputs.vpc_id
    subnet_ids = dependency.vpc.outputs.private_subnets
    kafka_version = "2.8.1"
    associated_security_group_ids = [dependency.vpc.outputs.default_sg_id]
    broker_instance_type = "kafka.t3.small"
    broker_per_zone = 1
}

It has a dependency on the outputs of the vpc module, which show up like this:

azs = tolist([
  "us-east-1a",
  "us-east-1b",
  "us-east-1c",
])
cgw_ids = []
database_subnets = [
  "subnet-05ace04da69c0a5c3",
  "subnet-03f094702e6413a5c",
  "subnet-0e29e3ea07b3161bd",
]
default_sg_id = "sg-019db31e6084d695b"
intra_subnets = [
  "subnet-02fa8b695b63f36be",
  "subnet-068a94b0fcb72c6bf",
  "subnet-0edb9a2c27f57b067",
]
nat_public_ips = tolist([
  "3.231.112.27",
])
private_subnets = [
  "subnet-047d998dd1bb4e300",
  "subnet-02627f60507ea09fb",
  "subnet-00ffed109a79644da",
]
public_subnets = [
  "subnet-0b82cf0a6e280600a",
  "subnet-0512c45da9cac36f2",
  "subnet-0588f61d9b5307245",
]
this_customer_gateway = {}
vpc_cidr = "10.2.0.0/16"
vpc_id = "vpc-0adb2021bba46a1c5"```
When I try to run the snowplow module however, I'm getting the following error:
```Error: creating Security Group (myplace-snowplow-test): InvalidVpcID.NotFound: The vpc ID 'vpc-0adb2021bba46a1c5' does not exist
โ”‚ 	status code: 400, request id: 3f7c6a9c-0025-4baa-8345-f44496a95c7f
โ”‚
โ”‚   with module.msk-apache-kafka-cluster.module.broker_security_group.aws_security_group.default[0],
โ”‚   on .terraform/modules/msk-apache-kafka-cluster.broker_security_group/main.tf line 24, in resource "aws_security_group" "default":
โ”‚   24: resource "aws_security_group" "default" {```
That vpc exists (per the outputs above), and it's in the console as well.  Even when I hardcode that variable in the terragrunt.hcl, it gives the same error.


## Environment (please complete the following information):

MacOS: Ventura
Terraform Version: 1.0.2

lifecycle prevent_destroy option

Describe the Feature

Currently there's no way to prevent destruction of the MSK cluster/configuration if a terraform destroy is run, or if someone accidentally changes something that would force a rebuild. We can't set prevent_destroy on the module as a whole due to Terraform limitations, so a boolean input that defaults to false and, when set to true, sets prevent_destroy would be super useful.

Expected Behavior

A boolean input should be added, prevent_destroy, that defaults to false and is used to set prevent_destroy on the MSK cluster and configuration at least.
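
For reference, this is roughly what the setting looks like on the cluster resource; note that Terraform only accepts a literal value for prevent_destroy, which is why it cannot simply be wired to a module input (a sketch, not the module's actual code):

resource "aws_msk_cluster" "default" {
  # ... existing arguments ...

  lifecycle {
    # Must be a literal true/false; Terraform does not allow variables here
    prevent_destroy = true
  }
}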

MSK Multi-VPC Connectivity and Cluster Policy

Describe the Feature

MSK supports multi-VPC connectivity and cluster policies; Terraform has been updated with resources for these, but the Cloud Posse module doesn't support them yet.

Expected Behavior

Support multi-vpc connectivity and cluster policy resources

Use Case

Having other AWS accounts able to connect to an MSK cluster.

Describe Ideal Solution

Include resources for multi vpc connectivity and cluster policy

Alternatives Considered

vanilla terraform

Additional Context

No response

Enabling IAM and SCRAM authentication at the same time fails

Describe the Bug

When enabling both client_sasl_iam_enabled and client_sasl_scram_enabled, an "Error: Too many sasl blocks" is produced.

Expected Behavior

Having multiple authentication methods is supported by AWS MSK and also by the AWS provider; however, the terraform-aws-msk-apache-kafka-cluster module does not work when enabling multiple SASL authentication methods at the same time.

Steps to Reproduce

  1. Create a simple module consuming the terraform-aws-msk-apache-kafka-cluster module and set client_sasl_iam_enabled and client_sasl_scram_enabled to true, providing a proper AWS Secrets Manager secret ARN in the client_sasl_scram_secret_association_arns variable.
  2. Run terraform init
  3. Run terraform plan
  4. The mentioned Error: Too many sasl blocks is going to be produced.


Environment (please complete the following information):

  • OS: OSX
  • Module Version: 0.7.3
  • Terraform Version: 1.0.2
  • AWS Provider Version: 3.63.0

Behaviour when SASL IAM and SCRAM enabled together

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

When both IAM and SCRAM are enabled together, it seems that the existing MSK cluster (with no auth) is destroyed first and then a new cluster is created with these properties enabled, which is different from what happens in the AWS console.

Expected Behaviour

In the AWS console, we can enable the security settings (IAM and SCRAM) on an existing cluster and the update happens fine without deleting the cluster first. The existing cluster should not be deleted.

Steps to Reproduce

Steps to reproduce the behaviour:

  1. Start an MSK cluster with no auth using terraform module and get it in the Active state.
  2. Enable IAM and SCRAM by setting the properties true (client_sasl_iam_enabled and client_sasl_scram_enabled)
  3. terraform apply and then it will try to delete the existing cluster

Additional Context

  • The cluster with no auth was created using v0.6.0, and for IAM and SCRAM we upgraded to the latest version of the Kafka module.
  • we are using terraform version v0.14.8, terragrunt v0.28.14 and aws provider version 3.67.0

Cluster gets recreated(deleted then created) for the latest version v2.3.0

Describe the Bug

When trying to upgrade the Kafka module to the current latest version, it has unexpected behaviour. It deletes the current Kafka cluster and tries to recreate it. Also, the Security Group gets deleted and recreated with the same rules. The only difference is that it changes its name from "default" to "cbd":

module.test-kafka-cluster.module.msk_cluster.module.security_group.aws_security_group.cbd[0] will be created

+ resource "aws_security_group" "cbd" {
    + arn                    = (known after apply)
    + description            = "Managed by Terraform"
    + egress                 = (known after apply)
    + id                     = (known after apply)
    + ingress                = (known after apply)
    + name                   = (known after apply)
    + name_prefix            = (known after apply)
    + owner_id               = (known after apply)
    + revoke_rules_on_delete = false
    + tags                   = {
        + "Environment" = "test-kafka-upgrade"
        + "ManagedBy"   = "Terraform"
        + "Name"        = "devops-test-kafka-upgrade"
        + "Namespace"   = "devops"
        + "Team"        = "devops"
      }
    + tags_all               = {
        + "Environment" = "test-kafka-upgrade"
        + "ManagedBy"   = "Terraform"
        + "Name"        = "devops-test-kafka-upgrade"
        + "Namespace"   = "devops"
        + "Team"        = "devops"
      }

module.test-kafka-cluster.module.msk_cluster.module.security_group.random_id.rule_change_forces_new_security_group[0] will be created

+ resource "random_id" "rule_change_forces_new_security_group" {
    + b64_std     = (known after apply)
    + b64_url     = (known after apply)
    + byte_length = 3
    + dec         = (known after apply)
    + hex         = (known after apply)
    + id          = (known after apply)

module.test-kafka-cluster.module.msk_cluster.module.broker_security_group.aws_security_group.default[0] will be destroyed
(because aws_security_group.default is not in configuration)

- resource "aws_security_group" "default" {
    - arn         = "arn:aws:ec2:AWS-AZ:AWS-ACCOUNT-ID:security-group/sg-SG-ID" -> null
    - description = "MSK broker access" -> null

module.test-kafka-cluster.module.msk_cluster.aws_msk_cluster.default[0] must be replaced

-/+ resource "aws_msk_cluster" "default" {
      ~ broker_node_group_info {
          ~ ebs_volume_size = 20 -> (known after apply)
          ~ security_groups = [
              - "sg-0e1e8b46171217159",
            ] -> (known after apply) # forces replacement
            # (3 unchanged attributes hidden)
            # (2 unchanged blocks hidden)
        }

Expected Behavior

Upgrade should've been smooth, and the attempt of deleting and creating again the kafka cluster should never happen.

Steps to Reproduce

Tried to upgrade from kafka module from version v1.3.1 to v2.3.0.

Screenshots

No response

Environment

No response

Additional Context

No response

Protocols used in allowed_cidr_blocks don't include SASL/SCRAM public cluster

Describe the Bug

When using the module, if you specify a list of allowed_cidr_blocks, a security group is created and a rule is added for each combination of cidr block and every one of the protocols defined here:

The list does not contain 9196, which is the port used with SASL/SCRAM when having a publicly accessible cluster. Attached is an example image.


Expected Behavior

Access to 9196 should also be whitelisted for the cidr_blocks specified.
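
Until the module's port list includes 9196, one possible workaround is to add the rule yourself on the module call via the additional_security_group_rules input; a sketch, with a placeholder CIDR:

additional_security_group_rules = [
  {
    type        = "ingress"
    from_port   = 9196
    to_port     = 9196
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"] # placeholder; use your allowed CIDRs
    description = "Public SASL/SCRAM access"
  }
]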

Steps to Reproduce

Create a cluster and then make it publicly accessible by satisfying first the requirements listed here: https://docs.aws.amazon.com/msk/latest/developerguide/public-access.html

Screenshots

No response

Environment

No response

Additional Context

No response

Broker DNS hostnames are not comma separated; instead they are printed as one complete string.

Found a bug? Maybe our Slack Community can help.

Slack Community

Broker DNS hostnames are not comma-separated, instead, they are printed as one complete string.

For AWS MSK, the output of broker DNS hostnames is printed as one big string without any comma separation.
I'm building three AWS brokers.

example:
"hostname": "sample-broker-1-msk.domain.comsample-broker-2-msk.domain.comsample-broker-3-msk.domain.com"

Expected Behavior

The expected behavior is:
"hostname": "sample-broker-1-msk.domain.com,sample-broker-2-msk.domain.com,sample-broker-3-msk.domain.com"

server properties change does not apply latest revision in the MSK cluster only creates a new revision

Describe the Bug

When we create a new configuration by adding some parameters to the server_properties, the new configuration is created as a new revision in AWS but the same is not applied to the MSK cluster already launched. This is similar to what is reported here.

Expected Behavior

When adding or removing configuration in the server properties, the new configuration should not only be created as a new revision but should also be applied to the MSK cluster automatically.

Steps to Reproduce

Modify properties that are being passed here. Plan the changes and apply. The new revision of the configuration is created but the same is not applied on the MSK cluster.

Screenshots

No response

Environment

No response

Additional Context

No response

unable to load module from registry

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

Unable to load module from terraform registry.

Expected Behavior

User should be able to load module from the terraform registry as instructed in the readme.

Steps to Reproduce

Steps to reproduce the behavior:

  1. git clone https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster
  2. cd examples/complete
  3. Edit 'main.tf' -> 'module "kafka"' on line 33 - update source to download from terraform - see below.
  # source = "../../"
  source = "cloudposse/apache-kafka-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "0.6.3"
  4. Run terraform init
  5. See error
Error: Module not found
โ”‚ 
โ”‚ Module "kafka" (from main.tf:33) cannot be found in the module registry at registry.terraform.io.

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: [OSX]

Additional Context

I wanted to evaluate this module for my internal use case; however, the module runs into an issue while downloading from the Terraform registry. https://registry.terraform.io/modules/cloudposse/msk-apache-kafka-cluster/aws/latest
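
Note that the source in step 3 drops the msk- prefix; a sketch using the registry path from this README's Usage section, which is presumably what was intended:

module "kafka" {
  source  = "cloudposse/msk-apache-kafka-cluster/aws"
  version = "0.6.3"
  # ... remaining inputs from examples/complete ...
}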

broker storage: BadRequestException: The request does not include any updates to the EBS volumes of the cluster

Describe the Bug

Changing the broker volume size along with the autoscaling max capacity and target values caused Terraform to end up in a state where it permanently throws an error during runs and tries to remove the provisioned throughput section from the broker config without any changes to the code.

Error: updating MSK Cluster (arn:aws:kafka) broker storage: BadRequestException: The request does not include any updates to the EBS volumes of the cluster. Verify the request, then try again. { RespMetadata: { StatusCode: 400, RequestID: "" }, Message: "The request does not include any updates to the EBS volumes of the cluster. Verify the request, then try again." }
with module.kafka_prod.module.kafka.aws_msk_cluster.default[0]
on .terraform/modules/kafka_prod.kafka/main.tf line 124, in resource "aws_msk_cluster" "default":
resource "aws_msk_cluster" "default" {


I suspect this MIGHT have something to do with the fact that volume_size is being ignored in the lifecycle policy, but this might be an incorrect assumption.

Expected Behavior

no changes are expected

Steps to Reproduce

Hard to pinpoint, but I was increasing storage in the UI and then catching up with the changes in Terraform; maybe this caused some differences?

Screenshots

No response

Environment

Linux
module version 2.0.0, also tried 2.3.0
Terraform version 1.5.3, also tested on 1.6.4

Additional Context

No response

Please update the module to use the latest cloudposse / terraform-aws-security-group tag

Hello,

When trying to expand the allowed_cidr_blocks variable with another CIDR, I get the below error:

Error: [WARN] A duplicate Security Group rule was found on (sg-0da51b4f2466544ea). This may be
a side effect of a now-fixed Terraform issue causing two security groups with
identical attributes but different source_security_group_ids to overwrite each
other in the state. See https://github.com/hashicorp/terraform/pull/2376 for more
information and instructions for recovery.

Error: InvalidPermission.Duplicate: the specified rule "peer: 172.25.0.0/16, TCP, from port: 9094, to port: 9094, ALLOW" already exists
│ 	status code: 400, request id: c0f3cfd9-7fe6-4e78-8fdf-08cc751d93f2
│
│   with module.kafka.module.msk_cluster.module.broker_security_group.aws_security_group_rule.keyed["_m[0]#tls#cidr"],
│   on .terraform/modules/kafka.msk_cluster.broker_security_group/main.tf line 141, in resource "aws_security_group_rule" "keyed":
│   141: resource "aws_security_group_rule" "keyed" {

The only way to move forward at the moment is to go into the submodule through the .terraform folder and remove the lifecycle blocks for the 1.0.1 version:

lifecycle { create_before_destroy = true }

Please bump the MSK module to use the latest version of the SG module.

Thanks,
Adrian

Cluster gets recreated on enabling open monitoring

Describe the Bug

I'm trying to enable open monitoring on my cluster; the current version of the module is 2.3:

  node_exporter_enabled = true

but it gets recreated because of security group recreation

Expected Behavior

open monitoring should be enabled and security group rules should be created

Steps to Reproduce

Create Kafka cluster and after it is created, enable open monitoring


Environment

No response

Additional Context

No response

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Base branch does not exist - skipping

Edited/Blocked

These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.

Detected dependencies

terraform
main.tf
  • cloudposse/route53-cluster-hostname/aws 0.13.0
  • cloudposse/security-group/aws 2.1.0
versions.tf
  • aws >= 4.0
  • hashicorp/terraform >= 1.0.0

  • Check this box to trigger a request for Renovate to run again on this repository
