
terraform-aws-cloudtrail's Issues

Outputs reference variables instead of resources created by this module

Output:

output "iam_role_name" {
value = var.iam_role_name
description = "The IAM Role name"
}

Variable (referenced in output):

variable "iam_role_name" {
type = string
default = ""
description = "The IAM role name. Required to match with iam_role_arn if use_existing_iam_role is set to true"
}

Usage:

module "aws_cloudtrail" {
  source  = "lacework/cloudtrail/aws"
  version = "~> 0.1"

  use_existing_iam_role   = false
  use_existing_cloudtrail = true
  bucket_name             = aws_s3_bucket.global_cloudtrail_storage.id
  bucket_arn              = aws_s3_bucket.global_cloudtrail_storage.arn
  bucket_sse_key_arn      = aws_kms_key.cloudtrail_key.arn
}

data "aws_iam_policy_document" "lacework_kms_decrypt" {
  statement {
    effect = "Allow"
    actions = [
      "kms:Decrypt"
    ]
    resources = [aws_kms_key.cloudtrail_key.arn]
  }
}

resource "aws_iam_policy" "lacework_kms_policy" {
  name        = "lacework-kms-decryption"
  path        = "/"
  description = "Supplimental policy to allow lacework to decrypt log results"
  policy      = data.aws_iam_policy_document.lacework_kms_decrypt.json
}

resource "aws_iam_policy_attachment" "lacework_kms_attach" {
  name       = "lacework-kms-decryption"
  roles      = [module.aws_cloudtrail.iam_role_name]
  policy_arn = aws_iam_policy.lacework_kms_policy.arn
}

The block of code above returns an empty value for module.aws_cloudtrail.iam_role_name. We need to attach a KMS decrypt policy to the IAM role created by this module, but the module's outputs do not reference the resources it creates; they simply echo the input variables.

Terraform Plan:

Terraform will perform the following actions:

  # aws_iam_policy_attachment.lacework_kms_attach will be created
  + resource "aws_iam_policy_attachment" "lacework_kms_attach" {
      + id         = (known after apply)
      + name       = "lacework-kms-decryption"
      + policy_arn = "arn:aws:iam::123456789012:policy/lacework-kms-decryption"
      + roles      = [
          + "",
        ]
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Terraform Apply:

Initializing plugins and modules...
aws_iam_policy_attachment.lacework_kms_attach: Creating...

Error: No Users, Roles, or Groups specified for IAM Policy Attachment lacework-kms-decryption

  on lacework.tf line 29, in resource "aws_iam_policy_attachment" "lacework_kms_attach":
  29: resource "aws_iam_policy_attachment" "lacework_kms_attach" {

bug: support aws provider version ~> 4.0

Describe the bug
This terraform module requires the aws provider to be installed in the latest 3.X version.
required_providers { aws = "~> 3.64" }
This leads to the error if a customer wants to integrate this module in a codebase using aws provider version 4.X:
[screenshot of the resulting init error]

Provider tree:
[screenshot of the provider dependency tree]

Steps to reproduce
Create a codebase containing the aws provider version 4.X, add this module, and try to run terraform init.

Expected behavior
This module should support the latest major version of the aws terraform provider.

Please complete the following information:

  • Terraform Version: v1.0.11
  • Module Version 1.1.0

feat: enforce lacework-global-85 policies

Feature Request

Describe the Feature Request
The lacework-global-85 policy specifies how CloudTrail should be set up. Please enforce this guidance within this module, specifically the multi-region aspect.

Is your feature request related to a problem? Please describe
n/a

Describe Preferred Solution
Add a property for enabling multi-region trails.
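
A minimal sketch of such a property, assuming the trail resource is named aws_cloudtrail.lacework_cloudtrail as elsewhere in this module (the variable name is hypothetical):

variable "is_multi_region_trail" {
  type        = bool
  default     = true
  description = "Whether the trail captures events from all regions (hypothetical input)"
}

resource "aws_cloudtrail" "lacework_cloudtrail" {
  # ...existing arguments unchanged...
  is_multi_region_trail = var.is_multi_region_trail
}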

Additional Context
n/a

feat: CloudTrail log bucket denies HTTP requests

Feature Request

Describe the Feature Request
Bucket Policy for CloudTrail log bucket should deny HTTP requests by default.

Is your feature request related to a problem? Please describe
The ISO 27001 report that's generated by Lacework complains about Lacework's own access logs bucket allowing HTTP requests, flagging it as a vulnerability of medium severity.

Describe Preferred Solution
Add policy to access logs bucket, denying HTTP requests.
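
A sketch of such a statement, assuming the access logs bucket is aws_s3_bucket.cloudtrail_log_bucket[0] as referenced elsewhere in this module; the policy document wiring below is illustrative only:

data "aws_iam_policy_document" "log_bucket_force_ssl" {
  statement {
    sid     = "ForceSSLOnlyAccess"
    effect  = "Deny"
    actions = ["s3:*"]
    resources = [
      aws_s3_bucket.cloudtrail_log_bucket[0].arn,
      "${aws_s3_bucket.cloudtrail_log_bucket[0].arn}/*",
    ]

    principals {
      type        = "*"
      identifiers = ["*"]
    }

    # Deny any request that is not made over TLS.
    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
  }
}

resource "aws_s3_bucket_policy" "cloudtrail_log_bucket_force_ssl" {
  bucket = aws_s3_bucket.cloudtrail_log_bucket[0].id
  policy = data.aws_iam_policy_document.log_bucket_force_ssl.json
}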

Additional Context
N/A

feat: Using a custom KMS CMK

Feature Request

Describe the Feature Request
In order to deploy New Consolidated CloudTrail and Configuration Assessment in subaccounts, we need to create our own KMS key and use it for encryption/decryption in all accounts.
However, the create_kms_key condition doesn't support this use case.

Is your feature request related to a problem? Please describe
We want to configure the cloudtrail module this way:

  • use IAM resource from "lacework/config/aws" module
  • use our own KMS CMK
module "aws_cloudtrail" {
  source  = "lacework/cloudtrail/aws"
  version = "~> 2.0.0"

  consolidated_trail           = true
  use_existing_iam_role        = true
  iam_role_name                = module.aws_config.iam_role_name
  iam_role_arn                 = module.aws_config.iam_role_arn
  iam_role_external_id         = module.aws_config.external_id
  external_id_length           = 1000
  prefix                       = "lacework-integration"
  bucket_name                  = "lacework-cloudtrail"
  log_bucket_name              = "lacework-cloudtrail-access-logs"
  bucket_sse_key_arn           = aws_kms_key.this.key_id
  sns_topic_name               = "lacework"
  sns_topic_encryption_key_arn = aws_kms_key.this.key_id
  sqs_queue_name               = "lacework"
  sqs_encryption_key_arn       = aws_kms_key.this.key_id
  cloudtrail_name              = "lacework"
  lacework_integration_name    = "TF cloudtrail"
  kms_key_rotation             = true
}
│ Error: Invalid count argument

│   on .terraform/modules/aws_cloudtrail/main.tf line 36, in resource "aws_kms_key" "lacework_kms_key":
│   36:   count                   = local.create_kms_key

│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources
│ that the count depends on.
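
One way to avoid this class of error, sketched here rather than taken from the module, is to drive the count from a plan-time boolean input instead of from attributes that are only known after apply (the variable name use_existing_kms_key is hypothetical):

variable "use_existing_kms_key" {
  type        = bool
  default     = false
  description = "Set to true to supply your own CMK via bucket_sse_key_arn (hypothetical input)"
}

resource "aws_kms_key" "lacework_kms_key" {
  # count is now known at plan time, regardless of whether the key ARN is.
  count = var.use_existing_kms_key ? 0 : 1
  # ...existing arguments unchanged...
}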

feat: Allow to filter SNS messages

Feature Request

Describe the Feature Request
Allow filtering of events sent through SNS.

In my particular case, I want to filter which accounts send data to Lacework by creating an SNS topic with filters and an SQS queue, reusing an existing S3 bucket:

  • We have an organisational CloudTrail that covers every account
  • The parent/root account owns the organisational CloudTrail
  • The organisational CloudTrail delivers logs to an S3 bucket hosted in a different account, our security account

Is your feature request related to a problem? Please describe
At the moment we are using one integration per account, but that increases our AWS cost since duplicated CloudTrail events are not free.
We also can't ingest all the CloudTrail events because our contract is limited to a certain number of resources.

Describe Preferred Solution
The most recent AWS provider allows setting a filter_policy on sns_topic_subscription:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sns_topic_subscription#filter_policy
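
A sketch of what that could look like for the subscription this module creates; the resource names are illustrative, and the filter attribute depends on what the publisher actually sets on the messages:

resource "aws_sns_topic_subscription" "lacework_sqs_subscription" {
  topic_arn = aws_sns_topic.lacework_cloudtrail_sns_topic.arn   # hypothetical topic name
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.lacework_cloudtrail_sqs_queue.arn   # hypothetical queue name

  # Only deliver messages carrying an "account" message attribute in the allow list.
  filter_policy = jsonencode({
    account = ["111111111111", "222222222222"]
  })
}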

Additional Context

bug: Error when using use_existing_access_log_bucket = true

Describe the bug
With module "lacework/cloudtrail/aws" version 1.1.0 I use use_existing_access_log_bucket = true as I have a central logging bucket for S3 access logs.

After upgrading the module to 2.3.1 I get this error message from terraform during plan:

│ Error: Invalid index
│
│   on .terraform/modules/aws_cloudtrail/main.tf line 76, in resource "aws_s3_bucket_logging" "cloudtrail_bucket_logging":
│   76:   target_bucket = aws_s3_bucket.cloudtrail_log_bucket[0].id
│     ├────────────────
│     │ aws_s3_bucket.cloudtrail_log_bucket is empty tuple
│
│ The given key does not identify an element in this collection value: the collection has no elements.

Looking at lines 72 to 78 in .terraform/modules/aws_cloudtrail/main.tf, I see that the case use_existing_access_log_bucket = true is not handled there. The code assumes the access logging bucket was also created within the module and therefore fails.

I locally fixed it by changing the code from

// v4 s3 bucket changes
resource "aws_s3_bucket_logging" "cloudtrail_bucket_logging" {
  count         = var.bucket_logs_enabled && !var.use_existing_cloudtrail ? 1 : 0
  bucket        = aws_s3_bucket.cloudtrail_bucket[0].id
  target_bucket = aws_s3_bucket.cloudtrail_log_bucket[0].id
  target_prefix = var.access_log_prefix
}

to

// v4 s3 bucket changes
resource "aws_s3_bucket_logging" "cloudtrail_bucket_logging" {
  count         = var.bucket_logs_enabled && !var.use_existing_cloudtrail ? 1 : 0
  bucket        = aws_s3_bucket.cloudtrail_bucket[0].id
  target_bucket = var.use_existing_access_log_bucket ? local.log_bucket_name : aws_s3_bucket.cloudtrail_log_bucket[0].id # changed line
  target_prefix = var.access_log_prefix
}

Steps to reproduce
Set use_existing_access_log_bucket = true in the aws_cloudtrail module.

Expected behavior
Usage of already existing S3 buckets by setting use_existing_access_log_bucket = true should work.

Please complete the following information:
Terraform v1.3.5
on linux_amd64

  • provider registry.terraform.io/hashicorp/archive v2.2.0
  • provider registry.terraform.io/hashicorp/aws v4.41.0
  • provider registry.terraform.io/hashicorp/random v3.4.3
  • provider registry.terraform.io/hashicorp/template v2.2.0
  • provider registry.terraform.io/hashicorp/time v0.9.1
  • provider registry.terraform.io/lacework/lacework v1.1.0


feat: Lacework CloudTrail should send logs to CloudWatch

Feature Request

Describe the Feature Request
The CloudTrail created by this Terraform module should support setting up a proper logging integration with CloudWatch.

Is your feature request related to a problem? Please describe
The created CloudTrail is non-compliant with CIS Benchmarks and is flagged as a Medium severity finding in Lacework's generated reports for compliance with AWS ISO 27001:2013 and AWS ISO/IEC 27002:2022.

The non-compliance in question is lacework-global-55.

Describe Preferred Solution
The module creates resources that by default are compliant with CIS Benchmarks.

Add input variables cloudwatch_logs_encryption_enabled, cloudwatch_logs_encryption_key_arn, and cloudwatch_logs_iam_role_arn, and set them in the aws_cloudtrail resource. If no IAM role ARN is provided then one should be created by the module.

Additional Context
I think the changes needed are the following:

variables.tf:

variable "cloudwatch_logs_encryption_enabled" {
  type    = bool
  default = true
}

variable "cloudwatch_logs_encryption_key_arn" {
  type    = string
  default = ""
}

variable "cloudwatch_logs_iam_role_arn" {
  type    = string
  default = ""
}

main.tf:

locals {
  ...
  create_cloudwatch_iam_role = var.cloudwatch_logs_encryption_enabled && length(var.cloudwatch_logs_iam_role_arn) == 0
  
  cloudwatch_key_arn = var.cloudwatch_logs_encryption_enabled ? (length(var.cloudwatch_logs_encryption_key_arn) > 0 ? var.cloudwatch_logs_encryption_key_arn : aws_kms_key.lacework_kms_key[0].arn) : null

  cloudwatch_logstream_resource = "${aws_cloudwatch_log_group.cloudtrail_log_group.arn}:log-stream:${data.aws_caller_identity.current.account_id}_CloudTrail_${data.aws_region.current.name}*" # Reference: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html
}

data "aws_iam_policy_document" "kms_key_policy" {
  ...

  dynamic "statement" {
    for_each = (var.cloudwatch_logs_encryption_enabled && length(var.cloudwatch_logs_encryption_key_arn) == 0) ? [1] : []
    content {
      sid    = "Allow CloudWatch service to encrypt/decrypt"
      effect = "Allow"

      actions = [
        "kms:Encrypt*",
        "kms:Decrypt*",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:Describe*"
      ]

      resources = ["*"]
      
      principals {
        type = "Service"
        identifiers = [
          "logs.${data.aws_region.current.name}.amazonaws.com",
        ]
      }

      condition {
        test     = "ArnEquals"
        variable = "kms:EncryptionContext:aws:logs:arn"
        values = [
          "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:${var.cloudtrail_name}",
        ]
      }
    }
  }
}

resource "aws_cloudwatch_log_group" "cloudtrail_log_group" {
  name              = var.cloudtrail_name
  kms_key_id        = local.cloudwatch_key_arn
  retention_in_days = 90
}

data "aws_iam_policy_document" "cloudtrail_assume_role" {
  count = local.create_cloudwatch_iam_role ? 1 : 0
  
  statement {
    effect = "Allow"

    actions = [
      "sts:AssumeRole",
    ]

    principals {
      type = "Service"
      identifiers = [
        "cloudtrail.amazonaws.com"
      ]
    }
  }
}


data "aws_iam_policy_document" "cloudtrail_logging" {
  count = local.create_cloudwatch_iam_role ? 1 : 0

  statement {
    sid    = "AWSCloudTrailCreateLogStream"
    effect = "Allow"

    actions = [
      "logs:CreateLogStream",
    ]

    resources = [
      local.cloudwatch_logstream_resource,
    ]
  }

  statement {
    sid    = "AWSCloudTrailPutLogEvents"
    effect = "Allow"

    actions = [
      "logs:PutLogEvents",
    ]

    resources = [
      local.cloudwatch_logstream_resource,
    ]
  }
}

resource "aws_iam_policy" "cloudtrail_logging" {
  count = local.create_cloudwatch_iam_role ? 1 : 0
  
  name        = var.cloudtrail_name
  policy      = data.aws_iam_policy_document.cloudtrail_logging[count.index].json
  description = "Allows CloudTrail to create log streams and to put logs in CloudWatch"
}

resource "aws_iam_role" "cloudtrail_logging" {
  count = local.create_cloudwatch_iam_role ? 1 : 0

  name               = var.cloudtrail_name
  assume_role_policy = data.aws_iam_policy_document.cloudtrail_assume_role[count.index].json
}

resource "aws_iam_role_policy_attachment" "cloudtrail_logging" {
  count = local.create_cloudwatch_iam_role ? 1 : 0

  role       = aws_iam_role.cloudtrail_logging[count.index].name
  policy_arn = aws_iam_policy.cloudtrail_logging[count.index].arn
}

resource "aws_cloudtrail" "lacework_cloudtrail" {
  ...
  enable_logging             = true
  cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.cloudtrail_log_group.arn}:*"
  cloud_watch_logs_role_arn  = local.create_cloudwatch_iam_role ? aws_iam_role.cloudtrail_logging[0].arn : var.cloudwatch_logs_iam_role_arn
  enable_log_file_validation = var.enable_log_file_validation
  ...
}

Please note that this code has not been properly tested. I've simply adjusted Terraform configurations that I've found elsewhere.

Thanks!

feat: add a required_providers version constraint

Feature Request

Describe the Feature Request
A user should be able to know the minimum version of the AWS provider that they need to be able to run any given version of this module.

Describe Preferred Solution
Use the required_providers block to set a minimum AWS provider version constraint.

For example, I notice that the multi_region attribute in the KMS key being created here requires version 3.64.0 or higher. It would be a better user experience if Terraform could call out this version constraint specifically, rather than letting a plan/apply fail when the constraint isn't met, as seen below.
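
For example, a constraint along these lines in the module's versions.tf would surface the requirement at init time instead (a sketch; the exact lower bound should track whichever provider features the module actually uses):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.64.0"
    }
  }
}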

Additional Context

$ terraform version
Terraform v0.14.11
+ provider registry.terraform.io/hashicorp/aws v3.57.0

$ terraform plan
Releasing state lock. This may take a few moments...

Error: Unsupported argument

  on .terraform/modules/some_module.lacework_aws_cloudtrail/main.tf line 40, in resource "aws_kms_key" "lacework_kms_key":
  40:   multi_region            = var.kms_key_multi_region

An argument named "multi_region" is not expected here.

feat: Support s3 notifiers for syndicating cloudtrail to sns

Feature Request

Describe the Feature Request
Some folks have already created large buckets that capture CloudTrail activity from many accounts. Allowing such a bucket to send notifications to SQS would be more efficient for those end users.

Describe Preferred Solution
A Lambda function can easily rewrite S3 notifications into CloudTrail-style events.
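
As a rough sketch of the bucket-notification side of this, an existing bucket could publish object-created notifications to the SNS topic the integration consumes (resource names here are illustrative, not the module's):

resource "aws_s3_bucket_notification" "cloudtrail_bucket_notification" {
  bucket = var.bucket_name   # the pre-existing CloudTrail bucket

  topic {
    topic_arn     = aws_sns_topic.lacework_cloudtrail_sns_topic.arn   # hypothetical topic name
    events        = ["s3:ObjectCreated:*"]
    filter_prefix = "AWSLogs/"
  }
}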

bug: [Action Required] S3 changes to bucket authorization are coming in October 2023

Describe the bug
AWS just recently sent out this mass email to anyone with an affected bucket policy, which would include a lot of users of this module.

AWS's email is quoted below.

Hello,

Amazon S3 is updating its bucket policy evaluation to make it consistent with how similar policies are evaluated across AWS. If you have a statement in one of your bucket policies whose principal refers to the account that owns the bucket, this statement will now apply to all IAM Principals in that account instead of applying to only the account's root identity. This update will affect buckets in at least one of your AWS accounts. We will begin deploying this update on October 20, 2023.

We are requesting that you review your bucket policies and access patterns, and update your bucket policies before October 20, 2023, to ensure that they correctly reflect your expected access patterns in S3.

Examples for common permission scenarios that you can use today to update your bucket policies are included at the end of this message.

The following S3 buckets in the US-EAST-1 Region will be affected by the update to bucket authorization:

aws_s3_bucket.cloudtrail_log_bucket[0]

Summary of update to bucket authorization:

The authorization update applies to S3 bucket permission statements that follow this pattern:

{
"Effect": "Deny",
"Principal": {
"AWS": $BUCKET_OWNER_ACCOUNT_ID
},
"Action":
"Resource":
"Condition":
}

Currently, for the previously listed buckets, S3 applies this statement only to requests made by the bucket-owning account's root identity, and not to IAM Principals within the bucket-owning account. The authorization behavior update will make S3 consistent with policy evaluation in other AWS services. With this update, the previous statement applies to all IAM Principals in the bucket-owning account.

Recommended policy updates:

The following recommendations will work both with the current and updated authorization behavior in S3, and therefore you can implement them today. We strongly recommend testing them on a test bucket before putting them on a bucket that serves a production workload.

Usually, the "Principal": {"AWS": $BUCKET_OWNER_ACCOUNT_ID} is not necessary at all. Since the intention of these bucket policy statements is to assert some behavior across all callers, it can usually be replaced by "Principal": "*".

// Recommended alternative 1: This statement will apply to all callers

{
"Effect": "Deny",
"Principal": "*",
"Action":
"Resource":
"Condition":
}

In the less common case where your intention is for the statement to apply specifically to IAM Principals in the bucket-owning account, the following permission statement will work:

// Recommended alternative 2 (less common): This statement will apply to all callers from the bucket-owning account

{
"Effect": "Deny",
"Principal": "*",
"Action":
"Resource":
"Condition": {
"StringEquals": {
"aws:PrincipalAccount": $BUCKET_OWNER_ACCOUNT_ID
},
...
}
}

Important: Avoiding S3 bucket lockout

A Deny statement in an S3 bucket policy that is unintentionally overly broad can result in Access Denied errors for all users in the account, including actions such as s3:PutBucketPolicy that might be needed to remediate the accidentally-broad statement. Because the s3:PutBucketPolicy action can be taken only by IAM Principals within the same account as the bucket, and never by external accounts, it is not necessary to deny access to this action outside the account. However, it is possible to deny this action within the same account, and when that is applied overly broadly, bucket lockout can occur.

You can remediate bucket-lockout situations by signing in with your account's root identity and updating the bucket policy.

To avoid this situation in the first place, we recommend as a best practice that you avoid using Deny statements that cover all S3 actions (s3:* ) on the bucket resource (arn:aws:s3:::example-bucket). While these statements will work as desired when the Conditions elements are correctly configured, a mistake in specifying those Conditions will result in a full lockout. In particular, a statement such as the following one will prevent all further changes to the S3 bucket's policy except by the account's root identity.

// DO NOT USE - WILL RESULT IN BUCKET LOCKOUT

{
"Effect": "Deny",
"Principal": "",
"Action": "s3:
",
"Resource": "arn:aws:s3:::example-bucket"
}

Furthermore, S3 recommends against the use of the NotPrincipal element in IAM statements. Most common permission scenarios can be implemented more straightforwardly with conditions, as detailed in the previous examples. For more information on the 'NotPrincipal' element, please refer to the IAM documentation page [1].

If you have questions or concerns, please contact AWS Support [4].

The following are common scenarios and examples:

  1. Scenario: Block access to an S3 bucket's data from outside a list of desired AWS accounts:

A common scenario is to add a bucket policy statement that asserts that account(s) outside a given list cannot access the data in an S3 bucket.

We recommend a bucket policy statement like the following one:

{
"Sid": "BlockAccessToDataOutsideAllowedAccountList",
"Effect": "Deny",
"Principal": "",
"Action": "s3:
",
"Resource": "arn:aws:s3:::example-bucket/*",
"Condition": {
"StringNotEquals": {
"aws:PrincipalAccount": [ "111111111111", "222222222222", ... ]
}
}
},

This statement grants no access on its own; you would still need Allow statements (not shown) to grant access, for example to account 222222222222. It will, however, block data access from accounts not on this list, regardless of what other policy statements or ACLs are present.

It is not correct to use 'aws:SourceAccount' or 'aws:SourceArn' to achieve this goal. These IAM values are used for a different use case, such as one in which an AWS service is accessing your bucket. For limiting access to specific IAM Principals, we recommend using the 'aws:PrincipalArn' condition. Please refer to the documentation on all IAM global conditions, including the above [3].

  2. Scenario: Block access to an S3 bucket's data from outside known network locations (IP or VPC):

A common scenario is to add a bucket policy statement that prevents interaction (read/write) of the bucket's data from outside a list of known network locations. These locations might be public IPv4 address ranges, if using the general public endpoint of the S3 service, or particular Virtual Private Clouds (VPCs), when reaching S3 by way of a VPC Endpoint.

A bucket policy statement like the following one will prevent any access to objects in the bucket, except from the allowed network paths:

{
"Sid": "BlockAccessToDataOutsideAllowedNetworkLocations",
"Effect": "Deny",
"Principal": "",
"Action": "s3:
",
"Resource": "arn:aws:s3:::example-bucket/*",
"Condition": {
"NotIPAddressIfExists": {
"aws:SourceIp": [ "1.2.3.0/24", ... ]
},
"StringNotEqualsIfExists": {
"aws:SourceVpc": [ "vpc-111", "vpc-222", ... ]
}
}
},

If you are also trying to block List operations from outside the specified network paths, you can add this statement.

{
"Sid": "BlockListOperationsOutsideAllowedNetworkLocations",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::example-bucket",
"Condition": {
"NotIPAddressIfExists": {
"aws:SourceIp": [ "1.2.3.0/24", ... ]
},
"StringNotEqualsIfExists": {
"aws:SourceVpc": [ "vpc-111", "vpc-222", ... ]
}
}
},

  • Note the specific s3:ListBucket action; had the action instead been a generic s3:*, this policy would also have denied all actions on the bucket from outside those network paths, which can be undesired. For example, the ability to make further changes to bucket policy. See the previous discussion on avoiding bucket lockout.
  3. Scenario: Blocking unencrypted uploads:

S3 offers a "default encryption" option for buckets that allow a customer to designate a desired Server-Side Encryption (SSE) scheme, either SSE-S3 or SSE-KMS. Because individual PutObject requests in S3 can specify schemes different from the bucket default, a common scenario for customers who want to be certain that no unencrypted data can be uploaded is to write an S3 bucket policy that blocks s3:PutObject from succeeding, unless the desired SSE scheme will be used for the object.

The following example bucket policy statement asserts that, in order to succeed, any PutObject request to this bucket must be SSE-KMS encrypted with a KMS key from account 111122223333.

{
"Sid": "BlockNonKMSUploads",
"Effect": "Deny",
"Principal": "",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::example-bucket/
",
"Condition": {
"ArnNotLikeIfExists": {
"s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:111122223333:key/*"
}
}
}

Other common patterns: The condition s3:x-amz-server-side-encryption can be used for a more generic permission statement to require SSE-S3 (AES256) or SSE-KMS (aws:kms) respectively. A few other example policies for encryption are available in our Knowledge Center article [2].

Although the aws:PrincipalAccount IAM condition can be used to limit the effect of these policy statements to particular AWS accounts, that is usually unnecessary for the common case in which you are simply trying to assert that all data that gets written is encrypted in the desired way.

[1] https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notprincipal.html
[2] https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-store-kms-encrypted-objects/
[3] https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html
[4] https://aws.amazon.com/support

Sincerely,
Amazon Web Services

Steps to reproduce

Umm, just look at the bucket policy and see if it matches the pattern described in AWS's email, I guess?

Expected behavior

This module should follow AWS's suggestion, most likely by using what they refer to as Recommended alternative 1:

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

Also in the section where they talk about bucket-lockout, they mention:

we recommend as a best practice that you avoid using Deny statements that cover all S3 actions (s3:* ) on the bucket resource (arn:aws:s3:::example-bucket).

This module may also want to follow that suggestion by removing "arn:aws:s3:::${local.log_bucket_name}" from the resources in the bucket policy, and having it only apply to "arn:aws:s3:::${local.log_bucket_name}/*".

Please complete the following information:

  • Terraform Version: Terraform v1.5.6
  • Module Version 2.7.6

Dependency conflicts with terraform-aws-cloudtrail

@afiune Sorry for the late inspection on this, I had typed it up in a review of lacework/terraform-aws-ecr#1 and didn't get it submitted in time.

This module currently appears to have dependency conflicts if used in conjunction with our CloudTrail module - we'll need to also allow the use of the 0.3.x Lacework provider for the CloudTrail module, or users will get the following on a terraform init:

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
lacework/lacework: no available releases match the given constraints ~> 0.2.0,
~> 0.2, ~> 0.3

This appears to be caused by the Lacework provider version being limited to version = "~> 0.2.0", which only allows the right-most version component to increment, effectively limiting it to 0.2.x (https://www.terraform.io/docs/language/expressions/version-constraints.html#gt--1).
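
A sketch of a broader constraint that would satisfy both modules (assuming nothing here actually relies on 0.2.x-only behavior):

terraform {
  required_providers {
    lacework = {
      source  = "lacework/lacework"
      version = ">= 0.2, < 0.4"
    }
  }
}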

feat: bucket_logs_enabled logs never expire

Feature Request

Describe the Feature Request

When bucket_logs_enabled is enabled, a logging bucket will be created and access logs from the cloudtrail bucket will be delivered to the logging bucket.

But those access logs in the logging bucket will persist forever without ever expiring. It would be nice to give the user a configurable way to expire those logs, such as a logging_bucket_lifecycle_expiration_in_days variable. If this variable is set by the user, then a lifecycle configuration could be created to expire old logs from the logging bucket after N days.
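
A sketch of what that could look like, reusing names that already appear in this module where possible; the new variable is the one proposed above:

variable "logging_bucket_lifecycle_expiration_in_days" {
  type        = number
  default     = null
  description = "Expire access logs in the logging bucket after this many days (null keeps them forever)"
}

resource "aws_s3_bucket_lifecycle_configuration" "cloudtrail_log_bucket_lifecycle" {
  count  = var.logging_bucket_lifecycle_expiration_in_days != null ? 1 : 0
  bucket = aws_s3_bucket.cloudtrail_log_bucket[0].id

  rule {
    id     = "expire-access-logs"
    status = "Enabled"

    # Only applies to the access-log objects written under the configured prefix.
    filter {
      prefix = var.access_log_prefix
    }

    expiration {
      days = var.logging_bucket_lifecycle_expiration_in_days
    }
  }
}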

bug: Integrate Existing Consolidated CloudTrail - terraform rerun deletes manually attached SNS topic

Describe the bug
We have reconfigured our CloudTrail integration to use Terraform, since we added two new AWS accounts and our old integration was done manually.
I followed the instructions here:
There is this section:

If you do not have an existing SNS topic configured on the existing CloudTrail, the Terraform module automatically creates one, but you must manually attach the SNS topic to the existing CloudTrail.

After doing the manual attachment, if I run terraform apply again it removes the manual attachment:

Terraform will perform the following actions:

  # module.altoira_cloudtrail[0].aws_sns_topic_policy.default[0] will be updated in-place
  ~ resource "aws_sns_topic_policy" "default" {
        id     = "arn:aws:sns:us-east-1:<redacted>:lacework-ct-sns-de98b010"
      ~ policy = jsonencode(
          ~ {
              ~ Statement = [
                    {
                        Action    = "SNS:Publish"
                        Effect    = "Allow"
                        Principal = {
                            Service = "cloudtrail.amazonaws.com"
                        }
                        Resource  = "arn:aws:sns:us-east-1:<redacted>:lacework-ct-sns-de98b010"
                        Sid       = "AWSCloudTrailSNSPolicy20131101"
                    },
                  - {
                      - Action    = "SNS:Publish"
                      - Condition = {
                          - StringEquals = {
                              - "AWS:SourceArn" = "arn:aws:cloudtrail:us-east-1:<redacted>:trail/management-events"
                            }
                        }
                      - Effect    = "Allow"
                      - Principal = {
                          - Service = "cloudtrail.amazonaws.com"
                        }
                      - Resource  = "arn:aws:sns:us-east-1:<redacted>:lacework-ct-sns-de98b010"
                      - Sid       = "AWSCloudTrailSNSPolicy20150319"
                    },
                ]
                # (1 unchanged element hidden)
            }
        )
        # (2 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

This doesn't seem like a workable solution, since Terraform should be run against an environment on an ongoing basis. Can you provide a solution?
Thanks
Steps to reproduce
Outlined in description

Expected behavior
After doing the manual attachment, it should be possible to run the Terraform configuration again without it deleting the attachment configuration.

Please complete the following information:

» tf version
Terraform v1.3.8
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.54.0
+ provider registry.terraform.io/hashicorp/random v3.4.3
+ provider registry.terraform.io/hashicorp/time v0.9.1
+ provider registry.terraform.io/lacework/lacework v1.4.0

TF modules pinning notes

This is effectively a "latest" pin:

module "lacework_ct_iam_role" {
source = "lacework/iam-role/aws"
version = "~> 0.3"

If something breaks in, say, version 0.9999, every module version released since the version = "~> 0.3" constraint was introduced will break without any change to its own code.

This also somewhat violates the spirit of https://reproducible-builds.org/ (only somewhat, because that site is mainly not about infrastructure).

Regarding TF best practices:

For modules maintained within your organization, specifying version ranges may be appropriate if semantic versioning is used consistently or if there is a well-defined release process that avoids unwanted updates.

That is more or less okay, because you manage both modules, and provided you have cross-module change-testing CI somewhere.

But if not, it is better to avoid such floating constraints for modules. Pin exact versions and update them when you need to, or automate the updates with Renovate or Dependabot. For example, here is a quick-start solution: https://github.com/SpotOnInc/renovate-config/.
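
A sketch of an exact pin (the specific version number is illustrative):

module "lacework_ct_iam_role" {
  source  = "lacework/iam-role/aws"
  version = "0.3.0"   # exact pin; bump deliberately, or let Renovate/Dependabot propose updates
}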

feat: Use existing S3 bucket for access logging

Feature Request

I am already making a change for this. The PR will be up shortly.

Description

We use one central bucket for access logging on S3 which we would like to incorporate into Lacework. Unfortunately, doing this currently would mean creating our own CloudTrail and bucket, which could then tie into the existing access log bucket. As an alternative, it seems pretty simple to add an option to use an existing logging bucket and control the logging prefix.

Solution

Add a use_existing_access_log_bucket flag, which prevents the default access logs bucket from being created.
Add an access_log_prefix variable to control the logging prefix in said log bucket.

The default logging bucket should only be created if use_existing_cloudtrail and use_existing_access_log_bucket are both false and bucket_enable_logs is set to true.

To select the existing log bucket, the already existing log_bucket_name variable will be used. access_log_prefix will default to "logs/" but can be changed; an example call is sketched below.
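
A sketch of how the module call could look with these inputs (variable names follow this proposal; the bucket name is a placeholder):

module "aws_cloudtrail" {
  source  = "lacework/cloudtrail/aws"

  use_existing_access_log_bucket = true
  log_bucket_name                = "central-access-logs"   # existing central logging bucket
  access_log_prefix              = "logs/"
}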
