
edgelesssys / constellation

Constellation is the first Confidential Kubernetes. Constellation shields entire Kubernetes clusters from the (cloud) infrastructure using confidential computing.

License: GNU Affero General Public License v3.0

Dockerfile 0.06% Go 83.11% Shell 2.20% Smarty 0.91% Makefile 0.22% HCL 4.29% XSLT 0.12% Mustache 0.38% Starlark 8.44% Nix 0.28%
cloud-security confidential-computing data-encryption kubernetes kubernetes-security


constellation's Issues

`constellation config generate` creates broken configuration file

Issue description

When run on host systems that are not linux/amd64, constellation config generate generates a broken microservice version:

Problems validating config file:
    image specifies an invalid version: configured version (v2.7.0) does not adhere to SemVer syntax
    microserviceVersion specifies an invalid version: configured version (v) does not adhere to SemVer syntax
Fix the invalid entries or generate a new configuration using `constellation config generate`
Error: invalid configuration

constellation-conf.yaml:

version: v2 # Schema version of this configuration file.
image: v2.7.0 # Machine image version used to create Constellation nodes.
[...]
microserviceVersion: v # Microservice version to be installed into the cluster. Defaults to the version of the CLI.
[...]
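The bare v with no number suggests the version string is assembled from a build-time variable that remains empty in these builds. A minimal Go sketch of that mechanism (the variable name and ldflags stamping are assumptions for illustration, not taken from the Constellation source):

package main

import "fmt"

// version is meant to be stamped at build time, e.g. via
// go build -ldflags "-X main.version=2.7.0". If the stamping step
// doesn't run for a given target platform, it stays empty.
var version = ""

func main() {
	// "v" + "" yields the invalid "v" seen in microserviceVersion.
	fmt.Println("v" + version)
}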

To reproduce

Steps to reproduce the behavior:

  1. Download the Constellation CLI in version 2.7.0 for a platform that is not linux/amd64
    wget https://github.com/edgelesssys/constellation/releases/download/v2.7.0/constellation-darwin-arm64
    chmod +x constellation-darwin-arm64
    
  2. Generate a config file for any cloud provider: constellation config generate gcp
  3. Look a the microserviceVersion field in the generated config file.
  4. You can spot the issue by running constellation version, where the Version field only shows the kind of build (Enterprise) but no actual version information:
    Version:	 (Enterprise build; see documentation for license agreement)
    GitCommit:	d56b0ef75f5a5019f608f4907d1cbcea4b749799
    GitTreeState:	clean
    BuildDate:	1970-01-01T00:00:00Z
    GoVersion:	go1.20.2 X:nocoverageredesign
    Compiler:	bazel/gc
    Platform:	linux/arm64
    

Environment

  • constellation version: v2.7.0
  • Host platform that is not linux/amd64

Expected behavior

The config file should be generated correctly and it should be possible to create a Constellation with it.

Mitigation

Use the Constellation CLI on linux/amd64.
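Alternatively, as an unofficial sketch of a manual workaround: since the field defaults to the CLI's own version, setting it in constellation-conf.yaml to the matching release should satisfy the SemVer validation:

microserviceVersion: v2.7.0 # Matches the CLI release; normally filled in automatically.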

`constellation init` prints "Warning: Encountered untrusted PCR value at index 5"

Issue description

You may see this warning when initializing a cluster. This neither affects security nor functionality. Please ignore the warning.

Background: the CLI v2.6.0 includes a wrong value for the optional measurement 5. This will be corrected with the next release.

Environment

  • constellation version: v2.6.0

Expected behavior

No warning should be printed on init.

Additional info / screenshot
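For reference, this is how measurement entries look in constellation-conf.yaml (format as in the QEMU config quoted later on this page; the digest is a placeholder). Optional measurements such as PCR 5 carry warnOnly: true, which is why the mismatch only produces a warning instead of failing init:

measurements:
    5:
        expected: <PCR 5 digest> # placeholder; the v2.6.0 CLI ships a wrong value here
        warnOnly: true # optional measurement: a mismatch warns instead of failing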

Enable use of Discussions in this repository

Use case

It would be great to have the possibility to use Discussions in this repository.
At this time, there is only the topic @msanft converted from an issue to a discussion
(#2785 => #2786),
but it is not possible to start a new discussion.

Describe your solution

Enable users to start discussions.
In this repository, at https://github.com/edgelesssys/constellation/discussions,
the button is missing (disabled):
[screenshot: 2024-01-17_12h13_04]

When enabled, it should look like this:
[screenshot: 2024-01-17_12h12_38]

Would you be willing to implement this feature?

  • Yes, I could contribute this feature.

`constellation init` fails with cert-manager installation error

Issue description

The cert-manager installation via helm times out during constellation init.
This happens in about 1/3 of initializations.

To mitigate this issue, terminate, re-create and re-initialize the cluster:

constellation terminate
# Either re-use or remove the "constellation-mastersecret.json"
constellation create
constellation init

To reproduce

Steps to reproduce the behavior:

$ constellation create --control-plane-nodes 3 --worker-nodes 2
The following Constellation cluster will be created:
3 control-plane nodes of type n2d-standard-4 will be created.
2 worker nodes of type n2d-standard-4 will be created.
Do you want to create this cluster? [y/n]: y
Your Constellation cluster was created successfully.
$ constellation init
Using community license.
For details, see https://docs.edgeless.systems/constellation/overview/license
Your Constellation master secret was successfully written to ./constellation-mastersecret.json
Warning: Encountered untrusted PCR value at index 5
Cluster initialization failed. This error is not recoverable.
Terminate your cluster and try again.
Error: init call: rpc error: code = Internal desc = initializing cluster: installing cert-manager: helm install: context deadline exceeded

Environment

  • constellation version: v2.6.0

Expected behavior

The installation of the cert-manager should not time out and the initialization should succeed.

Additional info / screenshot

Microservice deployments via helm

Feature

Deploy the cluster-managing services and Constellation's own microservices via helm charts.
This facilitates orchestration tasks like updating these services (see the sketch after this list).

  • Autoscaler
  • Operators
  • Cloud Controller Manager
  • Cloud Node Manager
  • GCP CSI
  • Azure CSI
  • Key Management Service
  • Operator Lifecycle Manager
  • Join Service
  • Verification Service
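Once packaged as charts, day-2 operations reduce to standard helm commands. A hedged sketch (release name, namespace, and flags are illustrative; the chart path matches the one listed in the dependency section further down this page):

helm upgrade --install constellation-services \
    internal/constellation/helm/charts/edgeless/constellation-services \
    --namespace kube-system --wait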

Can I run a Constellation K8s cluster outside of GCP, AWS, or Azure?

Hello,

I just came across this project and I'm quite interested in testing it out, but reading the docs I often see Azure or GCP as a reference, and it seems I have to set things up with their IAM services before getting started. As I don't do any business with these companies for various reasons, I would like to test Constellation outside these public clouds, e.g. at Hetzner, at home, or basically anywhere else.

Kindly asking for feedback.

Error during `upgrade apply` with kubernetes-only upgrade

Issue description

When running upgrade apply the following error message is shown: Error: upgrading NodeVersion: expected NodeVersion to contain /CommunityGalleries/ConstellationCVM-728bd310-e898-4450-a1ed-21cf2fb0d735/Images/main-nightly/Versions/2023.0406.182050, got /communityGalleries/ConstellationCVM-b3782fa0-0df7-4f2f-963e-fc7fc42663df/images/constellation/versions/2.6.0

To reproduce

Steps to reproduce the behavior:
Any config that triggers a Kubernetes upgrade but no image upgrade will print the above error. An example could be (see the config excerpt after these steps):

  1. Setup a Constellation cluster with Constellation CLI v2.6.0 and Kubernetes version v1.25.7
  2. Change configured Kubernetes version to v1.26.2
  3. Change configured image version to /CommunityGalleries/ConstellationCVM-728bd310-e898-4450-a1ed-21cf2fb0d735/Images/main-nightly/Versions/2023.0406.182050
  4. Run upgrade apply
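The corresponding constellation-conf.yaml excerpt for steps 2 and 3 (sketch; all other fields omitted):

kubernetesVersion: v1.26.2 # Kubernetes version to be installed into the cluster.
image: /CommunityGalleries/ConstellationCVM-728bd310-e898-4450-a1ed-21cf2fb0d735/Images/main-nightly/Versions/2023.0406.182050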

Expected behavior

This error is not shown.

Mitigation

The error message itself is harmless. It does not necessarily indicate that the upgrade failed. To check whether the upgrade succeeded, run constellation status and compare the reported values with your local config.
The error is fixed in #1630.

Certain errors during create/init are output only on STDOUT, not STDERR

Issue description

When running certain commands that result in an error, such as using wrong credentials in constellation-conf.yaml or CSP credentials with insufficient privileges, the error is printed only to STDOUT.

To reproduce

Steps to reproduce the behavior:

  1. Run constellation config generate
  2. Add your credentials to the constellation-conf.yaml file (maybe falsify your credentials to trigger an error when executing other commands after)
  3. Run constellation create --control-plane-nodes 1 --worker-nodes 1 --name test -y to try to create a cluster
  4. Notice how the error appears on STDOUT but not on STDERR. (You may need to capture both STDOUT and STDERR separately to properly replicate the bug; see the redirection sketch after this block.)
  5. This is a sample of the error that could be shown:
An error occurred: PUT https://management.azure.com/subscriptions/6b6a1f27-55c1-4b1d-969b-60a3c9eebe64/resourceGroups/azure-customertwo-az-sandbox-84v4y/providers/Microsoft.Insights/components/constellation-insights-nm4rV
--------------------------------------------------------------------------------
RESPONSE 403: 403 Forbidden
ERROR CODE: AuthorizationFailed
--------------------------------------------------------------------------------
{
  "error": {
    "code": "AuthorizationFailed",
    "message": "The client '<ID>' with object id '<ID>' does not have authorization to perform action 'Microsoft.Insights/components/write' over scope '/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Insights/components/constellation-insights-nm4rV' or the scope is invalid. If access was recently granted, please refresh your credentials."
  }
}
--------------------------------------------------------------------------------

Attempting to roll back.
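To see which stream the error lands on, redirect them separately (sketch):

constellation create --control-plane-nodes 1 --worker-nodes 1 --name test -y > stdout.log 2> stderr.log
cat stderr.log # empty, despite the failure
cat stdout.log # contains the 403 error shown above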

Environment

  • constellation version:
  • constellation-conf.yaml
    • (make sure to remove sensitive information, e.g., yq e 'del(.provider.*.project)' constellation-conf.yaml)
  • VM type used to run Constellation.

Expected behavior

As a developer, I expect a failure to be output on both STDOUT and STDERR so I can integrate and handle failures more easily.

`constellation mini up` times out

Issue description

constellation mini up times out during creation.

To reproduce

Steps to reproduce the behavior:

  1. Download the CLI
  2. Execute constellation mini up --debug
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:121       Checked arch and os
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:126       Checked that /dev/kvm exists
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:134       Checked CPU cores - there are 8
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:152       Scanned for available memory
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:160       Checked available memory, you have 31GB available
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:170       Checked for free space available, you have 25GB available
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:177       Preparing configuration
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:202       Configuration path is ""
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:224       Prepared configuration
2023-04-06T11:36:01Z    DEBUG   cmd/miniup.go:231       Creating mini cluster
An error occurred: exit status 1

Error: couldn't retrieve IP address of domain id: 29962dc6-14e7-4524-89d9-320f90ce2c3b. Please check following: 
1) is the domain running proplerly? 
2) has the network interface an IP address? 
3) Networking issues on your libvirt setup? 
 4) is DHCP enabled on this Domain's network? 
5) if you use bridge network, the domain should have the pkg qemu-agent installed 
IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup 
 timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)

  with module.worker.libvirt_domain.instance_group[0],
  on modules/instance_group/main.tf line 15, in resource "libvirt_domain" "instance_group":
  15: resource "libvirt_domain" "instance_group" {


Attempting to roll back.
Rollback succeeded.
Error: creating cluster: exit status 1

Error: couldn't retrieve IP address of domain id: 29962dc6-14e7-4524-89d9-320f90ce2c3b. Please check following: 
1) is the domain running proplerly? 
2) has the network interface an IP address? 
3) Networking issues on your libvirt setup? 
 4) is DHCP enabled on this Domain's network? 
5) if you use bridge network, the domain should have the pkg qemu-agent installed 
IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup 
 timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)

  with module.worker.libvirt_domain.instance_group[0],
  on modules/instance_group/main.tf line 15, in resource "libvirt_domain" "instance_group":
  15: resource "libvirt_domain" "instance_group" {

Environment

  • constellation version: 2.7.0-pre.0.20230405160928-1c03b066a61b
  • constellation-conf.yaml
version: v2 # Schema version of this configuration file.
image: v2.6.0 # Machine image version used to create Constellation nodes.
name: mini # Name of the cluster.
stateDiskSizeGB: 8 # Size (in GB) of a node's disk to store the non-volatile state.
kubernetesVersion: v1.25.8 # Kubernetes version to be installed into the cluster.
microserviceVersion: v2.7.0-pre.0.20230405160928-1c03b066a61b # Microservice version to be installed into the cluster. Defaults to the version of the CLI.
debugCluster: false # DON'T USE IN PRODUCTION: enable debug mode and use debug images. For usage, see: https://github.com/edgelesssys/constellation/blob/main/debugd/README.md
# Supported cloud providers and their specific configurations.
provider:
    # Configuration for QEMU as provider.
    qemu:
        imageFormat: raw # Format of the image to use for the VMs. Should be either qcow2 or raw.
        vcpus: 2 # vCPU count for the VMs.
        memory: 2048 # Amount of memory per instance (MiB).
        metadataAPIServer: ghcr.io/edgelesssys/constellation/qemu-metadata-api:v2.7.0-pre.0.20230330151913-6a2c9792e0ce@sha256:8283f9606366beaf05142aeef09a905085bc7cde071f43b43290a7f087994264 # Container image to use for the QEMU metadata server.
        libvirtSocket: "" # Libvirt connection URI. Leave empty to start a libvirt instance in Docker.
        libvirtContainerImage: ghcr.io/edgelesssys/constellation/libvirt:v2.7.0-pre.0.20230330151913-6a2c9792e0ce@sha256:56d218cc501d471d25f6a6da940db48a008a040bc26dea1fe8a61edf9ca7ce73 # Container image to use for launching a containerized libvirt daemon. Only relevant if `libvirtSocket = ""`.
        nvram: production # NVRAM template to be used for secure boot. Can be sentinel value "production", "testing" or a path to a custom NVRAM template
        firmware: "" # Path to the OVMF firmware. Leave empty for auto selection.
        # Measurement used to enable measured boot.
        measurements:
            4:
                expected: dfd62a251a234d2eccdbb4659e4990c330f3e4a4456f7274c769ad43c174c7c4
                warnOnly: false
            8:
                expected: "0000000000000000000000000000000000000000000000000000000000000000"
                warnOnly: false
            9:
                expected: a13d79910d9d98480c0d5c3f197d383d5d26895e350225e219bf286a7402e780
                warnOnly: false
            11:
                expected: "0000000000000000000000000000000000000000000000000000000000000000"
                warnOnly: false
            12:
                expected: c2b1e24a8ff734ea9546622da8ba438c6799be6b6cb4e6593d9411e9ea084c0c
                warnOnly: false
            13:
                expected: "0000000000000000000000000000000000000000000000000000000000000000"
                warnOnly: false
            15:
                expected: "0000000000000000000000000000000000000000000000000000000000000000"
                warnOnly: false
  • VM type used to run Constellation.
    Miniconstellation was started on an Azure VM of type Standard_D8as_v5.

Expected behavior

The MiniConstellation cluster should be created and initialized correctly.

Additional info

We currently think it has to do with using an AMD VM as the host. Note, however, that we could only reproduce this in a cloud environment; local AMD machines worked fine.

Mitigation

Switch the Azure VM's type to Standard_D8s_v5.
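One way to switch an existing VM, assuming the Azure CLI (resource group and VM name are placeholders):

az vm deallocate --resource-group <rg> --name <vm>
az vm resize --resource-group <rg> --name <vm> --size Standard_D8s_v5
az vm start --resource-group <rg> --name <vm>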

Support new AWS image region: eu-west-3

Use case

According to the docs, Constellation OS images are not available in region eu-west-3. Could you please make them available?

Describe your solution

Make Constellation images available in region eu-west-3

Additional context

Need to deploy in this region
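One way to check per-region availability, assuming the AWS CLI (the name filter is a guess at the image naming scheme, not taken from the docs):

aws ec2 describe-images --region eu-west-3 --filters "Name=name,Values=constellation*" --query 'Images[].Name'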

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Package lookup failures

Rate-Limited

These updates are currently rate-limited. Click on a checkbox below to force their creation now.

  • deps: update dependency aspect_bazel_lib to v2.7.8
  • deps: update rhysd/actionlint to v1.7.1 (com_github_rhysd_actionlint_darwin_amd64, com_github_rhysd_actionlint_darwin_arm64, com_github_rhysd_actionlint_linux_amd64, com_github_rhysd_actionlint_linux_arm64)
  • deps: update GitHub action dependencies (mikepenz/action-junit-report, softprops/action-gh-release)
  • deps: update Go dependencies (cloud.google.com/go/kms, github.com/Azure/azure-sdk-for-go/sdk/azidentity, github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/network/armnetwork/v5, github.com/aws/aws-sdk-go, github.com/siderolabs/talos/pkg/machinery)
  • deps: update Terraform dependencies (aws, azuread, azurerm, cloudinit, google, google-beta, libvirt, random, stackit)
  • deps: update bufbuild/buf to v1.34.0 (com_github_bufbuild_buf_darwin_amd64, com_github_bufbuild_buf_darwin_arm64, com_github_bufbuild_buf_linux_amd64, com_github_bufbuild_buf_linux_arm64)
  • deps: update dependency bazel_skylib to v1.7.1
  • deps: update dependency com_github_medik8s_node_maintainance_operator to v0.17.0
  • deps: update dependency containernetworking/plugins to v1.5.1
  • deps: update dependency kubernetes-sigs/cri-tools to v1.30.0
  • deps: update dependency rules_python to v0.33.2
  • deps: update golangci/golangci-lint to v1.59.1 (com_github_golangci_golangci_lint_darwin_amd64, com_github_golangci_golangci_lint_darwin_arm64, com_github_golangci_golangci_lint_linux_amd64, com_github_golangci_golangci_lint_linux_arm64)
  • deps: update public.ecr.aws/eks/aws-load-balancer-controller Docker tag to v2.8.1
  • deps: update quay.io/medik8s/node-maintenance-operator Docker tag to v0.17.0
  • deps: update Terraform constellation to v2
  • deps: update Terraform openstack to v2
  • deps: update cachix/install-nix-action action to v27
  • deps: update docker/build-push-action action to v6
  • deps: update fedora Docker tag to v41
  • deps: update ubuntu Docker tag to v24
  • deps: update Python dependencies (Pillow, cryptography, matplotlib, pycparser, sev-snp-measure)
  • deps: update dependency @mdx-js/react to v3
  • deps: update dependency clsx to v2
  • deps: update dependency numpy to v2
  • deps: update docusaurus monorepo to v3 (major) (@docusaurus/core, @docusaurus/module-type-aliases, @docusaurus/plugin-google-gtag, @docusaurus/preset-classic, @docusaurus/theme-mermaid)
  • 🔐 Create all rate-limited PRs at once 🔐

Edited/Blocked

These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.


Warning

Renovate failed to look up the following dependencies: Could not determine new digest for update (docker package ghcr.io/edgelesssys/constellation/vpn).

Files affected: dev-docs/howto/vpn/helm/values.yaml


Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

bazel
3rdparty/bazel/com_github_medik8s_node_maintainance_operator/source.bzl
  • com_github_medik8s_node_maintainance_operator v0.15.0
bazel/toolchains/ci_deps.bzl
  • com_github_koalaman_shellcheck_linux_amd64 v0.10.0
  • com_github_koalaman_shellcheck_linux_arm64 v0.10.0
  • com_github_koalaman_shellcheck_darwin_amd64 v0.10.0
  • com_github_rhysd_actionlint_linux_amd64 v1.7.0
  • com_github_rhysd_actionlint_linux_arm64 v1.7.0
  • com_github_rhysd_actionlint_darwin_amd64 v1.7.0
  • com_github_rhysd_actionlint_darwin_arm64 v1.7.0
  • com_github_mvdan_gofumpt_linux_amd64 v0.6.0
  • com_github_mvdan_gofumpt_linux_arm64 v0.6.0
  • com_github_mvdan_gofumpt_darwin_amd64 v0.6.0
  • com_github_mvdan_gofumpt_darwin_arm64 v0.6.0
  • com_github_aquasecurity_tfsec_linux_amd64 v1.28.6
  • com_github_aquasecurity_tfsec_linux_arm64 v1.28.6
  • com_github_aquasecurity_tfsec_darwin_amd64 v1.28.6
  • com_github_aquasecurity_tfsec_darwin_arm64 v1.28.6
  • com_github_golangci_golangci_lint_linux_amd64 v1.58.1
  • com_github_golangci_golangci_lint_linux_arm64 v1.58.1
  • com_github_golangci_golangci_lint_darwin_amd64 v1.58.1
  • com_github_golangci_golangci_lint_darwin_arm64 v1.58.1
  • com_github_bufbuild_buf_linux_amd64 v1.31.0
  • com_github_bufbuild_buf_linux_arm64 v1.31.0
  • com_github_bufbuild_buf_darwin_amd64 v1.31.0
  • com_github_bufbuild_buf_darwin_arm64 v1.31.0
  • com_github_katexochen_ghh_linux_amd64 v0.3.5
  • com_github_katexochen_ghh_linux_arm64 v0.3.5
  • com_github_katexochen_ghh_darwin_amd64 v0.3.5
  • com_github_katexochen_ghh_darwin_arm64 v0.3.5
bazel/toolchains/container_images.bzl
  • distroless_static sha256:41972110a1c1a5c0b6adb283e8aa092c43c31f7c5d79b8656fbffff2c3e61f05
  • libvirtd_base sha256:99dbf3cf69b3f97cb0158bde152c9bc7c2a96458cf462527ee80b75754f572a7
bazel/toolchains/multirun_deps.bzl
  • com_github_ash2k_bazel_tools ad2d84beb4e577bda323c8517533b046ed34e6ad
bazel/toolchains/nixpkgs_deps.bzl
  • io_tweag_rules_nixpkgs v0.11.1
bazel/toolchains/oci_deps.bzl
  • rules_oci c622bf79d269473d3d9bc33510e16cfd9a1142bc
bazel-module
MODULE.bazel
  • aspect_bazel_lib 2.7.7
  • bazel_skylib 1.6.1
  • gazelle 476a9447de621f3d6ab0154cc5683b989c79f9c1
  • hermetic_cc_toolchain 3.1.0
  • rules_cc 0.0.9
  • rules_go 0.48.1
  • rules_pkg 0.10.1
  • rules_proto 6.0.2
  • rules_python 0.32.2
  • buildifier_prebuilt 6.4.0
bazelisk
.bazelversion
  • bazel 7.2.0
dockerfile
.github/actions/versionsapi/Dockerfile
  • golang 1.22.4@sha256:a66eda637829ce891e9cf61ff1ee0edf544e1f6c5b0e666c7310dce231a66f28
3rdparty/gcp-guest-agent/Dockerfile
  • ubuntu 22.04@sha256:a6d2b38300ce017add71440577d5b0a90460d0e57fd7aec21dd0d1b0761bbfb2
debugd/filebeat/Dockerfile
  • fedora 40@sha256:5ce8497aeea599bf6b54ab3979133923d82aaa4f6ca5ced1812611b197c79eb0
debugd/logstash/Dockerfile
  • fedora 40@sha256:5ce8497aeea599bf6b54ab3979133923d82aaa4f6ca5ced1812611b197c79eb0
  • fedora 40@sha256:5ce8497aeea599bf6b54ab3979133923d82aaa4f6ca5ced1812611b197c79eb0
debugd/metricbeat/Dockerfile
  • fedora 40@sha256:5ce8497aeea599bf6b54ab3979133923d82aaa4f6ca5ced1812611b197c79eb0
docs/screencasts/docker/Dockerfile
  • ubuntu 22.04@sha256:a6d2b38300ce017add71440577d5b0a90460d0e57fd7aec21dd0d1b0761bbfb2
github-actions
.github/actions/artifact_download/action.yml
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
.github/actions/artifact_upload/action.yml
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
.github/actions/build_cli/action.yml
  • sigstore/cosign-installer v3.5.0@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20
.github/actions/build_micro_service/action.yml
  • docker/metadata-action v5.5.1@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
  • docker/build-push-action v5.4.0@ca052bb54ab0790a636c9b5f226502c73d547a25
.github/actions/cdbg_deploy/action.yml
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/constellation_destroy/action.yml
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/constellation_iam_destroy/action.yml
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/container_registry_login/action.yml
  • docker/login-action v3.2.0@0d4c9c5ea7693da7b068278f7b52bda2a190a446
.github/actions/container_sbom/action.yml
  • sigstore/cosign-installer v3.5.0@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20
.github/actions/deploy_logcollection/action.yml
  • azure/setup-helm v4.2.0@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814
.github/actions/download_release_binaries/action.yml
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
.github/actions/e2e_attestationconfigapi/action.yml
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/e2e_benchmark/action.yml
  • actions/setup-python v5.1.0@82c7e631bb3cdc910f68e0081d67478d79c6982d
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/e2e_cleanup_timeframe/action.yml
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/e2e_mini/action.yml
  • hashicorp/setup-terraform v3.1.1@651471c36a6092792c552e8b1bef71e592b462d8
.github/actions/e2e_sonobuoy/action.yml
  • mikepenz/action-junit-report v4.2.2@ac30be7acb0a361e5492575ab42e47fcadec4928
.github/actions/e2e_test/action.yml
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/e2e_verify/action.yml
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/find_latest_image/action.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/login_azure/action.yml
  • azure/login v2.1.1@6c251865b4e6290e7b78be643ea2d005bc51f69a
.github/actions/login_gcp/action.yml
  • google-github-actions/auth v2.1.3@71fee32a0bb7e97b4d33d548e7d957010649d8fa
  • google-github-actions/setup-gcloud v2.1.0@98ddc00a17442e89a24bbf282954a3b65ce6d200
.github/actions/publish_helmchart/action.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • peter-evans/create-pull-request v6.1.0@c5a7806660adbe173f04e3e038b0ccdcd758773c
.github/actions/select_image/action.yml
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/actions/setup_bazel_nix/action.yml
  • cachix/install-nix-action v26@8887e596b4ee1134dae06b98d573bd674693f47c
.github/actions/upload_terraform_module/action.yml
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
.github/workflows/assign_reviewer.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
.github/workflows/aws-snp-launchmeasurement.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • cachix/install-nix-action v26@8887e596b4ee1134dae06b98d573bd674693f47c
  • robinraju/release-downloader v1.10@c39a3b234af58f0cf85888573d361fb6fa281534
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ubuntu 22.04
.github/workflows/build-binaries.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
.github/workflows/build-ccm-gcp.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/setup-go v5.0.1@cdcb36043654635271a94b9a6d1392de5bb323a7
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/metadata-action v5.5.1@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
  • docker/build-push-action v5.4.0@ca052bb54ab0790a636c9b5f226502c73d547a25
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/build-gcp-guest-agent.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • docker/metadata-action v5.5.1@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
  • docker/build-push-action v5.4.0@ca052bb54ab0790a636c9b5f226502c73d547a25
  • ubuntu 22.04
.github/workflows/build-libvirt-container.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ubuntu 22.04
.github/workflows/build-logcollector-images.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ubuntu 22.04
.github/workflows/build-os-image-scheduled.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/setup-go v5.0.1@cdcb36043654635271a94b9a6d1392de5bb323a7
  • peter-evans/create-pull-request v6.1.0@c5a7806660adbe173f04e3e038b0ccdcd758773c
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/build-os-image.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • ubuntu 22.04
.github/workflows/build-versionsapi-ci-image.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ubuntu 22.04
.github/workflows/check-links.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • lycheeverse/lychee-action v1.10.0@2b973e86fc7b1f6b36a93795fe2c9c6ae1118621
  • ubuntu 22.04
.github/workflows/codeql.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/setup-go v5.0.1@cdcb36043654635271a94b9a6d1392de5bb323a7
  • github/codeql-action v3.25.10@23acc5c183826b7a8a97bce3cecc52db901f8251
  • github/codeql-action v3.25.10@23acc5c183826b7a8a97bce3cecc52db901f8251
  • ubuntu 22.04
.github/workflows/docs-vale.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • errata-ai/vale-action 91ac403e8d26f5aa1b3feaa86ca63065936a85b6
  • ubuntu 22.04
.github/workflows/draft-release.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • sigstore/cosign-installer v3.5.0@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • slsa-framework/slsa-github-generator v2.0.0
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • softprops/action-gh-release v2.0.5@69320dbe05506a9a39fc8ae11030b214ec2d1f87
  • softprops/action-gh-release v2.0.5@69320dbe05506a9a39fc8ae11030b214ec2d1f87
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/e2e-attestationconfigapi.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ubuntu 22.04
.github/workflows/e2e-cleanup-weekly.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
.github/workflows/e2e-mini.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • azure/login v2.1.1@6c251865b4e6290e7b78be643ea2d005bc51f69a
  • ubuntu 22.04
.github/workflows/e2e-test-daily.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • azure/login v2.1.1@6c251865b4e6290e7b78be643ea2d005bc51f69a
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/e2e-test-provider-example.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • ubuntu 22.04
.github/workflows/e2e-test-release.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • google-github-actions/setup-gcloud v2.1.0@98ddc00a17442e89a24bbf282954a3b65ce6d200
.github/workflows/e2e-test-weekly.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • azure/login v2.1.1@6c251865b4e6290e7b78be643ea2d005bc51f69a
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/e2e-test.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • google-github-actions/setup-gcloud v2.1.0@98ddc00a17442e89a24bbf282954a3b65ce6d200
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/e2e-upgrade.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/e2e-windows.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ubuntu 22.04
  • windows 2022
  • ubuntu 22.04
.github/workflows/on-release.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/purge-main.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • ubuntu 22.04
.github/workflows/release.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • peter-evans/create-pull-request v6.1.0@c5a7806660adbe173f04e3e038b0ccdcd758773c
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/setup-go v5.0.1@cdcb36043654635271a94b9a6d1392de5bb323a7
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/reproducible-builds.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/download-artifact v4.1.7@65a9edc5881444af0b9093a5e628f2fe47ea3b2e
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/scorecard.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ossf/scorecard-action v2.3.3@dc50aa9510b46c811795eb24b2f1ba02a914e534
  • actions/upload-artifact v4.3.3@65462800fd760344b1a7b4382951275a0abb4808
  • github/codeql-action v3.25.10@23acc5c183826b7a8a97bce3cecc52db901f8251
  • ubuntu 22.04
.github/workflows/sync-terraform-docs.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • peter-evans/create-pull-request v6.1.0@c5a7806660adbe173f04e3e038b0ccdcd758773c
  • peter-evans/enable-pull-request-automerge v3.0.0@a660677d5469627102a1c1e11409dd063606628d
.github/workflows/test-integration.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • ubuntu 22.04
.github/workflows/test-operator-codegen.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • actions/setup-go v5.0.1@cdcb36043654635271a94b9a6d1392de5bb323a7
  • ubuntu 22.04
.github/workflows/test-tfsec.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aquasecurity/tfsec-pr-commenter-action 5b483d46fb4fd0cbe2259cf68354a3fb23aa70fe
  • ubuntu 22.04
.github/workflows/test-tidy.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
.github/workflows/test-unittest.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • marocchino/sticky-pull-request-comment v2.9.0@331f8f5b4215f0445d3c07b4967662a32a2d3e31
.github/workflows/update-rpms.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • peter-evans/create-pull-request v6.1.0@c5a7806660adbe173f04e3e038b0ccdcd758773c
  • ubuntu 22.04
.github/workflows/versionsapi.yml
  • actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • aws-actions/configure-aws-credentials v4.0.2@e3dd6a429d7300a6a4c196c26e071d42e0343502
  • ubuntu 22.04
gomod
go.mod
  • go 1.22.3
  • github.com/daniel-weisse/go-cryptsetup v0.0.0-20230705150314-d8c07bd1723c@d8c07bd1723c
  • cloud.google.com/go/compute v1.27.0
  • cloud.google.com/go/compute/metadata v0.3.0
  • cloud.google.com/go/kms v1.17.1
  • cloud.google.com/go/secretmanager v1.13.1
  • cloud.google.com/go/storage v1.42.0
  • dario.cat/mergo v1.0.0
  • github.com/Azure/azure-sdk-for-go v68.0.0+incompatible
  • github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0
  • github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.6.0
  • github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.7.0
  • github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/network/armnetwork/v5 v5.1.1
  • github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azsecrets v1.1.0
  • github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2
  • github.com/BurntSushi/toml v1.4.0
  • github.com/aws/aws-sdk-go v1.54.5
  • github.com/aws/aws-sdk-go-v2 v1.30.0
  • github.com/aws/aws-sdk-go-v2/config v1.27.21
  • github.com/aws/aws-sdk-go-v2/credentials v1.17.21
  • github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.8
  • github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.1
  • github.com/aws/aws-sdk-go-v2/service/autoscaling v1.41.1
  • github.com/aws/aws-sdk-go-v2/service/cloudfront v1.37.1
  • github.com/aws/aws-sdk-go-v2/service/ec2 v1.165.1
  • github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2 v1.32.1
  • github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.22.1
  • github.com/aws/aws-sdk-go-v2/service/s3 v1.56.1
  • github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.31.1
  • github.com/aws/smithy-go v1.20.2
  • github.com/bazelbuild/buildtools v0.0.0-20240606140350-80f1f6802857@80f1f6802857
  • github.com/bazelbuild/rules_go v0.48.1
  • github.com/coreos/go-systemd/v22 v22.5.0
  • github.com/docker/docker v26.1.4+incompatible
  • github.com/edgelesssys/go-azguestattestation v0.0.0-20240513062303-05f8770a633d@05f8770a633d
  • github.com/edgelesssys/go-tdx-qpl v0.0.0-20240123150912-dcad3c41ec5f@dcad3c41ec5f
  • github.com/foxboron/go-uefi v0.0.0-20240522180132-205d5597883a@205d5597883a
  • github.com/fsnotify/fsnotify v1.7.0
  • github.com/go-playground/locales v0.14.1
  • github.com/go-playground/universal-translator v0.18.1
  • github.com/go-playground/validator/v10 v10.22.0
  • github.com/golang-jwt/jwt/v5 v5.2.1
  • github.com/google/go-sev-guest v0.11.1
  • github.com/google/go-tdx-guest v0.3.1
  • github.com/google/go-tpm v0.9.1
  • github.com/google/go-tpm-tools v0.4.4
  • github.com/google/uuid v1.6.0
  • github.com/googleapis/gax-go/v2 v2.12.5
  • github.com/gophercloud/gophercloud v1.12.0
  • github.com/gophercloud/utils v0.0.0-20231010081019-80377eca5d56@80377eca5d56
  • github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0
  • github.com/hashicorp/go-kms-wrapping/v2 v2.0.16
  • github.com/hashicorp/go-kms-wrapping/wrappers/awskms/v2 v2.0.9
  • github.com/hashicorp/go-kms-wrapping/wrappers/azurekeyvault/v2 v2.0.11
  • github.com/hashicorp/go-kms-wrapping/wrappers/gcpckms/v2 v2.0.12
  • github.com/hashicorp/go-version v1.7.0
  • github.com/hashicorp/hc-install v0.7.0
  • github.com/hashicorp/hcl/v2 v2.21.0
  • github.com/hashicorp/terraform-exec v0.21.0
  • github.com/hashicorp/terraform-json v0.22.1
  • github.com/hashicorp/terraform-plugin-framework v1.9.0
  • github.com/hashicorp/terraform-plugin-framework-validators v0.12.0
  • github.com/hashicorp/terraform-plugin-go v0.23.0
  • github.com/hashicorp/terraform-plugin-log v0.9.0
  • github.com/hashicorp/terraform-plugin-testing v1.8.0
  • github.com/hexops/gotextdiff v1.0.3
  • github.com/martinjungblut/go-cryptsetup v0.0.0-20220520180014-fd0874fd07a6@fd0874fd07a6
  • github.com/mattn/go-isatty v0.0.20
  • github.com/mitchellh/go-homedir v1.1.0
  • github.com/onsi/ginkgo/v2 v2.19.0
  • github.com/onsi/gomega v1.33.1
  • github.com/pkg/errors v0.9.1
  • github.com/regclient/regclient v0.6.1
  • github.com/rogpeppe/go-internal v1.12.0
  • github.com/samber/slog-multi v1.1.0
  • github.com/schollz/progressbar/v3 v3.14.4
  • github.com/secure-systems-lab/go-securesystemslib v0.8.0
  • github.com/siderolabs/talos/pkg/machinery v1.7.4
  • github.com/sigstore/rekor v1.3.6
  • github.com/sigstore/sigstore v1.8.4
  • github.com/spf13/afero v1.11.0
  • github.com/spf13/cobra v1.8.1
  • github.com/spf13/pflag v1.0.5
  • github.com/stretchr/testify v1.9.0
  • github.com/tink-crypto/tink-go/v2 v2.2.0
  • github.com/vincent-petithory/dataurl v1.0.0
  • go.etcd.io/etcd/api/v3 v3.5.14
  • go.etcd.io/etcd/client/pkg/v3 v3.5.14
  • go.etcd.io/etcd/client/v3 v3.5.14
  • go.uber.org/goleak v1.3.0
  • golang.org/x/crypto v0.24.0
  • golang.org/x/exp v0.0.0-20240613232115-7f521ea00fb8@7f521ea00fb8
  • golang.org/x/mod v0.18.0
  • golang.org/x/sys v0.21.0
  • golang.org/x/text v0.16.0
  • golang.org/x/tools v0.22.0
  • google.golang.org/api v0.185.0
  • google.golang.org/grpc v1.64.0
  • google.golang.org/protobuf v1.34.2
  • gopkg.in/yaml.v3 v3.0.1
  • helm.sh/helm v2.17.0+incompatible
  • helm.sh/helm/v3 v3.15.2
  • k8s.io/api v0.30.2
  • k8s.io/apiextensions-apiserver v0.30.2
  • k8s.io/apimachinery v0.30.2
  • k8s.io/apiserver v0.30.2
  • k8s.io/client-go v0.30.2
  • k8s.io/cluster-bootstrap v0.30.2
  • k8s.io/kubelet v0.30.2
  • k8s.io/kubernetes v1.30.2
  • k8s.io/mount-utils v0.30.2
  • k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0@fe8a2dddb1d0
  • libvirt.org/go/libvirt v1.10003.0
  • sigs.k8s.io/controller-runtime v0.18.4
  • sigs.k8s.io/yaml v1.4.0
hack/tools/go.mod
  • go 1.22
  • github.com/google/go-licenses v1.6.0
  • github.com/google/keep-sorted v0.4.0
  • github.com/katexochen/sh/v3 v3.8.0
  • golang.org/x/tools v0.22.0
  • golang.org/x/vuln v1.1.2
helm-values
dev-docs/howto/vpn/helm/values.yaml
  • ghcr.io/edgelesssys/constellation/vpn sha256:34e28ced172d04dfdadaadbefb1a53b5857cb24fb24e275fbbc537f3639a789e
hack/logcollector/internal/templates/filebeat/values.yml
hack/logcollector/internal/templates/logstash/values.yml
internal/constellation/helm/charts/aws-load-balancer-controller/values.yaml
  • public.ecr.aws/eks/aws-load-balancer-controller v2.5.3
internal/constellation/helm/charts/edgeless/csi/charts/snapshot-controller/values.yaml
  • registry.k8s.io/sig-storage/snapshot-controller v8.0.1@sha256:32b8e4254751c9935c796e6e5c07fe804250bd5032ab78f7133a00f75d504596
  • registry.k8s.io/sig-storage/snapshot-validation-webhook v8.0.1@sha256:7f058f8b3faac68d93c0abf2b97532820ec8ffff944f5919ce7039506ca24cbd
internal/constellation/helm/charts/yawol/charts/yawol-controller/values.yaml
  • ghcr.io/malt3/yawol/yawol-cloud-controller yawol-controller-0.20.0-4-g6212876@sha256:ad83538fadc5d367700f75fc71c67697338307fdd81214dfc99b4cf425b8cb30
  • ghcr.io/malt3/yawol/yawol-controller yawol-controller-0.20.0-4-g6212876@sha256:290250a851de2cf4cb6eab2d40b36724c8321b7c3c36da80fd3e2333ed6808d0
s3proxy/deploy/s3proxy/values.yaml
  • ghcr.io/edgelesssys/constellation/s3proxy v2.17.0
helmv3
internal/constellation/helm/charts/edgeless/constellation-services/Chart.yaml
internal/constellation/helm/charts/edgeless/csi/Chart.yaml
internal/constellation/helm/charts/edgeless/operators/Chart.yaml
internal/constellation/helm/charts/yawol/Chart.yaml
npm
docs/package.json
  • @cmfcmf/docusaurus-search-local ^1.1.0
  • @docusaurus/core ^2.2.0
  • @docusaurus/module-type-aliases ^2.2.0
  • @docusaurus/plugin-google-gtag ^2.4.1
  • @docusaurus/preset-classic ^2.4.1
  • @docusaurus/theme-mermaid ^2.4.1
  • @mdx-js/react ^1.6.22
  • asciinema-player ^3.5.0
  • clsx ^1.2.1
  • prism-react-renderer ^2.0.6
  • react ^17.0.2
  • react-dom ^17.0.2
  • node >=16.14
pip_requirements
.github/actions/e2e_benchmark/evaluate/requirements.txt
  • numpy ==1.26.4
  • matplotlib ==3.8.3
  • Pillow ==10.2.0
.github/workflows/aws-snp-launchmeasurements-requirements.txt
  • cffi ==1.16.0
  • cryptography ==42.0.4
  • pycparser ==2.21
  • sev-snp-measure ==0.0.9
  • types-cryptography ==3.3.23.2
regex
internal/versions/versions.go
  • ghcr.io/edgelesssys/gcp-guest-agent v20240611.1.0@sha256:e751fda68957a70c8494999115aba2ccbc1e2f31d85986b7e133cbe02187da23
  • quay.io/medik8s/node-maintenance-operator v0.15.0@sha256:8cb8dad93283268282c30e75c68f4bd76b28def4b68b563d2f9db9c74225d634
  • ghcr.io/edgelesssys/constellation/logstash-debugd v2.17.0-pre.0.20240524110423-80917921e3d6@sha256:2665a8c1cdf6f88a348a69050b3da63aeac1f606dc55b17ddc00bf4adfa67a1a
  • ghcr.io/edgelesssys/constellation/filebeat-debugd v2.17.0-pre.0.20240524110423-80917921e3d6@sha256:a58db8fef0740e0263d1c407f43f2fa05fdeed200b32ab58d32fb11873477231
  • ghcr.io/edgelesssys/constellation/metricbeat-debugd v2.17.0-pre.0.20240524110423-80917921e3d6@sha256:2a384e4120ad0e46e1841205fde75f9c726c14c31be0a88bf08ae14d8c4d6069
  • registry.k8s.io/provider-aws/cloud-controller-manager v1.28.6@sha256:bb42961c336c2dbc736c1354b01208523962b4f4be4ebd582f697391766d510e
  • mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager v1.28.9@sha256:7abf813ad41b8f1ed91f50bfefb6f285b367664c57758af0b5a623b65f55b34b
  • mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager v1.28.9@sha256:65285c13cc3eced1005a1c6c5f727570d781ac25f421a9e5cf169de8d7e1d6a9
  • ghcr.io/edgelesssys/cloud-provider-gcp v28.10.0@sha256:f3b6fa7faea27b4a303c91b3bc7ee192b050e21e27579e9f3da90ae4ba38e626
  • docker.io/k8scloudprovider/openstack-cloud-controller-manager v1.26.4@sha256:05e846fb13481b6dbe4a1e50491feb219e8f5101af6cf662a086115735624db0
  • registry.k8s.io/autoscaling/cluster-autoscaler v1.28.5@sha256:d82acf2ae3287227b979fa7068dae7c2db96de22c96295d2e89029065e895bca
  • registry.k8s.io/provider-aws/cloud-controller-manager v1.29.3@sha256:26a61ab55d8be6365348b78c3cf691276a7b078c58dd789729704fb56bcaac8c
  • mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager v1.29.7@sha256:01f3e57f0dfed05a940face4f1a543eed613e3bfbfb1983997f75a4a11a6e0bb
  • mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager v1.29.7@sha256:79f1ebd0f462713dd20b9630ae177ab1b1dc14f0faae7c8fd4837e8d494d11b9
  • ghcr.io/edgelesssys/cloud-provider-gcp v29.5.0@sha256:4b17e16f9b5dfa20c9ff62cb382c3f754b70ed36be7f85cc3f46213dae21ec91
  • docker.io/k8scloudprovider/openstack-cloud-controller-manager v1.26.4@sha256:05e846fb13481b6dbe4a1e50491feb219e8f5101af6cf662a086115735624db0
  • registry.k8s.io/autoscaling/cluster-autoscaler v1.29.3@sha256:ea973cf727ff98a0c8c8f770a0787c26a86604182565ce180a5665936f3b38bc
  • registry.k8s.io/provider-aws/cloud-controller-manager v1.30.1@sha256:ee0e0c0de56e7dace71e2b4a0e45dcdae84e325c78f72c5495b109976fb3362c
  • mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager v1.30.3@sha256:0c74b1d476f30bcd4c68d1bb2cce6957f9dfeae529b7260f21b0059e0a6b4450
  • mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager v1.30.3@sha256:7605738917b4f9c55dba985073724de359d97f02688f62e65b4173491b2697ce
  • ghcr.io/edgelesssys/cloud-provider-gcp v30.0.0@sha256:529382b3a16cee9d61d4cfd9b8ba74fff51856ae8cbaf1825c075112229284d9
  • docker.io/k8scloudprovider/openstack-cloud-controller-manager v1.26.4@sha256:05e846fb13481b6dbe4a1e50491feb219e8f5101af6cf662a086115735624db0
  • registry.k8s.io/autoscaling/cluster-autoscaler v1.30.1@sha256:21a0003fa059f679631a301ae09d9369e1cf6a1c063b78978844ccd494dab38a
internal/versions/versions.go
  • kubernetes/kubernetes v1.28.11
  • kubernetes/kubernetes v1.28.11
  • kubernetes/kubernetes v1.28.11
  • kubernetes/kubernetes v1.29.6
  • kubernetes/kubernetes v1.29.6
  • kubernetes/kubernetes v1.29.6
  • kubernetes/kubernetes v1.30.2
  • kubernetes/kubernetes v1.30.2
  • kubernetes/kubernetes v1.30.2
  • kubernetes/kubernetes v1.28.11
  • kubernetes/kubernetes v1.29.6
  • kubernetes/kubernetes v1.30.2
  • kubernetes/kubernetes v1.28.11
  • kubernetes/kubernetes v1.29.6
  • kubernetes/kubernetes v1.30.2
internal/versions/versions.go
  • kubernetes-sigs/cri-tools v1.28.0
  • kubernetes-sigs/cri-tools v1.29.0
  • kubernetes-sigs/cri-tools v1.30.0
internal/versions/versions.go
  • containernetworking/plugins v1.4.0
  • containernetworking/plugins v1.4.0
  • containernetworking/plugins v1.4.0
.github/actions/e2e_s3proxy/action.yml
  • ghcr.io/edgelesssys/mint v2.0.0@sha256:cf82f029ca77fd4ade4fb36f19945f44e58b1d03c1acb930d95ae7ec75a25c22
terraform
dev-docs/howto/vpn/on-prem-terraform/main.tf
  • azurerm 3.92.0
  • random 3.6.0
dev-docs/miniconstellation/azure-terraform/main.tf
  • azurerm 3.92.0
  • cloudinit 2.3.3
  • random 3.6.0
  • tls 4.0.5
e2e/miniconstellation/main.tf
  • azurerm 3.92.0
  • cloudinit 2.3.3
  • random 3.6.0
  • tls 4.0.5
terraform-provider-constellation/examples/full/aws/main.tf
  • constellation 0.0.0
  • random 3.6.0
  • aws_iam undefined
  • aws_infrastructure undefined
terraform-provider-constellation/examples/full/azure/main.tf
  • constellation 0.0.0
  • random 3.6.0
  • azure_iam undefined
  • azure_infrastructure undefined
terraform-provider-constellation/examples/full/gcp/main.tf
  • constellation 0.0.0
  • random 3.6.0
  • gcp_iam undefined
  • gcp_infrastructure undefined
terraform-provider-constellation/examples/full/stackit/main.tf
  • constellation 0.0.0
  • random 3.6.0
  • stackit_infrastructure undefined
terraform-provider-constellation/examples/provider/provider.tf
terraform/infrastructure/aws/main.tf
  • aws 5.37.0
  • random 3.6.0
terraform/infrastructure/aws/modules/instance_group/main.tf
  • aws 5.37.0
  • random 3.6.0
terraform/infrastructure/aws/modules/jump_host/main.tf
  • aws 5.37.0
terraform/infrastructure/aws/modules/load_balancer_target/main.tf
  • aws 5.37.0
terraform/infrastructure/aws/modules/public_private_subnet/main.tf
  • aws 5.37.0
terraform/infrastructure/azure/main.tf
  • azurerm 3.92.0
  • random 3.6.0
terraform/infrastructure/azure/modules/load_balancer_backend/main.tf
  • azurerm 3.92.0
terraform/infrastructure/azure/modules/scale_set/main.tf
  • azurerm 3.92.0
  • random 3.6.0
terraform/infrastructure/gcp/main.tf
  • google 5.23.0
  • google-beta 5.23.0
  • random 3.6.0
terraform/infrastructure/gcp/modules/instance_group/main.tf
  • google 5.23.0
  • google-beta 5.23.0
  • random 3.6.0
terraform/infrastructure/gcp/modules/internal_load_balancer/main.tf
  • google 5.23.0
terraform/infrastructure/gcp/modules/jump_host/main.tf
  • google 5.23.0
terraform/infrastructure/gcp/modules/loadbalancer/main.tf
  • google 5.23.0
terraform/infrastructure/iam/aws/main.tf
  • aws 5.37.0
terraform/infrastructure/iam/azure/main.tf
  • azuread 2.43.0
  • azurerm 3.92.0
terraform/infrastructure/iam/gcp/main.tf
  • google 5.23.0
terraform/infrastructure/openstack/main.tf
  • openstack 1.54.1
  • random 3.6.0
  • stackit 0.17.0
terraform/infrastructure/openstack/modules/instance_group/main.tf
  • openstack 1.54.1
terraform/infrastructure/openstack/modules/loadbalancer/main.tf
  • openstack 1.54.1
terraform/infrastructure/openstack/modules/stackit_loadbalancer/main.tf
  • stackit 0.17.0
terraform/infrastructure/qemu/main.tf
  • docker 3.0.2
  • libvirt 0.7.1
terraform/infrastructure/qemu/modules/instance_group/main.tf
  • libvirt 0.7.1
  • random 3.6.0
terraform/legacy-module/aws-constellation/main.tf
terraform/legacy-module/azure-constellation/main.tf
terraform/legacy-module/gcp-constellation/main.tf

Failed to create cluster behind proxy.

Issue description

I am following the first steps (local) on a TDX-enabled kernel, but constellation create fails.

constellation version:

Version:        v2.13.0 (Enterprise build; see documentation for license agreement)
GitCommit:      ea1fe82682889056d1b5ede058927ed5960ccb01
GitTreeState:   clean
BuildDate:      2023-11-14T08:51:53
GoVersion:      go1.21.4
Compiler:       bazel/gc
Platform:       linux/amd64

OS: Ubuntu 22.04.3 LTS
Kernel: 5.19.17

Steps to reproduce the behavior

Run constellation mini up --debug

Creating cluster in QEMU
Error: creating cluster: creating terraform variables: fetching image reference: sending request for versionsapi.ImageInfo: Get "https://cdn.confidential.cloud/constellation/v2/ref/-/stream/stable/v2.13.0/image/info.json": context canceled

I'm using my company machine, which connects to the internet through an HTTP proxy, and I configured the proxy correctly before running the command. Does the constellation client tool use the proxy?

export https_proxy=http://proxy-host:proxy-port
export http_proxy=http://proxy-host:proxy-port

But I still got the error. Here are the details with debug output:

2023-11-23T06:40:32Z    DEBUG   cmd/miniup_linux_amd64.go:35    Checked arch and os
2023-11-23T06:40:32Z    DEBUG   cmd/miniup_linux_amd64.go:40    Checked that /dev/kvm exists
2023-11-23T06:40:32Z    DEBUG   cmd/miniup_linux_amd64.go:48    Checked CPU cores - there are 192
2023-11-23T06:40:32Z    DEBUG   cmd/miniup_linux_amd64.go:66    Scanned for available memory
2023-11-23T06:40:32Z    DEBUG   cmd/miniup_linux_amd64.go:74    Checked available memory, you have 1006GB available
2023-11-23T06:40:32Z    DEBUG   cmd/miniup_linux_amd64.go:84    Checked for free space available, you have 117GB available
A config file already exists in the configured workspace.
2023-11-23T06:40:40Z    DEBUG   cmd/miniup.go:187       Creating mini cluster
Error: creating cluster: creating terraform variables: fetching image reference: sending request for versionsapi.ImageInfo: Get "https://cdn.confidential.cloud/constellation/v2/ref/-/stream/stable/v2.13.0/image/info.json": dial tcp 13.225.103.76:443: connect: connection timed out

However, I can download the JSON file with wget:

Proxy request sent, awaiting response... 200 OK
Length: 1937 (1.9K) [application/octet-stream]
Saving to: ‘info.json’

info.json                          100%[================================================================>]   1.89K  --.-KB/s    in 0s

2023-11-23 07:21:11 (854 MB/s) - ‘info.json’ saved [1937/1937]
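
To confirm that the proxy itself can reach the CDN, a minimal check (a hedged sketch; proxy-host and proxy-port are placeholders, and both variable spellings are exported since Go's HTTP client reads either):

export HTTPS_PROXY=http://proxy-host:proxy-port
export HTTP_PROXY=http://proxy-host:proxy-port
curl -sI https://cdn.confidential.cloud/constellation/v2/ref/-/stream/stable/v2.13.0/image/info.json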

iptables:

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N LIBVIRT_FWI
-N LIBVIRT_FWO
-N LIBVIRT_FWX
-N LIBVIRT_INP
-N LIBVIRT_OUT
-A INPUT -j LIBVIRT_INP
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A OUTPUT -j LIBVIRT_OUT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A LIBVIRT_FWI -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWO -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT

Version

v2.13.0

Constellation Config

auto generated

Support new AWS image region: eu-west-1

Use case

According to the doc, Constellation OS images are not available in region eu-west-1. Could you please make it available?

Describe your solution

Make Constellation images available in region eu-west-1

Additional context

Need to deploy in this region

Dashboard/analysis for state of confidentiality

Use case

make the state of confidentiality really transparent and provide a good chance to optimize it

Describe your solution

Since confidentiality is the main selling point of Constellation: are there already any plans to make this visible via something like a dashboard or an analysis script showing the state of confidentiality?

depending on

  • Constellation's version
  • Config
  • Hardware type
  • supported hardware features (CPU/Platform)
  • Cloud providers features (subset of Hardware features)
  • kernel versions used
  • Firmware version (Patches)
  • ...

This is not only about the current state but also about giving concrete advice on how to optimize confidentiality, e.g.

  • move to Azure
  • use instances with newer hardware gens
  • update Constellation
  • change config to
    ...

Would you be willing to implement this feature?

  • Yes, I could contribute this feature.

Document Terraform support

Document how Constellation can be used with resource provisioning via Terraform instead of constellation create.

Local version support and customization options

Hi,

My case is: I'm looking for a secure setup for Kubernetes like the one Constellation provides, only I need it to wrap around my virtual cluster inside my actual cluster. I don't have any cloud provider for this, and I'm not sure the machine supports the encryption features you rely on from the processor. Also, from what I understand, you only support a local cluster version for testing, and only the cloud versions are production-ready, with only 3 providers supported, correct?

So my question is: would it be possible to have a local, less secure version of what you offer using some configuration, or does my use case require a different product/solution? If it's the latter, maybe you can suggest something? If not, I thank you for your time anyway.

SEV-SNP Machines on AWS potentially hand out broken attestation reports

Issue Description

We recently observed that virtual machines on AWS with AMD SEV-SNP enabled do not reliably contain a (functioning) SEV-SNP device. Machines where this is the case will not be able to join or bootstrap a Constellation cluster, as they are not able to hand out a valid attestation report. Therefore, the issue does not impact Constellation's security guarantees.

The issue may show different symptoms, depending on which part of a Constellation cluster the broken VM is.

  • When a machine trying to bootstrap the Constellation cluster is broken, the CLI will show an error stating that an invalid attestation report has been supplied when trying to apply the initial Constellation cluster configuration on it.
  • When a machine is trying to join a Constellation cluster, be that a cluster in its bootstrapping process or a cluster being upgraded, it will be rejected by Constellation's join-service, as it is not able to supply a valid attestation report. When bootstrapping a cluster, this leads to the node simply not being able to join the cluster. On an upgrade, where Kubernetes operators manage the VM lifecycle, this rejection leads to nodes being re-provisioned until a VM with a working device is received.

The issue has already been reported to the AWS team and they are working on fixing it.

Possible Workarounds

The issue is not present on all machines, so it is still possible to create a functioning Constellation cluster in most cases. If you run into the issue on a machine, the following workarounds can help.

  • Try to provision another VM. It is recommended to provision VMs in the same region until you get a working one, and then terminate all non-working VMs so you don't receive the same machine again when re-provisioning. The same can be achieved by provisioning a VM in another region, but as AWS does not provide SEV-SNP machines in all regions, you might run into availability issues, depending on which region is used.
    To do so, you can navigate to the constellation-terraform directory in your Constellation workspace (or the directory containing the infrastructure configuration, if not using the Constellation CLI) to destroy and re-apply the instance group, which contains the VMs, and apply the Constellation configuration again.
    cd constellation-terraform
    terraform destroy
    terraform apply
    constellation apply
  • If the deployment is non-production, you can also use AWS NitroTPM attestation instead of SEV-SNP. To do so, remove the attestation.awsSEVSNP block from constellation-conf.yaml and insert the following block instead (a scripted variant is sketched after this list):
    awsNitroTPM:
      measurements: {}
    After that, destroy the cluster, fetch the measurements for machines with NitroTPM attestation, and recreate the cluster.
    constellation terminate
    constellation config fetch-measurements
    constellation create
    constellation apply
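
For the config edit in the second workaround, here is a scripted variant (a hedged sketch using mikefarah's yq v4; the key names follow the snippet above):

yq -i 'del(.attestation.awsSEVSNP) | .attestation.awsNitroTPM.measurements = {}' constellation-conf.yaml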

Command: [constellation version] returns 0.0.0 instead of 2.1.0

Issue description

The latest constellation cli release (2.1.0) has a bug when displaying the version of the constellation binary.

When running constellation version the following version is returned: 0.0.0

(Screenshot: output of constellation version showing 0.0.0)

To reproduce

Steps to reproduce the behavior:

  1. Download and install latest constellation binary for macOS (intel) (version 2.1.0 at the time of writing this)
  2. Execute the following command: constellation version
  3. Notice how the displayed version on STD out is 0.0.0 instead of 2.1.0

Bare Metal Cluster Support

Hi,

I love the look of this project and am considering trying it out on a cluster for myself.
However, I was wondering if Constellation currently has support for bare-metal clusters? I'd like to bootstrap it to replace some microk8s clusters. Is something like this in the pipeline?

Thanks,
Nick

provide crossplane composition for a constellation cluster setup

Use case

More and more people adopt Crossplane to set up and manage "everything", i.e., to make the specialized knowledge of how to set up the needed resources available, in a simplified way, to everyone who could benefit from it.

=> It would be pretty handy to have a Crossplane composition that sets up not only a standard Kubernetes cluster in any cloud, but also a confidential one based on Constellation (in the clouds with supported hardware).

Here is just a quick video on what is possible this way:
Create And Manage GitOps-Ready Kubernetes Clusters With Crossplane
https://www.youtube.com/watch?v=AVHyltqgmSU
and the docs on compositions:
see https://docs.crossplane.io/latest/concepts/compositions/

Describe your solution

No response

Would you be willing to implement this feature?

  • Yes, I could contribute this feature.

Leverage external utility(CC Trusted API) to ease the process of confidential environment evidence fetching/verifying

Use case

Constellation, working as a typical confidential cluster that can run on either a cloud environment or a local machine across platforms, needs to fetch measurements/evidence from different types of TEEs/TPMs to prove its trustworthiness. Once a new confidential computing environment gets supported in a CSP's environment, or a new technology is revealed to the market, Constellation must extend its current code base to enable evidence fetching or replaying for that platform.

Meanwhile, different platforms and confidential computing technologies vary in use, which requires the Constellation developers to have knowledge and understanding of different architectures. Maintaining this code is another burden for the project, as effort is required whenever the APIs or specifications of the underlying technologies change.

Describe your solution

Instead of maintaining the code within Constellation, it seems more efficient to leverage a utility which gives applications the capability to do evidence fetching or replaying using a set of simple APIs across all kinds of platforms.

CC Trusted API is a nice approach to streamline the effort that Constellation requires on this side. As a project that aims to collect confidential primitives (i.e., measurement, event log, quote) for zero-trust designs, it provides the capability to fulfill this need using vendor-agnostic and TCG-compliant APIs in multiple deployment environments (e.g., firmware/VM/cloud-native clusters).

By leveraging these APIs, Constellation can perform evidence fetching on different platforms through a unified API and requires little effort for maintaining code related to platform features.

Would you be willing to implement this feature?

  • Yes, I could contribute this feature.

Darwin arm64 constellation fails to run on Ventura 13.4

Issue description

The Darwin ARM build for v2.8.0 fails to run with the message:

“constellation” can’t be opened because Apple cannot check it for malicious software.
This software needs to be updated. Contact the developer for more information.
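
A common local workaround for this Gatekeeper message (a hedged sketch; it does not address the missing codesigning itself) is to clear the quarantine attribute that macOS sets on downloaded binaries:

xattr -d com.apple.quarantine ./constellation-darwin-arm64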

To reproduce

Steps to reproduce the behavior:

  1. Following installation instructions
  2. Run curl -LO https://github.com/edgelesssys/constellation/releases/latest/download/constellation-darwin-arm64
  3. The signature verification step fails (see below in additional info)
  4. Run the installation sudo install ./constellation-darwin-arm64 /usr/local/bin/constellation
  5. Run constellation

Environment

  • md5sum 8020bd3b379454336718c6f68030cc81 constellation-darwin-arm64
  • constellation version: not available
  • constellation-conf.yaml: not applicable
  • VM type used to run Constellation: not applicable
  • uname -a gives Darwin myhostname 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:52:24 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T6000 arm64

Expected behavior

Expected the release build to be properly codesigned

Additional info / screenshot

Cosign step failure:

cosign verify-blob --key https://edgeless.systems/es.pub --signature constellation-darwin-arm64.sig constellation-darwin-arm64

Error: searching log query: [POST /api/v1/log/entries/retrieve][400] searchLogQueryBadRequest  &{Code:400 Message:verifying signature: invalid signature when validating ASN.1 encoded signature}
main.go:74: error during command execution: searching log query: [POST /api/v1/log/entries/retrieve][400] searchLogQueryBadRequest  &{Code:400 Message:verifying signature: invalid signature when validating ASN.1 encoded signature}

[CLI] Introduce debug flag for more logging

Use case

In case of an error during create/init/terminate, as much information as possible is helpful. Currently, developers have to add debug prints or use a debugger to get more information about what went wrong.

Describe your solution

Add logging statements / prints that are only called if a dedicated debug flag is set. These logging statements should include information on what actions are performed on which objects. The more information, the better.

You can use this logger implementation as an architecture blueprint.

Additional context

[CLI] Activity indicator for init command

Use case

When interacting with cloud provider APIs there are actions that take multiple minutes. In those cases the user might become uncertain if the program is still working. Some kind of animation, like a spinner, would improve UX.

Describe your solution

Implement a small animation that shows that the program is still running, while waiting for the init response to come back.

Feel free to also implement this for other commands like create or terminate.

Additional context

`constellation upgrade apply` fails if `--workspace/-C` flag is set

Issue description

When using the workspace flag (--workspace or -C) to run constellation upgrade apply in a directory other than the current working directory, the command fails when applying Helm chart upgrades with the following error:

constellation upgrade apply -C <workspace>
# ...
Error: upgrading services: reading master secret: open <workspace-path>/constellation-mastersecret.json: no such file or directory

Workaround

Change into the workspace manually and run the command without setting the workspace flag:

pushd <workspace>
constellation upgrade apply
popd

Fix

Fixed on main after #2249 is merged.
Fixed in v2.11, or a patch release (if any) of v2.10

docs: AWS provides OVMF sources for SEV-SNP instances

Issue description

The documentation page https://docs.edgeless.systems/constellation/overview/clouds calls out that in AWS EC2, instance firmware is not reviewable. For SEV-SNP instances, this is incorrect. We publish the sources as well as reproducibly built binaries and programmatic (nix-based) build instructions at https://github.com/aws/uefi. Please change the table accordingly :).

As a side comment, you can use this binary in combination with https://github.com/virtee/sev-snp-measure to generate launch digests for SEV-SNP instances.

Support cluster upgrade on aws

Use case

According to the doc, cluster upgrade is not yet supported on AWS. This is a show-stopper for production usage.

Describe your solution

Please implement the cluster upgrade feature for clusters deployed on AWS.

Additional context

Nothing more to mention!

Add additional terraform variables for libvirt and qemu or allow editing of terraform before `constellation create`.

Use case

I wish to configure the Constellation Terraform to use my own libvirt storage pool and network, as constellation create doesn't run to completion with the defaults:

Error: error creating libvirt network: internal error: Network is already in use by interface virbr2

  with libvirt_network.constellation,
  on main.tf line 111, in resource "libvirt_network" "constellation":
 111: resource "libvirt_network" "constellation" {

If I change the Terraform to use the default libvirt network, there is a similar problem with the libvirt storage pool sharing the location of the default pool.

I would like to modify the terraform to just use the default network and storage pool, or provide it through configuration, but constellation create does not support that as far as I can tell.
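
For reference, the conflicting libvirt network can be inspected and, if it is a stale leftover, removed with virsh (a hedged sketch; the network name is whatever currently occupies virbr2 on your host):

virsh net-list --all
virsh net-destroy <conflicting-network>
virsh net-undefine <conflicting-network>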

Describe your solution

Allow editing the terraform with:
constellation create --only-terraform
which populates constellation-terraform and then
constellation create to apply the terraform.

OR

provide options to use an existing libvirt network and storage pool.

Would you be willing to implement this feature?

  • Yes, I could contribute this feature.

Automated Kubernetes version updates

Microservice upgrades via upgrade command

Constellation microservices should be upgradable in an existing cluster using the upgrade command.

Predecessor: #589

  • Move/Refactor bootstrapper helm client to internal
  • Implement Helm chart upgrade for CLI
  • Ensure CLI upgrade does not overwrite newer releases
  • Document helm upgrade cli cmd

Extend e2e test coverage

E2E coverage is currently lacking some important features.

  • Constellation verify
  • Loadbalancer deployment
  • Constellation recover ( #845 )
  • MiniConstellation
  • QEMU (#1490 )
  • Node Image Upgrade (#1469 )

spiffe/spire to proof: running on aws and in memory encrypted context

Use case

Originally this is about things running within Kubernetes, but I think it's worth sharing; maybe this idea can somehow be adapted for hardening Constellation:

We can now assert that two statements are true; our agent runs:

  • On an AWS EC2 machine
  • In a memory encrypted context

https://control-plane.io/posts/spiffe-confidential-computing-august-2023/

spiffe intros:
https://spiffe.io/
https://github.com/spiffe/spire
https://control-plane.io/posts/spiffe-keystone-of-cloud-native/

and the spiffe plugin:
RFC: SEV SNP Node Attestation Plugin
spiffe/spire#4469

Describe your solution

No response

Would you be willing to implement this feature?

  • Yes, I could contribute this feature.

Can't have worker nodes in miniconstellation cluster following tutorial (Failed to get IP in VPC)

Issue description

After running constellation mini up, I still don't have any worker nodes available. These are the logs I get:

kubectl logs -n kube-system daemonsets/join-service -f

{"level":"INFO","ts":"2023-12-25T15:10:49Z","caller":"cmd/main.go:57","msg":"Constellation Node Join Service","version":"v2.14.0","cloudProvider":"QEMU","attestationVariant":"qemu-vtpm"}
{"level":"INFO","ts":"2023-12-25T15:10:49Z","logger":"validator","caller":"watcher/validator.go:72","msg":"Updating expected measurements"}
{"level":"FATAL","ts":"2023-12-25T15:11:19Z","caller":"cmd/main.go:90","msg":"Failed to get IP in VPC","error":"Get \"http://10.42.0.1:8080/self\": context deadline exceeded"}

Then I checked the pods and found out that the join-service crash-looped:

kube-system   join-service-mkxdq  0/1     CrashLoopBackOff   22 (4m33s ago)   105m

So there are no worker nodes, only the control plane.

So I deleted the join-service pod and it restarted successfully; however, still no worker nodes joined.

And here is the list of events from kubectl get events -A:

NAMESPACE     LAST SEEN   TYPE      REASON             OBJECT                                 MESSAGE
kube-system   3m34s       Warning   FailedScheduling   pod/cilium-operator-7f8f557b9d-fqnl2   0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports..
kube-system   3m34s       Warning   FailedScheduling   pod/coredns-8956f444c-x26r2            0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules..
kube-system   39m         Normal    Pulled             pod/join-service-lxdzj                 Container image "ghcr.io/edgelesssys/constellation/join-service:v2.14.0@sha256:c5cb0644b6c0519d0db1fd1e0986083e84b16c7bc90812669a7dc89aeb89ba4c" already present on machine
kube-system   4m28s       Warning   BackOff            pod/join-service-lxdzj                 Back-off restarting failed container join-service in pod join-service-lxdzj_kube-system(bd1cd3a2-f203-4dca-9573-143cff075e51)

And here is the list of all pods:

NAMESPACE     NAME                                                           READY   STATUS             RESTARTS       AGE
kube-system   cert-manager-6dfc87675f-jp95c                                  1/1     Running            0              76m
kube-system   cert-manager-cainjector-79dd56cf68-874bb                       1/1     Running            0              76m
kube-system   cert-manager-webhook-7797df8bdb-cqfdd                          1/1     Running            0              76m
kube-system   cilium-operator-7f8f557b9d-fqnl2                               0/1     Pending            0              76m
kube-system   cilium-operator-7f8f557b9d-jtkdd                               1/1     Running            0              76m
kube-system   cilium-z8xzz                                                   1/1     Running            0              76m
kube-system   constellation-operator-controller-manager-85c66946c4-tbrbv     2/2     Running            0              70m
kube-system   coredns-8956f444c-5lwwf                                        1/1     Running            0              76m
kube-system   coredns-8956f444c-x26r2                                        0/1     Pending            0              76m
kube-system   etcd-control-plane-0                                           1/1     Running            0              76m
kube-system   join-service-lxdzj                                             0/1     CrashLoopBackOff   17 (69s ago)   76m
kube-system   key-service-5ntc8                                              1/1     Running            0              76m
kube-system   konnectivity-agent-qhbcb                                       1/1     Running            0              73m
kube-system   kube-apiserver-control-plane-0                                 1/1     Running            0              76m
kube-system   kube-controller-manager-control-plane-0                        1/1     Running            0              76m
kube-system   kube-scheduler-control-plane-0                                 1/1     Running            0              76m
kube-system   node-maintenance-operator-controller-manager-5b6dcf6d8-dn422   1/1     Running            0              70m
kube-system   verification-service-z4hp2                                     1/1     Running            0              76m

Two of them (cilium-operator and coredns) are pending; I don't know if that is relevant or not.

Intel VT-x is enabled in the BIOS and all prerequisites are met.

kubectl get nodes output:

NAME              STATUS   ROLES           AGE   VERSION
control-plane-0   Ready    control-plane   83m   v1.27.8

PS: same result with QEMU.
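
To narrow down whether the metadata endpoint that the join-service queries is reachable at all, a hedged check from inside the cluster (the address 10.42.0.1:8080/self is taken from the log above; the curl image is an assumption):

kubectl run -n kube-system curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -m 5 http://10.42.0.1:8080/self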

Steps to reproduce the behavior

constellation mini up in a new directory

Version

Version: v2.14.0 (Enterprise build; see documentation for license agreement)
GitCommit: facaa6a
GitTreeState: clean
BuildDate: 2023-12-19T07:37:24
GoVersion: go1.21.5
Compiler: bazel/gc
Platform: linux/amd64

Constellation Config

No response

Can't run "Constellation mini"

Hello everyone,
I've been trying to run MiniConstellation on Ubuntu 18.0 and I'm getting the error:
Error: creating cluster: fetching image reference: fetching image reference: Get "https://cdn.confidential.cloud/constellation/v1/ref/-/stream/stable/image/v2.3.0/info.json": dial tcp: lookup cdn.confidential.cloud on 127.0.0.53:53: dial udp 127.0.0.53:53: connect: invalid argument

  • I set my FORWARD policy to ACCEPT in iptables
  • I tried to turn off the firewall
  • There are 0 rules inside my ufw-reject-forward chain in iptables [even though the chain name is still in the table]
  • I restarted the firewall.
  • I rebooted my machine as a last resort.

However, I'm still having the problem and Constellation can't fetch the image from the remote server.
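
The error points at local DNS resolution (the dial to the systemd-resolved stub at 127.0.0.53 fails with "invalid argument") rather than at the firewall, so checking the resolver may help (a hedged sketch; dig requires the dnsutils package):

systemd-resolve --status
dig @127.0.0.53 cdn.confidential.cloud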

Note: Here is the content of my iptables:
-P INPUT DROP
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N ufw-after-forward
-N ufw-after-input
-N ufw-after-logging-forward
-N ufw-after-logging-input
-N ufw-after-logging-output
-N ufw-after-output
-N ufw-before-forward
-N ufw-before-input
-N ufw-before-logging-forward
-N ufw-before-logging-input
-N ufw-before-logging-output
-N ufw-before-output
-N ufw-logging-allow
-N ufw-logging-deny
-N ufw-not-local
-N ufw-reject-forward
-N ufw-reject-input
-N ufw-reject-output
-N ufw-skip-to-policy-forward
-N ufw-skip-to-policy-input
-N ufw-skip-to-policy-output
-N ufw-track-forward
-N ufw-track-input
-N ufw-track-output
-N ufw-user-forward
-N ufw-user-input
-N ufw-user-limit
-N ufw-user-limit-accept
-N ufw-user-logging-forward
-N ufw-user-logging-input
-N ufw-user-logging-output
-N ufw-user-output
-A INPUT -j ufw-before-logging-input
-A INPUT -j ufw-before-input
-A INPUT -j ufw-after-input
-A INPUT -j ufw-after-logging-input
-A INPUT -j ufw-reject-input
-A INPUT -j ufw-track-input
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-684007d48d62 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-684007d48d62 -j DOCKER
-A FORWARD -i br-684007d48d62 ! -o br-684007d48d62 -j ACCEPT
-A FORWARD -i br-684007d48d62 -o br-684007d48d62 -j ACCEPT
-A FORWARD -o br-1434c250795f -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-1434c250795f -j DOCKER
-A FORWARD -i br-1434c250795f ! -o br-1434c250795f -j ACCEPT
-A FORWARD -i br-1434c250795f -o br-1434c250795f -j ACCEPT
-A FORWARD -o br-0a34a52fe39c -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-0a34a52fe39c -j DOCKER
-A FORWARD -i br-0a34a52fe39c ! -o br-0a34a52fe39c -j ACCEPT
-A FORWARD -i br-0a34a52fe39c -o br-0a34a52fe39c -j ACCEPT
-A FORWARD -o br-f45042a12d9f -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-f45042a12d9f -j DOCKER
-A FORWARD -i br-f45042a12d9f ! -o br-f45042a12d9f -j ACCEPT
-A FORWARD -i br-f45042a12d9f -o br-f45042a12d9f -j ACCEPT
-A FORWARD -o br-788a928d0b87 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-788a928d0b87 -j DOCKER
-A FORWARD -i br-788a928d0b87 ! -o br-788a928d0b87 -j ACCEPT
-A FORWARD -i br-788a928d0b87 -o br-788a928d0b87 -j ACCEPT
-A FORWARD -o br-77ec4d63a6d7 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-77ec4d63a6d7 -j DOCKER
-A FORWARD -i br-77ec4d63a6d7 ! -o br-77ec4d63a6d7 -j ACCEPT
-A FORWARD -i br-77ec4d63a6d7 -o br-77ec4d63a6d7 -j ACCEPT
-A FORWARD -j ufw-before-logging-forward
-A FORWARD -j ufw-before-forward
-A FORWARD -j ufw-after-forward
-A FORWARD -j ufw-after-logging-forward
-A FORWARD -j ufw-track-forward
-A OUTPUT -j ufw-before-logging-output
-A OUTPUT -j ufw-before-output
-A OUTPUT -j ufw-after-output
-A OUTPUT -j ufw-after-logging-output
-A OUTPUT -j ufw-reject-output
-A OUTPUT -j ufw-track-output
-A DOCKER -d 172.24.24.3/32 ! -i br-0a34a52fe39c -o br-0a34a52fe39c -p tcp -m tcp --dport 2000 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-684007d48d62 ! -o br-684007d48d62 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-1434c250795f ! -o br-1434c250795f -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-0a34a52fe39c ! -o br-0a34a52fe39c -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-f45042a12d9f ! -o br-f45042a12d9f -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-788a928d0b87 ! -o br-788a928d0b87 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-77ec4d63a6d7 ! -o br-77ec4d63a6d7 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-684007d48d62 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-1434c250795f -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-0a34a52fe39c -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-f45042a12d9f -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-788a928d0b87 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-77ec4d63a6d7 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A ufw-after-input -p udp -m udp --dport 137 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 138 -j ufw-skip-to-policy-input
-A ufw-after-input -p tcp -m tcp --dport 139 -j ufw-skip-to-policy-input
-A ufw-after-input -p tcp -m tcp --dport 445 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 67 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 68 -j ufw-skip-to-policy-input
-A ufw-after-input -m addrtype --dst-type BROADCAST -j ufw-skip-to-policy-input
-A ufw-after-logging-input -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW BLOCK] "
-A ufw-before-forward -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A ufw-before-forward -j ufw-user-forward
-A ufw-before-input -i lo -j ACCEPT
-A ufw-before-input -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-input -m conntrack --ctstate INVALID -j ufw-logging-deny
-A ufw-before-input -m conntrack --ctstate INVALID -j DROP
-A ufw-before-input -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A ufw-before-input -p udp -m udp --sport 67 --dport 68 -j ACCEPT
-A ufw-before-input -j ufw-not-local
-A ufw-before-input -d 224.0.0.251/32 -p udp -m udp --dport 5353 -j ACCEPT
-A ufw-before-input -d 239.255.255.250/32 -p udp -m udp --dport 1900 -j ACCEPT
-A ufw-before-input -j ufw-user-input
-A ufw-before-output -o lo -j ACCEPT
-A ufw-before-output -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-output -j ufw-user-output
-A ufw-logging-allow -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW ALLOW] "
-A ufw-logging-deny -m conntrack --ctstate INVALID -m limit --limit 3/min --limit-burst 10 -j RETURN
-A ufw-logging-deny -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW BLOCK] "
-A ufw-not-local -m addrtype --dst-type LOCAL -j RETURN
-A ufw-not-local -m addrtype --dst-type MULTICAST -j RETURN
-A ufw-not-local -m addrtype --dst-type BROADCAST -j RETURN
-A ufw-not-local -m limit --limit 3/min --limit-burst 10 -j ufw-logging-deny
-A ufw-not-local -j DROP
-A ufw-skip-to-policy-forward -j ACCEPT
-A ufw-skip-to-policy-input -j DROP
-A ufw-skip-to-policy-output -j ACCEPT
-A ufw-track-forward -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-forward -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-output -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-output -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-user-input -p tcp -m tcp --dport 22 -j ACCEPT
-A ufw-user-input -p tcp -m tcp --dport 9000 -j ACCEPT
-A ufw-user-input -p tcp -m tcp --dport 8889 -j ACCEPT
-A ufw-user-input -p tcp -m tcp --dport 22 -j ACCEPT
-A ufw-user-input -p udp -m udp --dport 22 -j ACCEPT
-A ufw-user-limit -m limit --limit 3/min -j LOG --log-prefix "[UFW LIMIT BLOCK] "
-A ufw-user-limit -j REJECT --reject-with icmp-port-unreachable
-A ufw-user-limit-accept -j ACCEPT

I would appreciate your help with this!
Thanks a lot.

Constellation cannot terminate IP resource on AWS

Issue description

When terminating a Constellation on AWS, constellation terminate outputs an error and does not fully clean up the Constellation. Only when running constellation terminate a second time are all resources cleaned up.

To reproduce

Steps to reproduce the behavior:

  1. Create a Constellation on AWS
  2. Run constellation terminate
 euler@work:~/projects/constellation/test$ ./constellation terminate
You are about to terminate a Constellation cluster.
All of its associated resources will be DESTROYED.
This action is irreversible and ALL DATA WILL BE LOST.
Do you want to continue? [y/n]: y
Terminating
Error: terminating Constellation cluster: exit status 1
Error: deleting EC2 EIP (eipalloc-005f6369d2d588052): disassociating: AuthFailure: You do not have permission to access the specified resource.
    status code: 400, request id: fdc85569-0a89-4900-9631-bec0685479bc 

Environment

  • constellation version: 2.8.0

Expected behavior

The Constellation should be fully terminated without error.

Additional info / screenshot

To fully terminate your Constellation, simply run constellation terminate again.
We are currently investigating this issue.
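
Until the fix lands, the cleanup looks like this (a hedged sketch; the aws CLI check is optional and assumes configured credentials):

constellation terminate || true   # first run may fail with the EIP disassociation error
constellation terminate           # second run cleans up the remaining resources
aws ec2 describe-addresses        # verify no Constellation EIPs remain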

Upgrading via terraform provider fails if microservice version was unset on installation

Issue description

When installing Constellation using the new Terraform provider <2.15.0, it was possible to leave constellation_microservice_version unset, with a default value matching the provider version being used instead. This was changed in #2791. When attempting to upgrade to v2.15.0, the Terraform provider fails with the error message Parsing microservice version: invalid semver: v, because it is trying to parse the null value from the TF state, auto-prefixed with v by internal/semver.New.

Steps to reproduce the behavior

  1. Existing cluster with unset constellation_microservice_version in TF state.
  2. Upgrade to 2.15.0
  3. Will fail with Parsing microservice version: invalid semver: v
  4. Manually insert the current microservice version into the TF state (see the sketch after this list)
  5. Upgrade will succeed
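
A hedged sketch of step 4 using Terraform's state commands (the attribute name and version value are assumptions; adjust both to your deployment before pushing):

terraform state pull > state.json
# edit state.json: set the cluster resource's microservice version attribute,
# e.g. "constellation_microservice_version": "v2.14.0"; you may need to
# increment the top-level "serial" field for the push to be accepted
terraform state push state.json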

Version

v2.15.0

Constellation Config

No response

Support confidential storage on aws

Use case

According to the doc, confidential storage is not yet supported on AWS. This is a show-stopper for production usage.

Describe your solution

Please implement the confidential storage feature for clusters deployed on AWS.

Additional context

Nothing more to mention!

local installation cannot find the qemu image to download

Issue description

The needed qemu image cannot be downloaded.

To reproduce

Steps to reproduce the behavior:

  1. use the local guide
  2. Start the installation
    constellation mini up 
    Downloading image to ./constellation.raw
    Error: preparing config: downloading image to ./constellation.raw: downloading image: 404 Not Found

Environment

  • constellation version:
    constellation version
    Version:        2.2.1
    GitCommit:      15b612b4cbf6c92a889a55b995de56947ff321a9
    GitTreeState:   clean
    BuildDate:      2022-11-14T16:32:39Z
    GoVersion:      go1.19.3
    Compiler:       gc
    Platform:       linux/amd64

Expected behavior

the local installation should be usable

Additional info / screenshot
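
Since the 404 suggests that the image path for this older CLI version is no longer served, a hedged first check is to retry with the latest CLI release (URL pattern as in the installation instructions):

curl -LO https://github.com/edgelesssys/constellation/releases/latest/download/constellation-linux-amd64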

constellation create fails in local setup

Issue description

I am following the first steps (local) on an AMD EPYC 7313 with an SNP-enabled kernel, but constellation create fails.

I followed the steps for qemu.

To reproduce

Steps to reproduce the behavior:

  1. Follow the instructions

Environment

  • constellation version: v2.11
  • constellation-conf.yaml: created from constellation config generate qemu
  • VM type used to run Constellation: qemu

Expected behavior

As shown in the tutorial page.

Additional info / screenshot

The terminal prints:

Error: couldn't retrieve IP address of domain id: 6be4bc53-c51c-4313-8c5e-2b68e19207a2. Please check following:
1) is the domain running proplerly?
2) has the network interface an IP address?
3) Networking issues on your libvirt setup?
 4) is DHCP enabled on this Domain's network?
5) if you use bridge network, the domain should have the pkg qemu-agent installed
IMPORTANT: This error is not a terraform libvirt-provider error, but an error caused by your KVM/libvirt infrastructure configuration/setup
 timeout while waiting for state to become 'all-addresses-obtained' (last state: 'waiting-addresses', timeout: 5m0s)

  with module.node_group["worker_default"].libvirt_domain.instance_group[0],
  on modules/instance_group/main.tf line 13, in resource "libvirt_domain" "instance_group":
  13: resource "libvirt_domain" "instance_group" {

The file /var/log/libvirt/libvirtd.log in the Docker container constell-libvirt shows:

2023-10-04 11:28:08.189+0000: 41: info : libvirt version: 8.10.0, package: 2.fc38 (Fedora Project, 2023-01-03-08:31:39, )
2023-10-04 11:28:08.189+0000: 41: info : hostname: jax
2023-10-04 11:28:08.189+0000: 41: error : virGDBusGetSystemBus:99 : internal error: Unable to get system bus connection: Could not connect: No such file or directory
2023-10-04 11:28:08.189+0000: 41: warning : networkStateInitialize:658 : DBus not available, disabling firewalld support in bridge_network_driver: internal error: Unable to get system bus connection: Could not connect: No such file or directory
2023-10-04 11:28:08.384+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on 'usb3'
2023-10-04 11:28:08.384+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on 'usb4'
2023-10-04 11:28:08.387+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on 'usb5'
2023-10-04 11:28:08.388+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on '5-1'
2023-10-04 11:28:08.388+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on '5-1.3'
2023-10-04 11:28:08.388+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on '5-1.4'
2023-10-04 11:28:08.389+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on '5-2'
2023-10-04 11:28:08.389+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on '5-2.1'
2023-10-04 11:28:08.390+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on '5-2.2'
2023-10-04 11:28:08.391+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on 'usb6'
2023-10-04 11:28:08.397+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on 'usb1'
2023-10-04 11:28:08.397+0000: 49: error : udevGetUintProperty:214 : internal error: Missing udev property 'ID_VENDOR_ID' on 'usb2'
2023-10-04 11:28:28.224+0000: 26: error : virCgroupDetectControllers:451 : At least one cgroup controller is required: No such device or address
2023-10-04 11:28:28.390+0000: 27: error : virCgroupDetectControllers:451 : At least one cgroup controller is required: No such device or address
2023-10-04 11:28:28.526+0000: 29: error : virCgroupDetectControllers:451 : At least one cgroup controller is required: No such device or address
2023-10-04 11:28:28.529+0000: 30: error : virCgroupDetectControllers:451 : At least one cgroup controller is required: No such device or address
2023-10-04 11:33:31.032+0000: 25: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error
2023-10-04 11:33:31.519+0000: 25: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error
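
To check whether the domain ever obtained a DHCP lease, a hedged diagnostic against the libvirt instance in that container (the container name is taken from the log above; the network name constellation is an assumption based on the MiniConstellation defaults):

docker exec -it constell-libvirt virsh list --all
docker exec -it constell-libvirt virsh net-dhcp-leases constellation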

AWS: `constellation terminate` fails with `DependencyViolation: The vpc '...' has dependencies and cannot be deleted.` when deploying LB service

Issue description

When a K8s service of type LoadBalancer is deployed, the cluster cannot terminate cleanly.
When running constellation terminate, termination fails with an error like:

Error: deleting EC2 VPC (vpc-...): DependencyViolation: The vpc 'vpc-...' has dependencies and cannot be deleted.
	status code: 400, request id: ...

This is due to buggy cleanup code in the currently used AWS Operator. A fix is in progress and will be part of an upcoming release.

Workaround

Delete the load balancer of each service and their security groups manually before calling constellation terminate. You might also delete the VPC manually instead of the security groups. If you already ran terminate, you may delete the Elastic IP resources that are attached to your Constellation cluster. You can correlate the IPs by either looking at the public IP contained in your constellation-id.yml or by looking at the Elastic IP resources connected to your offending VPC. A sketch of this cleanup with the AWS CLI follows below.
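
A hedged sketch of the manual cleanup with the AWS CLI (assuming the load balancers are v2 resources; all IDs are placeholders you must look up first):

aws elbv2 describe-load-balancers
aws elbv2 delete-load-balancer --load-balancer-arn <lb-arn>
aws ec2 describe-security-groups --filters Name=vpc-id,Values=<vpc-id>
aws ec2 delete-security-group --group-id <sg-id>
constellation terminate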

Expected behavior

constellation terminate finishes without errors.

Some questions about the description "When a Constellation node image boots inside a CVM, ..."

Issue description

Some questions about the description "When a Constellation node image boots inside a CVM, ..." on https://docs.edgeless.systems/constellation/next/architecture/attestation#node-attestation:

1. How does one boot a Constellation node image in a CVM for Constellation?

2. Is the launched Constellation node image treated as a Constellation node, or is the CVM treated as a Constellation node?

3. Does Constellation run multiple Kubernetes pods in the Constellation node?

Steps to reproduce the behavior

No response

Version

No response

Constellation Config

No response
