
guest-components's Introduction


Confidential Containers


Welcome to confidential-containers

Confidential Containers is an open source community working to leverage Trusted Execution Environments to protect containers and data and to deliver cloud native confidential computing.

We have a new release every 6 weeks! See Release Notes or Quickstart Guide

Our key considerations are:

  • Allow cloud native application owners to enforce application security requirements
  • Transparent deployment of unmodified containers
  • Support for multiple TEE and hardware platforms
  • A trust model which separates Cloud Service Providers (CSPs) from guest applications
  • Least-privilege principles for Kubernetes cluster administration capabilities that impact delivering Confidential Computing for guest applications or data inside the TEE

Get started quickly...

Further Detail


Contribute...

License


guest-components's People

Contributors

1570005763, arronwy, baoshunfang, bbolroc, chengyuzhu6, dcmiddle, dependabot[bot], fidencio, fitzthum, haokunx-intel, huoqifeng, jakob-naucke, jialez0, jiangliu, jodh-intel, katexochen, lindayu17, lumjjb, mattarnoatibm, mkulke, mythi, portersrc, pravinrajr9, sameo, stevenhorsman, surajssd, vbatts, wainersm, wobito, xynnn007


guest-components's Issues

Missing basic CI

The attestation-agent repo is missing basic build and unit-test CI.
This is typically done through GitHub Actions.
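
As a starting point, a minimal GitHub Actions workflow along these lines could cover build and unit tests (the file name and steps below are a hypothetical sketch, not an existing workflow in the repo):

```yaml
# .github/workflows/ci.yml (hypothetical)
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ubuntu-latest runners ship with a stable Rust toolchain preinstalled
      - run: cargo build --all-features
      - run: cargo test --all-features
```

From there, lints (cargo fmt --check, cargo clippy) and a cross-compilation matrix could be layered on as separate jobs.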

[RFC] ocicrypt rustify plan

As discussed in the last ocicrypt rustify architecture review meeting, @lumjjb shared the following suggested implementation plan:


To accelerate the process and let different teams cooperate in parallel, I have separated the work by module into separate steps:

Step1: Implement base struct and create config API

  • spec
  • config
  • helpers

Step2: Implement keywrapper for asymmetric key and utils for key parse and verify

  • JWE
  • PGP
  • PKCS7
  • PKCS11
  • utils

Step3: Implement blockcipher and keyprovider for symmetric key encryption/decryption and retrieval

  • blockcipher
  • keyprovider

Step4: Implement EncryptLayer/DecryptLayer API and integration tests

  • EncryptLayer/DecryptLayer API
  • integration tests
  • CI/CD

Step5: Integration with Kata Agent

According to the Confidential Containers V0 Plan: https://docs.google.com/spreadsheets/d/1M_MijAutym4hMg8KtIye1jIDAUMUWsFCri9nq4dqGvA/edit#gid=0 , we will support image offload in the guest. The initial implementation will use skopeo + ocicrypt + umoci to download/decrypt/unpack. The next step can be to rustify that process, using a Rust OCI crate to download the container image, ocicrypt-rs to decrypt, and umoci to unpack. Finally, we may implement our own OCI registry client and image service in the kata-agent.

[Security Risk] Enabling the security validation mechanism

At present, the security validation function is enabled through the Config of image-rs, but we must consider a risk in actual use: a malicious boot image provider could disable the security validation function by providing a wrong image-rs configuration.

Therefore, do we need to consider forcibly enabling security validation in the code of image-rs?
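
One hedged sketch of what "forcibly enabling" could look like (the type and field names below are illustrative, not the actual image-rs Config): let the compiled-in policy override any untrusted configuration that tries to switch validation off.

```rust
// Illustrative only: not the real image-rs Config type.
#[derive(Debug)]
pub struct SecurityConfig {
    security_validate: bool,
}

impl SecurityConfig {
    /// Build from an untrusted, user-supplied flag. The flag can only
    /// tighten security, never loosen it below the compiled-in policy.
    pub fn from_untrusted(requested: bool) -> Self {
        // Compiled-in policy: a malicious boot image cannot turn
        // validation off by shipping a permissive configuration.
        const FORCE_VALIDATION: bool = true;
        SecurityConfig {
            security_validate: if FORCE_VALIDATION { true } else { requested },
        }
    }

    pub fn security_validate(&self) -> bool {
        self.security_validate
    }
}

fn main() {
    // A malicious boot image asks to disable validation...
    let cfg = SecurityConfig::from_untrusted(false);
    // ...but the compiled-in policy keeps it on.
    println!("validation enabled: {}", cfg.security_validate());
}
```

An explicit opt-out (e.g. a Cargo feature) could still be offered for development builds, so production binaries never honor an insecure configuration.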

Cut a 0.0.1 release of ocicrypt-rs

ocicrypt-rs is currently being used as part of image-rs. However, we're importing main there, which is very error-prone and not suitable for the moment when we start working on reproducible builds (a hard requirement for Confidential Containers).

With this in mind, please, let's cut a 0.0.1 release with a commit that's known to be working, and we can use that as part of image-rs.

Image-rs link: https://github.com/confidential-containers/image-rs/blob/00ea90d5151025b02f84350d2b31394636cb4ede/Cargo.toml#L19

ocicrypt fails to compile on ARM64

root@a2c83097673b:/ocicrypt# uname -a
Linux a2c83097673b 5.10.76-linuxkit #182 SMP PREEMPT Mon Nov 8 11:22:26 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

root@a2c83097673b:/ocicrypt# cargo build
Compiling prost-build v0.10.1
error: failed to run custom build command for prost-build v0.10.1

Caused by:
process didn't exit successfully: /ocicrypt/target/debug/build/prost-build-9cd0a0daaa3cb29e/build-script-build (exit status: 101)
--- stdout
cargo:rerun-if-changed=/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/cmake
CMAKE_TOOLCHAIN_FILE_aarch64-unknown-linux-gnu = None
CMAKE_TOOLCHAIN_FILE_aarch64_unknown_linux_gnu = None
HOST_CMAKE_TOOLCHAIN_FILE = None
CMAKE_TOOLCHAIN_FILE = None
CMAKE_GENERATOR_aarch64-unknown-linux-gnu = None
CMAKE_GENERATOR_aarch64_unknown_linux_gnu = None
HOST_CMAKE_GENERATOR = None
CMAKE_GENERATOR = None
CMAKE_PREFIX_PATH_aarch64-unknown-linux-gnu = None
CMAKE_PREFIX_PATH_aarch64_unknown_linux_gnu = None
HOST_CMAKE_PREFIX_PATH = None
CMAKE_PREFIX_PATH = None
CMAKE_aarch64-unknown-linux-gnu = None
CMAKE_aarch64_unknown_linux_gnu = None
HOST_CMAKE = None
CMAKE = None
running: "cmake" "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/cmake" "-DCMAKE_INSTALL_PREFIX=/ocicrypt/target/debug/build/prost-build-e69ac5d5b6d3aece/out" "-DCMAKE_C_FLAGS= -ffunction-sections -fdata-sections -fPIC" "-DCMAKE_C_COMPILER=/usr/bin/cc" "-DCMAKE_CXX_FLAGS= -ffunction-sections -fdata-sections -fPIC" "-DCMAKE_CXX_COMPILER=/usr/bin/c++" "-DCMAKE_ASM_FLAGS= -ffunction-sections -fdata-sections -fPIC" "-DCMAKE_ASM_COMPILER=/usr/bin/cc" "-DCMAKE_BUILD_TYPE=Debug"

-- 3.19.4.0
-- Configuring done
-- Generating done
-- Build files have been written to: /ocicrypt/target/debug/build/prost-build-e69ac5d5b6d3aece/out/build
running: "cmake" "--build" "." "--target" "install" "--config" "Debug" "--parallel" "4"
[ 0%] Built target gmock
[ 8%] Built target libprotobuf-lite
[ 28%] Built target libprotobuf
[ 29%] Built target gmock_main
[ 50%] Built target libprotoc
[ 50%] Built target protoc
[ 51%] Built target test_plugin
[ 57%] Built target lite-arena-test
[ 57%] Built target lite-test
[ 57%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/field_comparator_test.cc.o
[ 57%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc.o
[ 57%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/internal/default_value_objectwriter_test.cc.o
[ 57%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/field_mask_util_test.cc.o
[ 58%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/internal/json_objectwriter_test.cc.o
[ 58%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/internal/json_stream_parser_test.cc.o
[ 58%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/internal/protostream_objectsource_test.cc.o
[ 58%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/internal/protostream_objectwriter_test.cc.o
[ 59%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/internal/type_info_test_helper.cc.o
[ 59%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/json_util_test.cc.o
[ 59%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/message_differencer_unittest.cc.o
[ 59%] Building CXX object CMakeFiles/tests.dir/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/util/time_util_test.cc.o

--- stderr
make: warning: -j4 forced in submake: resetting jobserver mode.
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:46:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.inc: In member function 'virtual void google::protobuf::internal::{anonymous}::MapFieldReflectionTest_RegularFields_Test::TestBody()':
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.inc:1280:69: warning: 'const google::protobuf::RepeatedPtrField& google::protobuf::Reflection::GetRepeatedPtrField(const google::protobuf::Message&, const google::protobuf::FieldDescriptor*) const [with T = google::protobuf::Message]' is deprecated: Please use GetRepeatedFieldRef() instead [-Wdeprecated-declarations]
1280 | refl->GetRepeatedPtrField(message, fd_map_int32_int32);
| ^
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_proto2_unittest.pb.h:30,
from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:31:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/message.h:867:30: note: declared here
867 | const RepeatedPtrField& GetRepeatedPtrField(
| ^~~~~~~~~~~~~~~~~~~
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:46:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.inc:1282:70: warning: 'const google::protobuf::RepeatedPtrField& google::protobuf::Reflection::GetRepeatedPtrField(const google::protobuf::Message&, const google::protobuf::FieldDescriptor*) const [with T = google::protobuf::Message]' is deprecated: Please use GetRepeatedFieldRef() instead [-Wdeprecated-declarations]
1282 | refl->GetRepeatedPtrField(message, fd_map_int32_double);
| ^
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_proto2_unittest.pb.h:30,
from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:31:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/message.h:867:30: note: declared here
867 | const RepeatedPtrField& GetRepeatedPtrField(
| ^~~~~~~~~~~~~~~~~~~
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:46:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.inc:1284:71: warning: 'const google::protobuf::RepeatedPtrField& google::protobuf::Reflection::GetRepeatedPtrField(const google::protobuf::Message&, const google::protobuf::FieldDescriptor*) const [with T = google::protobuf::Message]' is deprecated: Please use GetRepeatedFieldRef() instead [-Wdeprecated-declarations]
1284 | refl->GetRepeatedPtrField(message, fd_map_string_string);
| ^
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_proto2_unittest.pb.h:30,
from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:31:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/message.h:867:30: note: declared here
867 | const RepeatedPtrField& GetRepeatedPtrField(
| ^~~~~~~~~~~~~~~~~~~
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:46:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.inc:1286:79: warning: 'const google::protobuf::RepeatedPtrField& google::protobuf::Reflection::GetRepeatedPtrField(const google::protobuf::Message&, const google::protobuf::FieldDescriptor*) const [with T = google::protobuf::Message]' is deprecated: Please use GetRepeatedFieldRef() instead [-Wdeprecated-declarations]
1286 | refl->GetRepeatedPtrField(message, fd_map_int32_foreign_message);
| ^
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_proto2_unittest.pb.h:30,
from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:31:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/message.h:867:30: note: declared here
867 | const RepeatedPtrField& GetRepeatedPtrField(
| ^~~~~~~~~~~~~~~~~~~
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:46:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.inc:1290:74: warning: 'google::protobuf::RepeatedPtrField* google::protobuf::Reflection::MutableRepeatedPtrField(google::protobuf::Message*, const google::protobuf::FieldDescriptor*) const [with T = google::protobuf::Message]' is deprecated: Please use GetMutableRepeatedFieldRef() instead [-Wdeprecated-declarations]
1290 | refl->MutableRepeatedPtrField(&message, fd_map_int32_int32);
| ^
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_proto2_unittest.pb.h:30,
from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:31:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/message.h:878:24: note: declared here
878 | RepeatedPtrField* MutableRepeatedPtrField(Message* msg,
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.cc:46:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.1/third-party/protobuf/src/google/protobuf/map_test.inc:1292:75: warning: 'google::protobuf::RepeatedPtrField* google::protobuf::Reflection::MutableRepeatedPtrField(google::protobuf::Message*, const google::protobuf::FieldDescriptor*) const [with T = google::protobuf::Message]' is deprecated: Please use GetMutableRepeatedFieldRef() instead [-Wdeprecated-declarations]

2141 | inline ::protobuf_unittest::TestField* TestDiffMessage::add_rm() {
| ^~~~~~~~~~~~~~~
make[1]: *** [CMakeFiles/Makefile2:210: CMakeFiles/tests.dir/all] Error 2
make: *** [Makefile:130: all] Error 2
thread 'main' panicked at '
command did not execute successfully, got: exit status: 2

build script failed, must exit now', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/cmake-0.1.48/src/lib.rs:975:5
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

[RFC] CC-KBC proposal

We need a KBC that matches the Confidential Containers KBS and the Attestation-Service. This KBC will be maintained as the standard KBC of the Confidential Containers community.

Requirements

  1. Comply with the existing KBC code framework of Attestation-Agent.
  2. Implement the current Attestation-Agent API set, including requesting a key or decrypting content from the KBS and requesting downloads of confidential resources from the KBS, connecting and communicating with the KBS using the attestation protocol it defines.
  3. Support multi-platform HW-TEE evidence acquisition.
  4. Support the evidence message format defined in the Attestation-Service, and be able to obtain or generate the content of each field therein.

KBS attestation protocol

The protocol CC-KBC uses to talk to the KBS follows the requirements and descriptions in the KBS attestation protocol document: it calls the KBS's RESTful API over HTTPS and executes the "Request-Challenge-Attestation-Response" process defined in that document within the HTTPS data payload.

Please refer to the KBS attestation protocol document for the specific message formats and more detailed descriptions of this part.

Evidence message

According to the requirements of the "Request-Challenge-Attestation-Response" process defined in the KBS attestation protocol document, CC-KBC needs to collect the platform attestation evidence after receiving the Challenge and send it to the KBS, which forwards it to the Attestation-Service for verification. The payload message format is as follows:

{
    "evidence": {
        "nonce": "",
        "tee": "",
        "tee-pubkey": {
            "algorithm": "",
            "pubkey-length": "",
            "pubkey": ""
        },
        "tee-evidence": {}
    }
}

The tee-evidence field in the message is specific to the HW-TEE platform and contains the Quote signed by the HW-TEE plus some special evidence materials. The tee-evidence content and format for each HW-TEE platform will be defined by the Attestation-Service.
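
The payload above could be assembled as sketched below. In real code serde_json would likely be used; the manual string assembly here only keeps the sketch dependency-free, and all type names are assumptions.

```rust
// Sketch of assembling the evidence payload described above.
struct TeePubKey {
    algorithm: String,
    pubkey_length: String,
    pubkey: String,
}

struct Evidence {
    nonce: String,
    tee: String,
    tee_pubkey: TeePubKey,
    tee_evidence: String, // platform-specific JSON object, opaque here
}

impl Evidence {
    fn to_json(&self) -> String {
        format!(
            concat!(
                "{{\"evidence\":{{\"nonce\":\"{}\",\"tee\":\"{}\",",
                "\"tee-pubkey\":{{\"algorithm\":\"{}\",",
                "\"pubkey-length\":\"{}\",\"pubkey\":\"{}\"}},",
                "\"tee-evidence\":{}}}}}"
            ),
            self.nonce,
            self.tee,
            self.tee_pubkey.algorithm,
            self.tee_pubkey.pubkey_length,
            self.tee_pubkey.pubkey,
            self.tee_evidence,
        )
    }
}

fn main() {
    let e = Evidence {
        nonce: "abc".into(),
        tee: "tdx".into(),
        tee_pubkey: TeePubKey {
            algorithm: "RSA".into(),
            pubkey_length: "2048".into(),
            pubkey: "p".into(),
        },
        tee_evidence: "{}".into(), // filled in by the platform attester
    };
    println!("{}", e.to_json());
}
```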

HW-TEE evidence

CC-KBC cannot be platform specific; it should be able to support multiple HW-TEE platforms. This requires CC-KBC to collect the Quote signed by the HW-TEE and some special evidence materials (hereinafter referred to as attestation evidence) on different HW-TEE hardware platforms. At present, we have two options to support multi-platform evidence acquisition:

Option 1

CC-KBC develops driver modules that support different HW-TEE evidence acquisition, and defines a unified trait to call:

pub trait Attester {
    fn dump_tee_evidence(custom_data: Vec<u8>) -> Result<Box<dyn TeeEvidence>>;
}

Here the parameter custom_data is additional data that needs to be embedded in the quote and protected by the HW-TEE signature. It is usually the hash of the nonce and TEE public key from the challenge: Hash(nonce || pubkey). In the future, more information may need HW-TEE signature protection, so it can be extended in the form Hash(nonce || pubkey || other_info). For the meaning and use of the nonce and TEE public key, please refer to the KBS attestation protocol document.

The return value is of type Box<dyn TeeEvidence>. As mentioned above, different HW-TEE will have different tee-evidence structures defined by Attestation-Service. Therefore, the trait object is used as the return value here. This return value will be used as the content of the tee-evidence field in the Evidence message sent to KBS.
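
To make the trait's shape concrete, here is a dummy "sample" implementation. Everything beyond the quoted trait (SampleAttester, SampleEvidence, the concrete error type) is made up for illustration; a real attester would embed Hash(nonce || pubkey) in a HW-signed quote rather than merely hex-encoding it.

```rust
use std::error::Error;

// Evidence is returned as a trait object because each HW-TEE has its
// own tee-evidence structure, defined by the Attestation-Service.
pub trait TeeEvidence {
    fn to_json(&self) -> String;
}

pub trait Attester {
    fn dump_tee_evidence(
        custom_data: Vec<u8>,
    ) -> Result<Box<dyn TeeEvidence>, Box<dyn Error>>;
}

struct SampleEvidence {
    custom_data_hex: String,
}

impl TeeEvidence for SampleEvidence {
    fn to_json(&self) -> String {
        format!("{{\"sample-quote\":\"{}\"}}", self.custom_data_hex)
    }
}

struct SampleAttester;

impl Attester for SampleAttester {
    fn dump_tee_evidence(
        custom_data: Vec<u8>,
    ) -> Result<Box<dyn TeeEvidence>, Box<dyn Error>> {
        // Stand-in for a real quote: just hex-encode the custom data.
        let hex: String = custom_data.iter().map(|b| format!("{:02x}", b)).collect();
        Ok(Box::new(SampleEvidence { custom_data_hex: hex }))
    }
}

fn main() -> Result<(), Box<dyn Error>> {
    let evidence = SampleAttester::dump_tee_evidence(vec![0xde, 0xad])?;
    println!("{}", evidence.to_json());
    Ok(())
}
```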

Option 2

CC-KBC calls the API provided by the librats tool, directly obtains the quote of the corresponding HW-TEE platform, and then packages it in the KBS protocol's Evidence message format.

@YangLiang3 may give more details about the librats tool.

The advantage of this scheme is that CC-KBC no longer needs to implement HW-TEE-specific code, because librats itself supports multiple HW-TEE platforms, so CC-KBC can focus solely on the general functional application level. However, we may need to install the .so dependencies of librats in AA's runtime environment in advance.

[Signature] Add more error information when reading files fails

At present, in the signature module, when reading various files (such as the sigstore configuration file, signature file, etc.) fails, the caller cannot tell which file has the problem. We should add informative error returns to all file-reading operations.

[RFC] AA support for signed image verification

As @stevenhorsman mentioned in issue #2682 of kata-containers, the function of verifying the container image signature will be added in the PoC of ccv0. The proposal also mentions that, in the long-term approach, the attestation-agent (AA for short) will cooperate in verifying the container image signature based on attestation. This proposal gives specific schemes to support verifying container image signatures in AA.

Summary

Based on the Red Hat simple signing architecture, three things are required to successfully verify a container image signature: the signature, the public key, and the policy.json file. Therefore, how to obtain these three items is the key for Confidential Containers to support container image signature verification. In addition, the usage scenario of Confidential Containers requires that the container image be both encrypted (partial layers containing sensitive info) and signed. Therefore, this scheme needs to combine the two mechanisms organically, rather than treat them as independent and unrelated features.

We currently envision two options:

Option 1: Boot Image scheme

In this scheme, the signature (or signature store server), public key (or public key server) and policy.json required for signature verification need to be directly built into the boot image by the tenant when building it.

In fact, in this scheme, the signature verification process does not require AA's participation; ocicrypt-rs can complete all signature verification as long as it supports simple signing. We propose this option only to facilitate comparison with option 2, introduced later.

Since AA is not required to participate, this scheme is almost indistinguishable from the original Red Hat simple signing architecture. However, we must pay attention to the following two points:

  1. To ensure the security of the signature, public key, and policy.json content, the boot image must be provided by the tenant. When commercializing confidential containers, it is optional whether to require tenants to provide boot images, but a scheme in which boot images must be provided by tenants will undoubtedly increase tenants' usage and maintenance costs and reduce the scheme's flexibility.

  2. Although the signature store server and public key server can be configured to be fetched from remote servers, this adds additional trusted parties beyond the trusted container registry. In fact, when practicing confidential computing, it is best practice to reduce the number of trusted parties as much as possible.

Option 2: Key Broker Service scheme

To enhance security, it is not enough to simply configure the signature store server and public key server into the trusted domain controlled by the tenant, because those two communication channels are not attested. At present, the communication between the KBS and AA is attested.

In this scheme, the tenant needs to deploy the key broker service (KBS for short) in the trusted domain under their control and save the signature, public key, and policy.json on the KBS side (we still use the term KBS to ease description and to establish the association with the KBS used in the encrypted container image scheme). This scheme requires some cooperation from AA, ocicrypt-rs, and the KBS. The whole process is as follows:

  1. KBS side:
    • Manage signing key and assign unique key-id.
    • Formulate the policy.json file and save it in the specified local directory.
    • Configure the yaml file in the /etc/containers/registries.d local directory and specify the signature store.
  2. Signing: sign the container image and save the signature in the specified signature store.
  3. Startup of AA: in the startup phase of AA, connect to KBS through the key broker client (KBC for short) and download the policy.json file through the attested channel.
  4. Image pulling: the kata-agent in pod pulls the image.
  5. Signature verification:
    • skopeo/the image mgmt service calls ocicrypt-rs, and ocicrypt-rs in turn sends the manifest digest and docker reference to AA.
    • AA uses information such as the manifest digest and docker reference to obtain the signature, and the public key used to verify it, from the KBS and returns them to ocicrypt-rs.

Under this scheme, a best practice on the KBS side is to manage the signing key and decryption key uniformly and establish a mapping to the docker reference/manifest digest. In this way, the docker reference/manifest digest becomes the actual "key ID". If this practice is not adopted, the KBS needs to consider separately where to place the key ID so that AA can read it and send it back along with the manifest digest and docker reference.
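
The mapping just described could be sketched as below (a hypothetical KBS-side registry; all names are illustrative, and a real KBS would persist this rather than hold it in memory):

```rust
use std::collections::HashMap;

// Keys the KBS manages per image, as suggested above.
struct ImageKeys {
    signing_pubkey: Vec<u8>,
    decryption_key: Vec<u8>,
}

#[derive(Default)]
struct KeyRegistry {
    // (docker reference, manifest digest) acts as the effective key ID.
    by_identity: HashMap<(String, String), ImageKeys>,
}

impl KeyRegistry {
    fn register(&mut self, reference: &str, digest: &str, keys: ImageKeys) {
        self.by_identity
            .insert((reference.to_string(), digest.to_string()), keys);
    }

    fn lookup(&self, reference: &str, digest: &str) -> Option<&ImageKeys> {
        self.by_identity
            .get(&(reference.to_string(), digest.to_string()))
    }
}

fn main() {
    let mut reg = KeyRegistry::default();
    reg.register(
        "docker.io/library/busybox:latest",
        "sha256:817a12c3",
        ImageKeys { signing_pubkey: vec![1], decryption_key: vec![2] },
    );
    // AA's request carries exactly these two identifiers.
    let found = reg
        .lookup("docker.io/library/busybox:latest", "sha256:817a12c3")
        .is_some();
    println!("found: {}", found);
}
```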

Since only option 2 requires AA's participation and cooperation, the following text only gives the requirements and design for option 2.

Requirements

The KBS scheme requires the cooperation of the KBS, ocicrypt-rs, and AA. Therefore, the following requirements are proposed for these three components:

ocicrypt-rs

  • Extend keyprovider protocol. When pulling the container image, call the new grpc service API provided by AA to obtain the signature and public key.

  • Use the public key to verify the signature.

  • Can read the policy.json file and decide whether to accept the container image.

FYI: @arronwy

Attestation Agent

  • Implement the grpc service API newly added in the keyprovider protocol to provide signature and public key for ocicrypt-rs.

  • Extend the KBC interface trait with a new method, and call it when initializing the KBC in AA's startup phase. Its function: the KBC starts attestation with the KBS, negotiates a key to establish an encrypted channel after attestation, and then downloads policy.json from the KBS.

  • Move the instantiation of the KBC module up to AA's startup phase (instantiating the KBC upon the first UnWrapKey request is retained), which requires AA to be able to read the KBC name and KBS URI configuration from the environment during startup.

Key Broker Service

  • Provide the management function of signing public key and allocate unique key-ID.

  • Provide the management function and download interface for the policy.json file (download is allowed only after the KBC attests and an encrypted channel is established).

  • Can read and parse the yaml file in the /etc/containers/registries.d directory. The format and content of the file are defined by the red hat simple signing scheme.

  • Can get the signature from the specified address.

  • Provide a unified and standardized service interface to enable AA to connect it and obtain signature and public key.

If the option 2 scheme is finally adopted, we will issue a new proposal for the detailed definitions and standards of the above KBS functions in the near future, and provide code libraries implementing these new KBS functions in the sample KBS of AA's repository (or as a separate library project under the CC organization).

Design (Attestation Agent)

We now give the design of the attestation agent supporting the option 2 container image signature scheme: AA will add a new KBC interface to download the policy.json file, and a new keyprovider grpc API to provide the signature and public key to ocicrypt-rs.

KBC Interface

We will add a new method to the KBC interface trait, named get_sign_policy_file:

pub trait KbcInterface {
    ...
    fn get_sign_policy_file(&mut self) -> Result<std::fs::File> {
        ...(default implementation)
    }
}

This method will be called immediately after KBC initialization completes in AA's startup phase. It connects to the KBS, starts an attestation, negotiates a key after attestation, establishes an encrypted channel, downloads the policy.json file from the KBS, and returns the file descriptor to the caller.

To maintain compatibility with previous KBCs and pre-attestation KBCs, this method will provide a default implementation (generating the simplest policy.json) to ensure that KBCs that do not implement it can still run normally.
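
The fallback could look like the sketch below (returning a String instead of the proposal's std::fs::File, and a simplified error type, purely to keep the example self-contained; insecureAcceptAnything is containers/image's catch-all policy type):

```rust
// Sketch: a KbcInterface with a default get_sign_policy_file.
pub trait KbcInterface {
    fn get_sign_policy_file(&mut self) -> Result<String, String> {
        // Simplest possible policy: accept any image. KBCs that do
        // not override this keep their pre-existing behaviour.
        Ok(r#"{"default":[{"type":"insecureAcceptAnything"}]}"#.to_string())
    }
}

// A pre-attestation KBC that does not implement the new method
// still compiles and runs, relying on the default.
struct LegacyKbc;
impl KbcInterface for LegacyKbc {}

fn main() {
    let mut kbc = LegacyKbc;
    println!("{}", kbc.get_sign_policy_file().unwrap());
}
```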

KeyProvider API

We will extend the keyprovider protocol and add a new grpc API:

syntax = "proto3";

package keyprovider;

...
message keyProviderVerifySignatureInput {
    bytes KeyProviderVerifySignatureInput = 1;
}

message keyProviderVerifySignatureOutput {
    bytes KeyProviderVerifySignatureOutput = 1;
}

service KeyProviderService {
    ...
    rpc VerifySignature(keyProviderVerifySignatureInput) returns (keyProviderVerifySignatureOutput) {};
}

The input and output messages are JSON strings. The format definition is given below:

KeyProviderVerifySignatureInput:

{
    "image": {
        "manifest-digest": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e"
    },
    "identity": {
        "docker-reference": "docker.io/library/busybox:latest"
    }
}

"manifest-digest" represents the digest of the image manifest blob of the signed container image, but in the specific implementation, please pay attention to the following:

  • The image manifest of a single container image may contain digest values of multiple image manifests using different digest algorithms.

  • If the manifest contained in the container image is a signed manifest as defined by docker (including the image manifest and additional manifest signatures), then this manifest-digest field represents the "JSON web signature" part of the manifest signature in the signed manifest, rather than the digest of the image manifest blob in the container image.

"docker-reference" is a way to describe a container image. In the currently used Red Hat simple signing scheme, only this way is supported.

KeyProviderVerifySignatureOutput:

{
    "algorithm": "",
    "key-length": "",
    "public-key": "<pub-key>",
    "signature": "<signature-content>"
}

The "algorithm" and "key-length" fields indicate the signature algorithm and public key length used to verify the signature. The "public-key" field contains the public key required to verify the signature.

"signature" is the signature of the container image, embodied here as a character stream. Please refer to here for its specific content format.
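
An AA-side handler for the VerifySignature call could be shaped roughly as follows. This is a hedged sketch: the structs mirror the JSON messages above (in a real service they would be serde-derived), and fetch_from_kbs is a hypothetical placeholder for the KBS round-trip described in step 5 of the process.

```rust
// Typed mirrors of KeyProviderVerifySignatureInput/Output above.
struct VerifySignatureInput {
    manifest_digest: String,
    docker_reference: String,
}

struct VerifySignatureOutput {
    algorithm: String,
    key_length: String,
    public_key: String,
    signature: String,
}

// Hypothetical stand-in for the attested KBS query: a real
// implementation sends digest + reference to the KBS and receives
// the matching public key and signature back.
fn fetch_from_kbs(input: &VerifySignatureInput) -> (String, String) {
    (
        format!("pubkey-for-{}", input.docker_reference),
        format!("sig-for-{}", input.manifest_digest),
    )
}

fn verify_signature(input: VerifySignatureInput) -> VerifySignatureOutput {
    let (public_key, signature) = fetch_from_kbs(&input);
    VerifySignatureOutput {
        algorithm: "RSA".to_string(),   // illustrative values only
        key_length: "2048".to_string(),
        public_key,
        signature,
    }
}

fn main() {
    let out = verify_signature(VerifySignatureInput {
        manifest_digest: "sha256:abc".into(),
        docker_reference: "docker.io/library/busybox:latest".into(),
    });
    println!("{} {}", out.algorithm, out.key_length);
}
```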

Image signing verification & KBS integration Proposal

Overview

According to the CCv1 Image Security Design, image-rs relies on both the policy.json and sigstore files when verifying container image signatures.

  • policy.json's signedBy entries include all the public keys used to verify the image signature.
{
    "type":    "signedBy",
    "keyType": "GPGKeys", /* The only currently supported value */
    "keyData": "base64-encoded-keyring-data",
    "signedIdentity": identity_requirement
}
  • sigstore is the configuration file used to obtain the image's signature file.
default:
    sigstore: file:///var/lib/containers/sigstore
docker:
    docker.io/my_private_registry:
        sigstore: https://my-sigstore.com/example/sigstore
    registry.access.redhat.com:
        sigstore: https://access.redhat.com/webassets/docker/content/sigstore
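The lookup semantics of the sigstore configuration above — a per-registry entry overrides the default sigstore URL — can be sketched as follows. This is a simplified illustration: real implementations also match longer image-name prefixes, while this sketch matches only on the registry key.

```rust
use std::collections::HashMap;

// Return the registry-specific sigstore URL, falling back to `default`.
fn sigstore_for<'a>(
    registry: &str,
    per_registry: &'a HashMap<String, String>,
    default: &'a str,
) -> &'a str {
    per_registry
        .get(registry)
        .map(|s| s.as_str())
        .unwrap_or(default)
}

fn main() {
    let mut docker = HashMap::new();
    docker.insert(
        "registry.access.redhat.com".to_string(),
        "https://access.redhat.com/webassets/docker/content/sigstore".to_string(),
    );
    // A configured registry resolves to its own sigstore...
    assert_eq!(
        sigstore_for("registry.access.redhat.com", &docker, "file:///var/lib/containers/sigstore"),
        "https://access.redhat.com/webassets/docker/content/sigstore"
    );
    // ...anything else falls back to the default.
    assert_eq!(
        sigstore_for("docker.io", &docker, "file:///var/lib/containers/sigstore"),
        "file:///var/lib/containers/sigstore"
    );
}
```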

If image-rs finds that the policy.json or sigstore files do not exist locally during signature verification, it invokes the Attestation-Agent's Get-Resource service interface to fetch them. The format of the getResourceRequest that AA receives is:

message getResourceRequest {
    string KbcName = 1;
    string KbsUri = 2;
    string ResourceDescription = 3;
}

The ResourceDescription is a JSON object describing the resource; it is defined as follows:

{
    "name":"resource_name",
    "optional":{}
}

The ResourceDescription.name specifies the resource name to fetch (policy.json or sigstore).

Prerequisite

  • The KBS must be a trusted service that has been verified by AA.

Proposal

This section mainly discusses how to define the resource group's identity in ResourceDescription.optional and how to pass it to the sandbox.

Option 1

One KBS supports only one set of policy.json and sigstore files; that is, when the KBS receives a getResourceRequest, it always returns the same policy.json or sigstore file. The getResourceRequest.ResourceDescription definition will be:

  • Fetch policy.json file:
{
    "name":"image_signing_verfication_policy",
    "optional":{}
}
  • Fetch sigstore file:
{
    "name":"image_signing_verfication_sigstore",
    "optional":{}
}

Advantage

Easy to deploy:

  • AA side: AA simply sends a single, uniform getResourceRequest to the KBS to get the required file.
  • KBS side: only a single set of policy.json and sigstore files needs to be added to the KBS.

Weakness

All sandboxes use the same set of policy.json and sigstore files for signature verification, so differentiated policy deployment between sandboxes is impossible.

Option 2

One KBS supports multiple sets of policy.json and sigstore files. The resource_id must be specified in ResourceDescription.optional when AA sends a getResourceRequest to the KBS.
The ResourceDescription.optional definition will be:

{
    "name":"resource_name",
    "optional":
    {
        "resource_id" : "xxxxxxxx"
    }
}
  • resource_id: defined by ResourceId (see the ResourceId section)
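A sketch of building such a ResourceDescription on the AA side. Function name and the hand-rolled JSON are illustrative (a real AA would use serde); the "xxxxxxxx" placeholder mirrors the example above:

```rust
use std::collections::BTreeMap;

// Render a ResourceDescription with arbitrary key/value pairs in "optional".
pub fn resource_description(name: &str, optional: &BTreeMap<String, String>) -> String {
    let opts: Vec<String> = optional
        .iter()
        .map(|(k, v)| format!("\"{}\":\"{}\"", k, v))
        .collect();
    format!("{{\"name\":\"{}\",\"optional\":{{{}}}}}", name, opts.join(","))
}

fn main() {
    let mut optional = BTreeMap::new();
    optional.insert("resource_id".to_string(), "xxxxxxxx".to_string());
    let desc = resource_description("image_signing_verfication_policy", &optional);
    assert!(desc.contains("\"resource_id\":\"xxxxxxxx\""));
    println!("{desc}");
}
```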

Advantage

Differentiated deployment strategies can be implemented for different Sandboxes.

Weakness

The ResourceId (see the ResourceId section) must be passed into the sandbox during its deployment, which adds deployment complexity for the sandbox.

ResourceId

The resourceId identifies a set of policy.json and sigstore files inside the KBS. It is generated by the KBS and must be passed to the Kata sandbox during deployment.

Generation

The KBS returns a UUID when a set of policy.json and sigstore files is added to it; this UUID is the resourceId.
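A KBS-side sketch of this registration step. All names are hypothetical; a real KBS would return a UUID, while a counter-based id stands in here to keep the example dependency-free:

```rust
use std::collections::HashMap;

// Registering a (policy.json, sigstore) pair yields an identifier that
// sandboxes later present as resource_id in getResourceRequest.
#[derive(Default)]
pub struct ResourceRegistry {
    next: u64,
    sets: HashMap<String, (Vec<u8>, Vec<u8>)>, // id -> (policy.json, sigstore)
}

impl ResourceRegistry {
    pub fn register(&mut self, policy: Vec<u8>, sigstore: Vec<u8>) -> String {
        self.next += 1;
        let id = format!("res-{:08x}", self.next); // placeholder for a real UUID
        self.sets.insert(id.clone(), (policy, sigstore));
        id
    }

    pub fn policy(&self, id: &str) -> Option<&[u8]> {
        self.sets.get(id).map(|(p, _)| p.as_slice())
    }
}

fn main() {
    let mut reg = ResourceRegistry::default();
    let id = reg.register(b"policy-a".to_vec(), b"sigstore-a".to_vec());
    assert_eq!(reg.policy(&id), Some(&b"policy-a"[..]));
}
```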

Delivery

The resourceId can be passed to the Kata sandbox via the agent config file (agent-config.toml), so a new image_signing_verfication_resource_id field needs to be added:

aa_kbc_params = "eaa_kbc::123.56.152.133:30000"
image_signing_verfication_resource_id = "xxxxxxxx"
debug_console = true
dev_mode = false
log_level = "debug"
#hotplug_timeout = 3
debug_console_vport = 1026
log_vport = 0
container_pipe_size = 0
server_addr = "vsock://-1:1024"
unified_cgroup_hierarchy = false
tracing = false

[endpoints]
allowed = [
        "AddARPNeighborsRequest",
        "AddSwapRequest",
        "CloseStdinRequest",
        "CopyFileRequest",
        "CreateContainerRequest",
        "CreateSandboxRequest",
        "DestroySandboxRequest",
        # "ExecProcessRequest",
        "GetMetricsRequest",
        "GetOOMEventRequest",
        "GuestDetailsRequest",
        "ListInterfacesRequest",
        "ListRoutesRequest",
        "MemHotplugByProbeRequest",
        "OnlineCPUMemRequest",
        "PauseContainerRequest",
        "PullImageRequest",
        "ReadStreamRequest",
        "RemoveContainerRequest",
        # "ReseedRandomDevRequest",
        "ResumeContainerRequest",
        "SetGuestDateTimeRequest",
        "SignalProcessRequest",
        "StartContainerRequest",
        "StartTracingRequest",
        "StatsContainerRequest",
        "StopTracingRequest",
        "TtyWinResizeRequest",
        "UpdateContainerRequest",
        "UpdateInterfaceRequest",
        "UpdateRoutesRequest",
        "WaitProcessRequest",
        "WriteStreamRequest"
        ]

Question: How to pass the parameter dynamically in a more flexible way than through agent config?

Failed to build with latest tonic v0.8.0

   Compiling tonic v0.8.0
error[E0277]: the trait bound `utils::keyprovider::KeyProviderKeyWrapProtocolInput: prost::message::Message` is not satisfied
   --> src/utils/keyprovider.rs:100:60
    |
100 |             self.inner.unary(request.into_request(), path, codec).await
    |                        -----                               ^^^^^ the trait `prost::message::Message` is not implemented for `utils::keyprovider::KeyProviderKeyWrapProtocolInput`
    |                        |
    |                        required by a bound introduced by this call
    |
    = note: required because of the requirements on the impl of `Codec` for `ProstCodec<utils::keyprovider::KeyProviderKeyWrapProtocolInput, _>`
note: required by a bound in `tonic::client::Grpc::<T>::unary`
   --> /home/arron/.cargo/registry/src/github.com-1ecc6299db9ec823/tonic-0.8.0/src/client/grpc.rs:149:12
    |
149 |         C: Codec<Encode = M1, Decode = M2>,
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `tonic::client::Grpc::<T>::unary`

error[E0277]: the trait bound `utils::keyprovider::KeyProviderKeyWrapProtocolOutput: prost::message::Message` is not satisfied

The build works with:

tonic = { version = "0.7.2", optional = true }

@pravinrajr9 Could you help to fix this issue?
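Until the generated code is updated for the tonic 0.8 / prost 0.11 line, one workaround is to pin the whole gRPC stack to a matching minor series. A sketch of the relevant Cargo.toml entries, assuming the tonic 0.7 series (which pairs with prost 0.10 and tonic-build 0.7; exact feature flags depend on the crate):

```toml
[dependencies]
tonic = { version = "0.7.2", optional = true }
prost = "0.10"

[build-dependencies]
tonic-build = "0.7"
```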

Failed to build using Rust 1.58.1 on Fedora 33

For the first time I tried to build the attestation-agent in my Fedora 33 x86_64 workstation but it failed.

I'm using Rust 1.58.1. Is there a minimum version required?

Based on the build logs below, it seems to be missing some Perl library on my system. Any clue about what is wrong?

[wmoschet@wainer-laptop attestation-agent]$ make
cargo build --release   
   Compiling libc v0.2.114
   Compiling proc-macro2 v1.0.36
   Compiling unicode-xid v0.2.2
   Compiling syn v1.0.86
   Compiling autocfg v1.0.1
   Compiling cfg-if v1.0.0
   Compiling log v0.4.14
   Compiling memchr v2.4.1
   Compiling pin-project-lite v0.2.8
   Compiling pkg-config v0.3.24
   Compiling futures-core v0.3.19
   Compiling either v1.6.1
   Compiling anyhow v1.0.53
   Compiling parking_lot_core v0.8.5
   Compiling bytes v1.1.0
   Compiling typenum v1.15.0
   Compiling smallvec v1.8.0
   Compiling scopeguard v1.1.0
   Compiling version_check v0.9.4
   Compiling futures-task v0.3.19
   Compiling futures-channel v0.3.19
   Compiling once_cell v1.9.0
   Compiling futures-sink v0.3.19
   Compiling futures-util v0.3.19
   Compiling tinyvec_macros v0.1.0
   Compiling lazy_static v1.4.0
   Compiling futures-io v0.3.19
   Compiling slab v0.4.5
   Compiling pin-utils v0.1.0
   Compiling matches v0.1.9
   Compiling itoa v1.0.1
   Compiling fnv v1.0.7
   Compiling unicode-bidi v0.3.7
   Compiling hashbrown v0.11.2
   Compiling percent-encoding v2.1.0
   Compiling unicode-segmentation v1.8.0
   Compiling httparse v1.5.1
   Compiling remove_dir_all v0.5.3
   Compiling fastrand v1.7.0
   Compiling fixedbitset v0.2.0
   Compiling cpufeatures v0.2.1
   Compiling subtle v2.4.1
   Compiling bitflags v1.3.2
   Compiling multimap v0.8.3
   Compiling serde_derive v1.0.135
   Compiling try-lock v0.2.3
   Compiling opaque-debug v0.3.0
   Compiling ppv-lite86 v0.2.16
   Compiling tower-service v0.3.1
   Compiling httpdate v1.0.2
   Compiling itoa v0.4.8
   Compiling async-trait v0.1.52
   Compiling serde v1.0.135
   Compiling serde_json v1.0.78
   Compiling tower-layer v0.3.1
   Compiling unicode-width v0.1.9
   Compiling foreign-types-shared v0.1.1
   Compiling openssl v0.10.38
   Compiling regex-syntax v0.6.25
   Compiling termcolor v1.1.2
   Compiling base64 v0.13.0
   Compiling ryu v1.0.9
   Compiling ansi_term v0.12.1
   Compiling vec_map v0.8.2
   Compiling humantime v2.1.0
   Compiling strsim v0.8.0
   Compiling foreign-types-shared v0.3.0
   Compiling string-error v0.1.0
   Compiling instant v0.1.12
   Compiling itertools v0.10.3
   Compiling lock_api v0.4.5
   Compiling indexmap v1.8.0
   Compiling num-traits v0.2.14
   Compiling num-integer v0.1.44
   Compiling tinyvec v1.5.1
   Compiling tracing-core v0.1.21
   Compiling generic-array v0.14.5
   Compiling http v0.2.6
   Compiling form_urlencoded v1.0.1
   Compiling heck v0.3.3
   Compiling foreign-types v0.3.2
   Compiling textwrap v0.11.0
   Compiling want v0.3.0
   Compiling aho-corasick v0.7.18
   Compiling quote v1.0.15
   Compiling unicode-normalization v0.1.19
   Compiling jobserver v0.1.24
   Compiling which v4.2.4
   Compiling time v0.1.43
   Compiling tempfile v3.3.0
   Compiling signal-hook-registry v1.4.0
   Compiling num_cpus v1.13.1
   Compiling mio v0.7.14
   Compiling getrandom v0.2.4
   Compiling socket2 v0.4.3
   Compiling atty v0.2.14
   Compiling http-body v0.4.4
   Compiling cc v1.0.72
   Compiling prost-build v0.8.0
   Compiling petgraph v0.5.1
   Compiling regex v1.5.4
   Compiling parking_lot v0.11.2
   Compiling rand_core v0.6.3
   Compiling clap v2.34.0
   Compiling idna v0.2.3
   Compiling openssl-src v111.17.0+1.1.1m
   Compiling rand_chacha v0.3.1
   Compiling cipher v0.3.0
   Compiling universal-hash v0.4.1
   Compiling aead v0.4.3
   Compiling libz-sys v1.1.3
   Compiling libgit2-sys v0.12.26+1.3.0
   Compiling openssl-sys v0.9.72
   Compiling polyval v0.5.3
   Compiling ctr v0.8.0
   Compiling aes v0.7.5
   Compiling rand v0.8.4
   Compiling chrono v0.4.19
   Compiling env_logger v0.9.0
   Compiling url v2.2.2
   Compiling ghash v0.4.4
   Compiling aes-gcm v0.9.4
error: failed to run custom build command for `openssl-sys v0.9.72`

Caused by:
  process didn't exit successfully: `/home/wmoschet/src/confidential-containers/attestation-agent/target/release/build/openssl-sys-e7f514af3199ed08/build-script-main` (exit status: 101)
  --- stdout
  cargo:rustc-cfg=const_fn
  cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_NO_VENDOR
  X86_64_UNKNOWN_LINUX_GNU_OPENSSL_NO_VENDOR unset
  cargo:rerun-if-env-changed=OPENSSL_NO_VENDOR
  OPENSSL_NO_VENDOR unset
  CC_x86_64-unknown-linux-gnu = None
  CC_x86_64_unknown_linux_gnu = None
  HOST_CC = None
  CC = None
  CFLAGS_x86_64-unknown-linux-gnu = None
  CFLAGS_x86_64_unknown_linux_gnu = None
  HOST_CFLAGS = None
  CFLAGS = None
  CRATE_CC_NO_DEFAULTS = None
  DEBUG = Some("false")
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  running "perl" "./Configure" "--prefix=/home/wmoschet/src/confidential-containers/attestation-agent/target/release/build/openssl-sys-61d7e18d88715ea1/out/openssl-build/install" "no-dso" "no-shared" "no-ssl3" "no-unit-test" "no-comp" "no-zlib" "no-zlib-dynamic" "no-md2" "no-rc5" "no-weak-ssl-ciphers" "no-camellia" "no-idea" "no-seed" "linux-x86_64" "-O2" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64"
  Configuring OpenSSL version 1.1.1m (0x101010dfL) for linux-x86_64
  Using os-specific seed configuration
  Creating configdata.pm
  Creating Makefile

  **********************************************************************
  ***                                                                ***
  ***   OpenSSL has been successfully configured                     ***
  ***                                                                ***
  ***   If you encounter a problem while building, please open an    ***
  ***   issue on GitHub <https://github.com/openssl/openssl/issues>  ***
  ***   and include the output from the following command:           ***
  ***                                                                ***
  ***       perl configdata.pm --dump                                ***
  ***                                                                ***
  ***   (If you are new to OpenSSL, you might want to consult the    ***
  ***   'Troubleshooting' section in the INSTALL file first)         ***
  ***                                                                ***
  **********************************************************************
  running "make" "depend"
  make[1]: Entering directory '/home/wmoschet/src/confidential-containers/attestation-agent/target/release/build/openssl-sys-61d7e18d88715ea1/out/openssl-build/build/src'
  make[1]: Leaving directory '/home/wmoschet/src/confidential-containers/attestation-agent/target/release/build/openssl-sys-61d7e18d88715ea1/out/openssl-build/build/src'

  --- stderr
  Can't locate File/Compare.pm in @INC (you may need to install the File::Compare module) (@INC contains: . /usr/local/lib64/perl5/5.32 /usr/local/share/perl5/5.32 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at ./util/add-depends.pl line 16.
  BEGIN failed--compilation aborted at ./util/add-depends.pl line 16.
  make[1]: *** [Makefile:264: depend] Error 2
  thread 'main' panicked at '


  Error building OpenSSL dependencies:
      Command: "make" "depend"
      Exit status: exit status: 2


      ', /home/wmoschet/.cargo/registry/src/github.com-1ecc6299db9ec823/openssl-src-111.17.0+1.1.1m/src/lib.rs:479:13
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed
make: *** [Makefile:51: build] Error 101
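The stderr points at a missing core Perl module (File::Compare), which the vendored OpenSSL build scripts need. On Fedora the core modules ship in the perl-core package, so — assuming a stock Fedora install — something like the following should unblock the build:

```
# Install the core Perl modules that OpenSSL's Configure/make scripts expect.
sudo dnf install -y perl-core
```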

Cut a release for the first release of CC

As mentioned in kata-containers/kata-containers#4764 (comment) we should cut a release of the Attestation Agent to use with the first release of Confidential Containers.

Currently our Cargo.toml file has the version set to 1.0.0. We should probably change this to 0.1.0 or even 0.0.1.

Do we have everything we need for the first release? I think we do, although I would like to add in a couple more SEV features, such as the online_sev_kbc with get_resource support.

Extract AA core functions as a rust library to support enclave-cc

In the enclave-cc architecture, sometimes only one process can run in the enclave (e.g., with Gramine), so the core functionality of AA needs to be integrated into the agent program as a Rust library crate. The current implementation of AA, however, is a gRPC service application. Therefore, if AA is to support enclave-cc, we need to restructure the current AA architecture and split the core functionality (including the KBCs) and the gRPC service into two Rust crates: the former a library crate and the latter an application crate.

I'm not sure how this work is described in the current enclave-cc development plan. It may take some time and testing to complete, but it is likely necessary for AA to support enclave-cc.

FYI @sameo @fitzthum @jodh-intel @jiazhang0

SEV-SNP Support in CC-KBC

This is an extension of #239

The generic KBC will need to support SEV-SNP meaning that it will need to get the attestation report from the PSP and format the evidence. The evidence will include the attestation report. If we use measured direct boot, the evidence should also include the hash of the kernel, initrd, and kernel cmdline.

Add integration test for signature verification with sample_kbc

Following on from the work in confidential-containers/attestation-agent#69 we should be able to add a new integration test to integration.rs that covers using sample_kbc through the attestation-agent to provide the files/config needed to test signature verification. This should cover the positive and negative cases.
If it's helpful we can re-use the signed images and artefacts used in the 'unit test': in https://github.com/confidential-containers/image-rs/blob/main/src/image.rs#L251 - the test_data for this is in https://github.com/confidential-containers/image-rs/tree/main/test_data/simple-signing-scheme

This integration test would help provide the baseline to check that the get_resource endpoint feature in sample_kbc works and integrates with image-rs

Acceptance Criteria

Scenario: Pulling a non-signed image, from an 'unprotected registry' works
Given I have image-rs configured with a policy.json, sigstore config, public GPG key and signature claim file valid for container registry quay.io/kata-containers/confidential-containers
When I call the pull_image method with a valid image URI from a different container registry repository that is not signed eg quay.io/prometheus/busybox:latest
Then The image-rs image pull operation succeeds.

Scenario: Pulling a non-signed image, from a 'protected registry' is rejected
Given I have image-rs configured with a policy.json, sigstore config, public GPG key and signature claim file valid for container registry quay.io/kata-containers/confidential-containers
When I call the pull_image method with a valid, unsigned image URI from the quay.io/kata-containers/confidential-containers container registry repository quay.io/kata-containers/confidential-containers:unsigned
Then The image-rs image pull operation fails
And Hopefully there is an error message to explain what happened

Scenario: Pulling a signed image, from a 'protected registry' succeeds
Given I have image-rs configured with a policy.json, sigstore config, public GPG key and signature claim file valid for container registry quay.io/kata-containers/confidential-containers
When I call the pull_image method with a valid, signed, image URI that is signed from the quay.io/kata-containers/confidential-containers container registry repository quay.io/kata-containers/confidential-containers:signed
Then The image-rs image pull operation should succeed.

Scenario: Pulling a signed image with mismatched key fails
Given I have image-rs configured with a policy.json, sigstore config, public GPG key and signature claim file valid for container registry quay.io/kata-containers/confidential-containers
When I call the pull_image method with a valid image URI that is signed with a different gpg key from the quay.io/kata-containers/confidential-containers container registry repository quay.io/kata-containers/confidential-containers:other_signed
Then The image-rs image pull operation fails
And Hopefully there is an error message to explain what happened

CI workflow is failing

The CI workflow: https://github.com/confidential-containers/image-rs/actions/workflows/ci.yml is failing. I think there are probably multiple issues, similar/caused by the ones I've recently addressed in ocicrypt-rs, but the first one is:

Error: failed to run custom build command for `ocicrypt-rs v0.1.0 (https://github.com/confidential-containers/ocicrypt-rs?rev=251ed40822f4d243a59bdd395cccdcbae2bca2be#251ed408)`
Caused by:
  process didn't exit successfully: `/home/runner/work/image-rs/image-rs/target/debug/build/ocicrypt-rs-a267de6f46cecaa7/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-changed=src/utils/proto/keyprovider.proto
  cargo:rerun-if-changed=src/utils

  --- stderr
  thread 'main' panicked at '
  Could not find `protoc` installation and this build crate cannot proceed without
  this knowledge. If `protoc` is installed and this crate had trouble finding
  it, you can set the `PROTOC` environment variable with the specific path to your
  installed `protoc` binary.

  For more information: https://docs.rs/prost-build/#sourcing-protoc
  ', /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.11.1/src/lib.rs:1227:10
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
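prost-build 0.11 no longer bundles a protoc binary, so the CI runner needs one preinstalled (or the PROTOC environment variable set, as the error suggests). A hypothetical workflow step for the Ubuntu runners, where the package is named protobuf-compiler:

```yaml
# Install protoc before building crates that invoke prost-build.
- name: Install protoc
  run: sudo apt-get update && sudo apt-get install -y protobuf-compiler
```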

Async KBC trait?

I've been working on an online_sev_kbc and I'm realizing that it is quite a pain to make async gRPC requests from the KBC module because the KBC trait methods are not async. It seems like this might be a problem for the CC-KBC as well. Do you have any plan for this? @jialez0
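For illustration, here is a minimal sketch of what an async KBC trait could look like; all names are hypothetical. Native `async fn` in traits stabilized in Rust 1.75 — on the toolchains in use when this issue was filed, the `async-trait` crate would be needed instead. The hand-rolled `block_on` keeps the example on the standard library alone; real callers would use tokio, which tonic already pulls in:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical async KBC trait.
trait KbcInterface {
    async fn decrypt_payload(&mut self, annotation: &str) -> Result<Vec<u8>, String>;
}

struct DummyKbc;

impl KbcInterface for DummyKbc {
    async fn decrypt_payload(&mut self, annotation: &str) -> Result<Vec<u8>, String> {
        // A real KBC would await a gRPC call to the KBS here.
        Ok(annotation.as_bytes().to_vec())
    }
}

// Minimal single-future executor, for the example only.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn noop(_: *const ()) {}
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let mut kbc = DummyKbc;
    let plain = block_on(kbc.decrypt_payload("annotation-bytes")).unwrap();
    assert_eq!(plain, b"annotation-bytes");
}
```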

[RFC] Attestation Agent development plan

According to the architecture described in the Attestation Agent Proposal (issue #254), we divide the development work into the following stages:

Step1: Support the keyprovider call of ocicrypt-rs as a gRPC service

  • Support key provider protocol defined by ocicrypt-rs

  • Run as a long-lived gRPC service

Step2: KBC modularization

  • Define the interaction interface between AA public code and KBC module (as a pub trait)

  • Provide a sample KBC module and a sample KBS (use hard coded key)

  • Improve the sample KBC/KBS using peer-to-peer gRPC service.

Note: Steps 1 & 2 are specific to the AA architecture, and steps 3 & 4 are specific to AA KBC modularization.

Step3: Implement E2E attestation demo for kata CC v0 (before Nov. 1)

  • Configurable selection between gRPC and ttrpc.

  • Implement the first group of KBC modules:

  • SEV(-ES) pre-attestation

  • A KBC module(s) for TDX (ISECL, EAA or something else?)

  • etc

Step4: Add more KBC modules

  • ISECL (Intel)

  • GOP (IBM)

  • EAA (Alibaba)

  • UnwrapKey Relaying KBCs

  • Cloud HSMs

  • etc

offline_fs_kbc: Add support for get_resource API to enable signature verification with local attestation

As a user of the attestation-agent with the offline_fs_kbc module
I want to be able to get the policy, sigstore config and public GPG details
so that I can use them for signature verification

Description

In order to remove the dependency on the Confidential Containers skopeo for our local/offline attestation scenario, one of the things we want is for the offline_fs_kbc KBC to support signature verification through image-rs (see kata-containers/kata-containers#4581 for more information).

To do this I believe the Offline filesystem KBC needs to handle the Policy, Sigstore Config and GPG keyring getResource requests. At the moment it only seems to support the decrypt_payload API, so we want to extend it to support the get_resource interface as well, like the eaa_kbc does. I'm guessing that this requirement is also applicable to the offline_sev_kbc as well, so that might need an equivalent issue.

In the absence of a better design (which might exist and I just can't find), we could add a new file, called something like aa-offline_fs_kbc-resources.json (as a companion to aa-offline_fs_kbc-keys.json), and read the required fields from there based on their field names.
We might want to consider whether a single JSON file should support multiple public GPG keys, but I'm not sure if the image-rs code base that we want to integrate with handles these. I'd be interested to know how EAA/verdictd are configured and whether there is anything we could learn from them.

Update

Since I originally wrote this issue, similar function has been written for the sample_kbc, so using confidential-containers/attestation-agent#69 as a base is probably a good way to start and the code not too different to what I'd expect we need.

Cut a 0.0.1 release of image-rs

image-rs is currently being used as part of the CCv0 branch of Kata Containers. However, we're importing main there, which is error-prone and not suitable once we start working on reproducible builds (a hard requirement for Confidential Containers).

With this in mind, please, let's cut a 0.0.1 release with a commit that's known to be working, and we can use that as part of Kata Containers.

Kata Containers link: https://github.com/kata-containers/kata-containers/blob/cfa3e1e9334dedeb3e28440e6b3c3d268fc2ea14/src/agent/Cargo.toml#L68

[RFC] image-rs devel plan

As the design and architecture PR described, the figure below shows the proposed module blocks:

The development work of image-rs can be separated as four steps:

Step1: Basic features

In step 1, image-rs will implement the basic features and remove the dependency on umoci in the current CC rustified image download stack. image-rs can leverage oci-distribution to pull container images, but we also need to fill related gaps such as manifest list support, public auth, and a pull_layer API, so that subsequent decryption/decompression operations are not blocked.

  • Pull: Fill gaps in oci-distribution
    • manifests list support
    • public auth, pull_layer API
    • define security friendly ImageData struct
  • Security
    • leverage ocicrypt-rs for decryption
    • support image signature verification
    • policy module to support different image security policy
  • Store
    • unpack module support tar and gzip decompression
    • snapshot module support overlay2
  • Integration with Kata agent

Step2: Performance enhancement

In this step, image-rs will support zstd decompression and, for CPU-bound operations like decryption and decompression, choose the right parallel and instruction-set-based acceleration libraries. If these libraries have no Rust version, a Rust binding will be created as a separate crate that can be used by other projects.

  • Compress algorithm: zstd
  • Performance benchmark for decrypt and decompression
  • Parallel libraries
  • Instruction set based acceleration libraries
  • HW Accelerator (TBD: May not support in CC environment)

Step3: Advanced features

Develop a snapshotter to support on-demand pull/decrypt of container images. Image layer caching and sharing are TBD, depending on the security model.

  • Define on demand pull/decrypt required manifest standard
  • Snapshotter: support on demand pull
  • Image layer sharing between containers (TBD)

Step4: Full Features (optional)

Depending on the requirements of the current CC solution, a metadata database will be implemented to support other CRI image service based APIs.

  • Meta database

AA's Makefile need to support LIBC and DESTDIR options

Currently the kata-agent's Makefile supports LIBC and DESTDIR options, so the rootfs build script can call the Makefile with a designated libc option and installation destination, like:
make LIBC=${LIBC} INIT=${AGENT_INIT} SECCOMP=${SECCOMP}
make install DESTDIR="${ROOTFS_DIR}" LIBC=${LIBC} INIT=${AGENT_INIT}

So I think AA's Makefile should also support this functionality.
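A minimal sketch of what honoring these variables could look like in AA's Makefile; the target triple derivation and install path are illustrative, loosely modeled on the kata-agent Makefile:

```makefile
# Hypothetical fragment: LIBC selects the target libc, DESTDIR the install root.
ARCH ?= $(shell uname -m)
LIBC ?= musl
TRIPLE := $(ARCH)-unknown-linux-$(LIBC)

build:
	cargo build --release --target $(TRIPLE)

install: build
	install -D -m 0755 target/$(TRIPLE)/release/attestation-agent \
		$(DESTDIR)/usr/local/bin/attestation-agent
```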

Support cross-compiling the attestation agent for s390x on an x86_64 build machine

This work should follow on from issue #237, after the attestation agent Makefile supports a native build for s390x.

There is currently some support for cross-compiling the attestation-agent offline filesystem KBC module for s390x on x86_64 through the rootfs-builder script from the CCv0 fork of the kata containers project.

It would be good if, after adding support for native s390x builds to the attestation agent Makefile, we extend it so that cross-compiling s390x binaries for the attestation agent on x86_64 machines is supported, e.g., by running make ARCH=s390x on an x86_64 build machine.

So far, my attempts to cross-compile the attestation-agent for s390x on an x86_64 machine without using the rootfs-builder script have not produced a binary that runs without segfaulting on an s390x machine. I have, however, been able to cross-compile the attestation agent for s390x on x86_64 using the rootfs-builder script rootfs.sh. rootfs.sh builds everything a Kata VM needs, not just the attestation agent, so we need to discover what that script sets up that makes the cross-compiled attestation agent binary runnable, and which of those steps my standalone cross-compile attempts are missing.
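For reference, a plain cargo-based cross build (outside rootfs.sh) would look roughly like the following. The linker environment variable follows the standard cargo `CARGO_TARGET_<TRIPLE>_LINKER` convention, and the gcc name assumes a Debian/Ubuntu cross toolchain is installed; a segfaulting binary is often a sign the wrong linker or libc was picked up:

```
rustup target add s390x-unknown-linux-gnu
export CARGO_TARGET_S390X_UNKNOWN_LINUX_GNU_LINKER=s390x-linux-gnu-gcc
cargo build --release --target s390x-unknown-linux-gnu
```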

[RFC] Attestation Agent Proposal

Edit

  • Add KBC module "EAA" (Enclave Attestation Architecture).
  • Use the term "KBC" instead of "KBS module/plugin" to indicate the component integrated to AA for communicating with platform specific KBS
  • Update collected suggestions from @hdxia and @fitzthum
  • Rename SEV local attestation to SEV pre-attestation
  • Add KBS modularization diagram

Summary

This proposal covers the implementation of the Attestation Agent, aiming to facilitate an E2E attestation reference implementation for Kata CC v0. This RFC details the encryption and decryption procedure and introduces the design options for implementing the attestation agent for Kata CC v0.

Background

In the Kata CC v0 architecture, encryption and decryption operations are performed by different entities.

During image encryption, image creation tools such as skopeo, buildah and ctr-enc call ocicrypt, which uses a Layer Encryption Key (LEK) to encrypt the image layer. In the underlying implementation, ocicrypt generates the LEK randomly, serializes it and its encryption parameters into a PrivateLayerBlockCipherOptions (PLBCO for short) object, then encrypts the PLBCO by calling the WrapKey API defined by the key provider protocol. The details of the encryption process and of the returned annotation packet containing the encrypted PLBCO are determined by the implementation of the Key Broker Service (KBS for short). Eventually, the annotation packet is stored in the image layer's annotations field (for example, under an annotation ID such as org.opencontainers.image.enc.keys.provider.kata_cc_key_broker_foo).

During image decryption, kata-agent calls ocicrypt-rs to retrieve the plain/decrypted PrivateLayerBlockCipherOptions object. In the underlying implementation, ocicrypt-rs calls the UnWrapKey API implemented by the Attestation Agent (AA for short). AA accesses the KBS according to the parameters in the annotation packet to decrypt the PrivateLayerBlockCipherOptions object, and returns the decrypted object to ocicrypt-rs as the return value of the UnWrapKey API. Eventually, ocicrypt-rs decrypts the encrypted image layer using the LEK from the PrivateLayerBlockCipherOptions object.

The above workflow is especially suitable for remote attestation procedure that supports dynamical measurement such as Intel TDX.

(diagram: remote attestation workflow)

For the pre-attestation procedure of SEV/SEV-ES, which only supports static measurement, AA only needs to access the guest FW (or a kernel module driver) to get the plain PrivateLayerBlockCipherOptions object inside the guest, where it was provisioned during the pre-attestation stage. This is the so-called pre-attestation (also called local attestation on p. 19 of https://docs.google.com/presentation/d/1469nSRFtlHMSDDDWVLj0i21dR9M_3SO76ehQsyVSTUk/edit#slide=id.gdccf80c723_0_1261).

(diagram: pre-attestation workflow)

In the long term, AA is far more than just a decryptor of PrivateLayerBlockCipherOptions objects; for example, it can periodically report the TCB status to a relying party. It is therefore necessary to implement AA as a long-lived service. This proposal suggests implementing AA as a gRPC endpoint instead of a standalone binary program.

Goal

The AA is designed specifically for the Kata CC architecture, so its initial goal is to decrypt the PrivateLayerBlockCipherOptions object according to the input parameters defined by the key provider protocol. This is the only high-level function that AA must implement in the v0 architecture. It also means that AA does not need to implement the WrapKey API.

Further approaches include:

  • Implement the UnWrapKey API defined by the key provider protocol
    Deserialize and parse the input parameters of UnWrapKey, and serialize and return the PrivateLayerBlockCipherOptions to ocicrypt-rs.

  • Support remote attestation and pre-attestation
    Specifically, certain HW-TEEs must obtain the PrivateLayerBlockCipherOptions through pre-attestation, while others do so through remote attestation.

  • Abstract the procedure of PrivateLayerBlockCipherOptions decryption
    The procedure of PrivateLayerBlockCipherOptions decryption is implemented by the KBS, and is also related to the access to the attestation service.

Internal

Parse input parameter of UnWrapKey API

The format of the input parameter of the UnWrapKey API is KeyProviderKeyWrapProtocolInput.

{
  "op": "keyunwrap",
  "keyunwrapparams": {
    "dc": "$scheme[:$parameters]",
    "annotation": "$KBS_specific_annotation_packet"
  }
}

where:

  • $scheme and the optional $parameters are specific to the implementation of the callee of ocicrypt-rs. In ocicrypt, $scheme is specified in the key provider configuration file, and $extra_parameters is specified on the command line of image creation tools such as ctr-enc, skopeo and buildah. See the examples for details. In kata CC, the configuration file may use the pattern "kata_cc_attestation:$mode", where $mode is either local or remote, corresponding to pre-attestation and remote attestation respectively.

  • $extra_parameters contains the KBS-specific annotation packet. A good example from the keyprovider test program shows the format: $KBS_specific_annotation_packet is generated during image encryption and stored in the layer annotation. Its format is specific to the implementation of the KBS.

Example:

{
  "op": "keyunwrap",
  "keyunwrapparams": {
    "dc": "kata_cc_attestation_agent:kbc=my_kbc",
    "annotation": "{ \"url\": \"https://$domain:port/api/getkey\", \"keyid\": \"foo\", \"payload\": \"encrypted_PLBCO\" }"
  }
}
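The `dc` field above can be split into its `$scheme[:$parameters]` parts with a small std-only helper. This is a sketch; the function name `parse_dc` is an assumption, not the actual attestation-agent API:

```rust
/// Split a key provider `dc` value of the form "$scheme[:$parameters]".
/// Returns the scheme and the optional parameter string.
fn parse_dc(dc: &str) -> (&str, Option<&str>) {
    match dc.split_once(':') {
        Some((scheme, params)) => (scheme, Some(params)),
        None => (dc, None),
    }
}

fn main() {
    let (scheme, params) = parse_dc("kata_cc_attestation_agent:kbc=my_kbc");
    assert_eq!(scheme, "kata_cc_attestation_agent");
    assert_eq!(params, Some("kbc=my_kbc"));

    // A bare scheme with no parameters is also valid.
    assert_eq!(parse_dc("local"), ("local", None));
    println!("scheme={scheme} params={params:?}");
}
```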

Handle the return value of UnWrapKey API

No matter what method AA uses to obtain the plain (decrypted) PrivateLayerBlockCipherOptions object, AA needs to serialize it into the following JSON object and use it as the return value of the UnWrapKey API.

{
  "keyunwrapresults": {
    "optsdata": "#plain_PLBCO"
  }
}
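A minimal sketch of producing that return value, assuming the caller already holds the serialized (here base64-encoded, as bytes are typically carried in JSON) plain PLBCO; the function name is illustrative:

```rust
/// Build the `keyunwrapresults` JSON returned by UnWrapKey.
/// `optsdata_b64` is assumed to be base64 and therefore needs no escaping.
fn key_unwrap_results(optsdata_b64: &str) -> String {
    format!(r#"{{"keyunwrapresults":{{"optsdata":"{}"}}}}"#, optsdata_b64)
}

fn main() {
    // "plain_PLBCO" base64-encoded as a stand-in payload.
    let out = key_unwrap_results("cGxhaW5fUExCQ08=");
    assert_eq!(out, r#"{"keyunwrapresults":{"optsdata":"cGxhaW5fUExCQ08="}}"#);
    println!("{out}");
}
```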

Abstract the procedure of PrivateLayerBlockCipherOptions decryption

Treat KBS as a service providing the capability of PrivateLayerBlockCipherOptions decryption. KBS can be abstracted as one of the following types (but not limited to these):

  • Relying party model with remote attestation
  • Guest FW model with pre-attestation
  • Others: cloud HSM ...

There are two options to implement this abstraction.

Option 1: KBC (Key Broker Client) modularization

[diagram: AA with modular KBCs]

KBS is a platform-specific implementation, so AA needs to define and implement a modularization framework that allows platform providers to communicate with their own KBS infrastructure through a corresponding KBC integrated into AA.

In this scheme, each KBC module needs to implement the following functions:

  • Function 1: implement a platform-specific client for KBS.
    AA doesn't need to care about the details of the communication protocol between KBS and KBC. The KBC selection can be done in this way:

    {
      "op": "keyunwrap",
      "keyunwrapparams": {
        "dc": "kata_cc_attestation_agent:kbc=my_kbc",
        "annotation": "{ \"url\": \"https://$domain:port/api/getkey\", \"keyid\": \"foo\", \"payload\": \"encrypted_PLBCO\" }"
      }
    }
  • Function 2: define and implement the communication protocol between KBS and KBC.
    This includes the application protocol, transport type, API scheme, input and output parameters, etc.

  • Function 3: implement the corresponding attester logic for all potentially supported HW-TEE types.
    AA, in the role defined by the RATS architecture, is responsible for collecting evidence about the TCB status from the attesting environment and reporting it to the verifier or relying party for verification. The purpose is to convince the tenant that the workload is indeed running in a genuine HW-TEE. To establish the binding between the evidence (called a quote in TDX) and a user-defined data structure (aka Enclave Held Data, EHD for short), the hash of the EHD is embedded into the evidence, and the evidence plus the EHD is then sent to the remote peer. Usually, the EHD is a public key used for wrapping a secret.
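The modular KBC framework of Option 1 can be sketched as a trait that each platform provider implements; all names here (`KbcInterface`, `decrypt_payload`, `select_kbc`) are illustrative assumptions, not the actual attestation-agent API:

```rust
/// Each KBC module turns a KBS-specific annotation packet into plain PLBCO bytes.
pub trait KbcInterface {
    fn decrypt_payload(&mut self, annotation: &str) -> Result<Vec<u8>, String>;
}

/// Trivial stand-in for a platform-specific KBC module.
struct SampleKbc;

impl KbcInterface for SampleKbc {
    fn decrypt_payload(&mut self, annotation: &str) -> Result<Vec<u8>, String> {
        // A real KBC would attest to its KBS here and unwrap the payload.
        Ok(annotation.as_bytes().to_vec())
    }
}

/// AA only dispatches on the `kbc=` name from the `dc` parameter.
fn select_kbc(name: &str) -> Result<Box<dyn KbcInterface>, String> {
    match name {
        "sample_kbc" => Ok(Box::new(SampleKbc)),
        other => Err(format!("unknown KBC: {other}")),
    }
}

fn main() {
    let mut kbc = select_kbc("sample_kbc").unwrap();
    let plbco = kbc.decrypt_payload("{\"payload\":\"encrypted_PLBCO\"}").unwrap();
    assert!(!plbco.is_empty());
    println!("decrypted {} bytes", plbco.len());
}
```

With this shape, AA never inspects the annotation itself: it only routes the opaque packet to the selected KBC.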

Option 2: AA-KBS E2E

Explicitly provide an implementation of AA and KBS for kata CC.

In this scheme, AA will eventually implement all the KBS types mentioned above according to the requirements. Functions 2 and 3 become internal details between AA and KBS.

Comparison

  • Compared with option 2, option 1 asks each remote-attestation KBC module to implement all potential HW-TEE attester logic. In fact this is wasteful: the attester logic for a specific HW-TEE only needs to be implemented once.

  • KBS is a platform-specific implementation, so option 1 offers the greatest flexibility compared with option 2, and AA doesn't need to care about any implementation details of KBS (for example, AA doesn't need to parse the annotation data in the input parameter of the UnWrap API).

  • Cloud HSM KBCs are also implementation-specific, so the Cloud HSM KBC in option 1 actually needs to implement a modular subsystem to support different Cloud HSMs.

  • At present, most existing attester implementations are written in languages other than Rust, so option 2 asks AA to integrate potentially unsafe code. This problem is especially relevant for software running in a HW-TEE. Although option 1 has a similar problem, at least the KBC module and AA are separated, and platform providers focusing on security will, in the long term, try their best to implement the KBC module in Rust.

Reference

Collected Suggestions

  • A Status() method to the KBS API would be useful to handle slow remote attestations and attestation failures. - by @jimcadden

  • One extension to the integrity model for layer encryption would be to verify the decrypted PLBCO (e.g., with a digest provided by the KSM) prior to decrypting the layer. Although, this seems like it can be contained to the implementation of a KBC. - by @jimcadden

  • The AA will send the keyID of the KEK to the KBS, and the KBS releases the wrapped KEK to the AA (the AA needs to generate a pub/priv key pair so that the KBS can wrap the KEK using the AA's pub key for protection). Once the AA gets the KEK, it can locally unwrap the LEK and feed it to ocicrypt. The advantage of passing the KEK to the AA is that the KEK is usually shared among multiple layers; once the AA retrieves the KEK, it can cache it to avoid multiple round trips to the KBS to decrypt each LEK. - by @hdxia

  • SEV-SNP does not support pre-attestation in the same way that SEV(-ES) does. One thing I was trying to get across in the previous post is that pre-attestation and SNP attestation both provide the launch measurement to the GOP/KBS in exchange for a key. This should be fairly similar to the TDX approach except that the SEV-SNP measurement has slightly different properties (meaning that we might need additional support to measure containers). - by @fitzthum
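The KEK-caching suggestion above can be sketched with a std-only cache keyed by key ID; `fetch_kek_from_kbs` is a hypothetical stand-in for the real attest-and-unwrap exchange:

```rust
use std::collections::HashMap;

/// Caches unwrapped KEKs so layers sharing a KEK need one KBS round trip.
struct KekCache {
    keks: HashMap<String, Vec<u8>>,
    kbs_round_trips: usize,
}

impl KekCache {
    fn new() -> Self {
        Self { keks: HashMap::new(), kbs_round_trips: 0 }
    }

    fn get_or_fetch(&mut self, key_id: &str) -> Vec<u8> {
        if let Some(kek) = self.keks.get(key_id) {
            return kek.clone(); // cache hit: no KBS traffic
        }
        self.kbs_round_trips += 1;
        let kek = fetch_kek_from_kbs(key_id);
        self.keks.insert(key_id.to_string(), kek.clone());
        kek
    }
}

fn fetch_kek_from_kbs(key_id: &str) -> Vec<u8> {
    // Placeholder: a real KBC would send the key ID, receive the KEK
    // wrapped to AA's public key, and unwrap it locally.
    format!("kek-for-{key_id}").into_bytes()
}

fn main() {
    let mut cache = KekCache::new();
    for _layer in 0..3 {
        cache.get_or_fetch("kek-1"); // three layers share one KEK
    }
    assert_eq!(cache.kbs_round_trips, 1);
    println!("round trips: {}", cache.kbs_round_trips);
}
```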

Different policy.json layout from containers/image

As containers/image published a new field in policy.json to support sigstore signing (including cosign), their format is a little different from ours.

Theirs looks like:

{
    "type":    "sigstoreSigned",
    "keyPath": "/path/to/local/keyring/file",
    "keyData": "base64-encoded-keyring-data",
    "signedIdentity": identity_requirement
}

Ours does not change the type field when the Policy Requirement is a signing scheme. If we support cosign, the format will look like:

{
    "type":    "signedBy",
    "scheme": "cosign",
    "keyPath": "/path/to/local/keyring/file",
    "keyData": "base64-encoded-keyring-data",
    "signedIdentity": identity_requirement
}

The scheme field was introduced in #31. Because type may also take the values insecureAcceptAnything and reject, we introduced the scheme field to express "this policy verifies a signature, and the signing scheme is XXX" rather than extending type.

The question is whether we should modify our existing Policy Requirement layout to follow containers/image's.
Our layout scheme is better because:

  • It is more understandable and extensible for new signing schemes and Policy Requirements.
  • Our PR predates containers/image's.

However, theirs is more popular, and if we carry on with ours there may be compatibility issues.

Originally posted by @Xynnn007 in #39 (comment)

[RFC] Init Time Configuration for image-rs


Policy File for image-rs

The policy file is a critical component of the image-rs container image integrity protection scheme. It contains configuration information such as the public keys to be used to verify container image signatures or container image repo identity. A forged policy file would allow an attacker to inject unauthorized containers into the Confidential Container guest VM.

In the current implementation, the policy file is either part of the guest VM image or downloaded by image-rs from a KBS.

When the policy file is provided as part of the guest VM image, its content is covered by the guest VM image measurement, reflected in Attestation. Any tampering with the expected policy file would be detected by the Relying Party through the Attestation. The drawback of this approach is that a K8S customer would need to customize the guest VM image, typically managed/maintained by the Service Provider, to enforce deployment-specific and/or Pod-specific policy. Service Providers can offer a deployment-time interface to accept and integrate a customer-provided policy file into the guest VM image, before initializing the guest VM for the Confidential Container deployment, but such a design still presents a challenge for the Relying Party and the customer. The measurement of the guest VM image would change based on the policy file integrated, putting a burden on the Relying Party and the customer to track and manage the mapping between the expected measurement value and the deployed Pod.

When the policy file is to be downloaded from a KBS, the authenticity of the policy file provided by the KBS needs to be verified. Typically, this can be accomplished by verifying a signature applied to the policy file by a trusted KBS, or by authenticating the KBS identity as part of establishing a protected transport protocol, such as TLS. Regardless of the authentication solution used, image-rs needs to be configured with a trusted KBS public key. The trusted KBS public key can be integrated into the guest VM image as discussed above. The challenges with regard to the expected measurement value remain.

SEV-SNP and TDX/SGX Init Time Configuration for image-rs

AMD(r) SEV-SNP supports a 256-bit Host_Data field when initializing an SEV-SNP protected guest VM. Host_Data is set by the Host in the SNP_LAUNCH_FINISH command. The SEV-SNP firmware does not interpret this value, but includes it in the SEV-SNP ATTESTATION_REPORT, alongside the Launch Measurement. More details are discussed in the SEV Secure Nested Paging Firmware ABI Specification, sections 8.18 and 7.3.

Similarly, Intel(r) TDX supports a 384-bit host-provided MRCONFIGID, set by the host at TD initialization, for runtime or OS configuration. MRCONFIGID is part of the TDX ATTESTATION_REPORT and does not affect MRTD. More info on TDX MRCONFIGID is available in Architecture Specification: Intel(r) Trust Domain Extensions (Intel(r) TDX) Module, sections 11.2, 20.2.4 and 20.6.5.

Host_Data or MRCONFIGID can be utilized to provide configuration data to the guest VM at VM initialization time, without affecting the measurement value of the guest VM image. The Host can set Host_Data/MRCONFIGID to the SHA-256 hash of the policy file (with zero-padding appended to fill the 384-bit or 512-bit field). After the guest VM is initialized, image-rs running inside the protected guest VM can retrieve the ATTESTATION_REPORT from the TEE HW, and verify that the hash of the policy file provided to it through a Pod deployment-time interface (for example, an Annotation field in the Pod YAML file) matches the value inside the ATTESTATION_REPORT. Only after this verification will image-rs utilize the information inside the provided policy file. The Relying Party would verify that the Host_Data/MRCONFIGID in the ATTESTATION_REPORT matches the expected value, in addition to checking for the known/trusted guest VM image measurement value.
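The check image-rs would perform can be sketched as a std-only comparison of a precomputed SHA-256 digest of the Pod-provided policy file against the report field; the function name and the padded-field handling are illustrative:

```rust
/// Compare a policy file's SHA-256 digest against the Host_Data /
/// MRCONFIGID field of the attestation report. The field carries the
/// 32-byte digest followed by zero padding up to the field width.
fn policy_hash_matches(report_field: &[u8], policy_sha256: &[u8; 32]) -> bool {
    report_field.len() >= 32
        && report_field[..32] == policy_sha256[..]
        && report_field[32..].iter().all(|b| *b == 0)
}

fn main() {
    let digest = [0xabu8; 32]; // stand-in for sha256(policy file)

    // 384-bit MRCONFIGID: digest + 16 zero bytes of padding.
    let mut mrconfigid = [0u8; 48];
    mrconfigid[..32].copy_from_slice(&digest);
    assert!(policy_hash_matches(&mrconfigid, &digest));

    // A tampered policy file yields a different digest and fails.
    assert!(!policy_hash_matches(&mrconfigid, &[0xcd; 32]));
    println!("policy hash check ok");
}
```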

Intel(r) SGX 2.0 also supports a 512-bit OS-provided CONFIGID and includes it in the SGX REPORT. So enclave-cc can also utilize the same mechanism for image-rs policy file configuration.

For TEEs that do not support init time configuration, integrating the policy file in the guest VM image is still required.

Support different signature schemes.

As mentioned in #6, there are multiple image signing and verification protocols/solutions available now, and there is no standardization work for image signing yet. So in the image-rs of confidential containers, we should support different signature mechanisms with a modular implementation.

In order to achieve this goal, we should first refactor the current signature verification function (#5) and modularize it to provide scalability for supporting more signature schemes. Specifically, the functions that need modularization include:

  1. Getting signature: In different signature schemes, the storage and distribution methods of signature are different.
  2. Parsing signature payload: Different signature schemes may use different signature payload formats.
  3. Cryptography signature verification: Different signature schemes may use different signature algorithms.
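The three functions above map naturally onto a trait that each scheme implements; the trait and method names below are illustrative assumptions, not the actual image-rs API:

```rust
/// Modular interface for an image signature scheme.
pub trait SignScheme {
    /// 1. Getting signatures: storage/distribution differs per scheme.
    fn get_signatures(&self, image_reference: &str) -> Result<Vec<Vec<u8>>, String>;
    /// 2. Parsing the signature payload: format differs per scheme.
    fn parse_payload(&self, signature: &[u8]) -> Result<String, String>;
    /// 3. Cryptographic verification: algorithm differs per scheme.
    fn verify(&self, signature: &[u8], public_key: &[u8]) -> Result<(), String>;
}

/// Toy scheme so the dispatch can be exercised end to end.
struct DummyScheme;

impl SignScheme for DummyScheme {
    fn get_signatures(&self, image_reference: &str) -> Result<Vec<Vec<u8>>, String> {
        Ok(vec![format!("sig:{image_reference}").into_bytes()])
    }
    fn parse_payload(&self, signature: &[u8]) -> Result<String, String> {
        String::from_utf8(signature.to_vec()).map_err(|e| e.to_string())
    }
    fn verify(&self, _signature: &[u8], _public_key: &[u8]) -> Result<(), String> {
        Ok(()) // a real scheme checks the cryptographic signature here
    }
}

fn scheme_for(name: &str) -> Option<Box<dyn SignScheme>> {
    match name {
        "dummy" => Some(Box::new(DummyScheme)),
        _ => None, // e.g. "simple-signing" or "cosign" would slot in here
    }
}

fn main() {
    let scheme = scheme_for("dummy").expect("scheme registered");
    let sigs = scheme.get_signatures("registry.example.com/app:v1").unwrap();
    assert!(scheme.verify(&sigs[0], b"pubkey").is_ok());
    println!("payload: {}", scheme.parse_payload(&sigs[0]).unwrap());
}
```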

After modularizing the above functions, we should add more signature schemes to support the verification of images signed by different schemes, such as:

  1. Images signed with the Red Hat simple signing scheme.
  2. Images signed with the cosign signature tool.
  3. Images signed with the Alibaba Cloud ACR image signing scheme.
  4. Etc.

Resolve missing URI scheme issue

Writing tests for confidential-containers/attestation-agent#29 uncovered the fact that the KBS URI is not technically a URI since it doesn't include a scheme.

We need to decide which scheme is appropriate. We're using ttrpc, so is it http, https, h2c, h2, or something else?

A minimal diff once we've decided is something like:

diff --git a/Cargo.toml b/Cargo.toml
index 1da653a..5200469 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -22,6 +22,7 @@ lazy_static = "1.4.0"
 string-error = "0.1.0"
 foreign-types = "0.5.0"
 openssl = { version = "0.10", optional = true, features = ["vendored"]}
+url = "2.2.2"
 
 [build-dependencies]
 tonic-build = "0.5"
diff --git a/src/grpc/mod.rs b/src/grpc/mod.rs
index 51820bf..dec2e94 100644
--- a/src/grpc/mod.rs
+++ b/src/grpc/mod.rs
@@ -28,6 +28,7 @@ const ERR_ANNOTATION_NOT_BASE64: &str = "annotation is not base64 encoded";
 const ERR_DC_EMPTY: &str = "missing Dc value";
 const ERR_KBC_KBS_NOT_BASE64: &str = "KBC/KBS pair not base64 encoded";
 const ERR_KBC_KBS_NOT_FOUND: &str = "KBC/KBS pair not found";
+const ERR_KBS_URI_INVALID: &str = "KBS URI is invalid";
 const ERR_NO_KBC_NAME: &str = "missing KBC name";
 const ERR_NO_KBS_URI: &str = "missing KBS URI";
 const ERR_WRONG_DC_PARAM: &str = "Dc parameter not destined for agent";
@@ -222,6 +223,9 @@ fn str_to_kbc_kbs(value: &str) -> Result<(String, String)> {
             return Err(anyhow!(ERR_NO_KBS_URI));
         }
 
+        let _ =
+            url::Url::parse(kbs_uri).map_err(|e| anyhow!("{}: {:?}", ERR_KBS_URI_INVALID, e))?;
+
         Ok((kbc_name.to_string(), kbs_uri.to_string()))
     } else {
         Err(anyhow!(ERR_KBC_KBS_NOT_FOUND))
@@ -688,6 +692,10 @@ mod tests {
                 value: "::foo",
                 result: Err(anyhow!(ERR_NO_KBC_NAME)),
             },
+            TestData {
+                value: "foo::bar",
+                result: Err(anyhow!(ERR_KBS_URI_INVALID)),
+            },
             TestData {
                 value: "foo::https://foo.bar.com/?silly=yes&colons=bar:::baz:::wibble::",
                 result: Ok((

Unit test fails with a SEGV fault

root@d277f7789829:/image# cargo test
Compiling image-rs v0.1.0 (/image)
Finished test [unoptimized + debuginfo] target(s) in 2m 59s
Running unittests (target/debug/deps/image_rs-873fc8095aed68ff)

running 9 tests
test config::tests::test_image_config ... ok
test decoder::tests::test_uncompressed_decode ... ok
test config::tests::test_image_config_from_file ... ok
test decoder::tests::test_zstd_decode ... ok
test decoder::tests::test_gzip_decode ... ok
test bundle::tests::test_bundle_create_config ... ok
test unpack::tests::test_unpack ... ok
test image::tests::test_pull_image ... FAILED
error: test failed, to rerun pass '--lib'

Caused by:
process didn't exit successfully: /image/target/debug/deps/image_rs-873fc8095aed68ff (signal: 11, SIGSEGV: invalid memory reference)

offline_fs_kbs: error[E0308]: mismatched types

[root@xxx sample_kbs]# cargo run --release --features offline_fs_kbs -- --grpc_sock 127.0.0.1:50000
   Compiling sample_kbs v1.0.0 (/root/yanrong/attestation-agent/sample_kbs)
error[E0308]: mismatched types
  --> src/grpc/mod.rs:72:9
   |
72 |         Ok(Response::new(reply))
   |         ^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `Status`, found struct `anyhow::Error`
   |
   = note: expected enum `std::result::Result<_, Status>`
              found enum `std::result::Result<_, anyhow::Error>`
note: return type inferred to be `std::result::Result<tonic::Response<KeyProviderKeyWrapProtocolOutput>, Status>` here
  --> src/grpc/mod.rs:29:69
   |
29 |       ) -> Result<Response<KeyProviderKeyWrapProtocolOutput>, Status> {
   |  _____________________________________________________________________^
30 | |         let input_string =
31 | |             String::from_utf8(request.into_inner().key_provider_key_wrap_protocol_input).unwrap();
32 | |         debug!("WrapKey API Request Input: {}", input_string);
...  |
72 | |         Ok(Response::new(reply))
73 | |     }
   | |_____^

error: aborting due to previous error

For more information about this error, try `rustc --explain E0308`.
error: could not compile `sample_kbs`

To learn more, run the command again with --verbose.

[Security Risk] The distribution risk of security policy config file.

Under the security mechanism of image-rs, the role of the image security policy file is to restrict image pulling and ensure that pulled and deployed images come from repos allowed by the owner, with some protected by signatures.

In the current implementation, the default location of the policy file is the /run directory. When image-rs pulls an image, it first checks whether the policy file already exists in the /run directory. If it does not exist, it is obtained from the KBS through remote attestation and written to the /run directory. This mechanism needs to ensure that the pod's /run directory is backed by the HW-TEE's encrypted memory; otherwise there is the following risk: a malicious system administrator configures the pod in advance to mount the /run directory onto a specified rootfs directory, and then places a malicious policy file in that rootfs in advance, preventing the normal distribution of the owner's policy files.

I'm not sure how to ensure that the pod's /run directory is backed by the HW-TEE's encrypted memory, or whether there is a better storage location that guarantees this?

KBS URL and KBC name mapping configuration

Previously we discussed and reached agreement that the KBS URL and KBC name mapping used by the Attestation Agent should be passed via the key provider protocol. So this issue is targeted at discussing how to configure and store this mapping.

According to @sameo's suggestions, we have the following 3 options:

  • Pass the selected KBC as a pod annotation. The kata-agent would get that information and then pass it down to the AA.
  • Use a guest owner defined and measured mapping between platforms (TDX, SEV, etc) and KBC. This could be part of the kata-agent configuration file and would leave it to the kata-agent to detect the hardware platform and then select a KBC.
  • Use a guest owner defined and measured mapping between KBS hosts and KBC. This is fairly similar to the previous option, in the sense that the kata-agent would parse that information and select a KBC to be passed down to the AA.

And according to @fitzthum, option 1 is simple and should be the better one to start from.

But we still face the following questions:

  • What is the detailed format of this mapping configuration?
  • Who will parse this mapping configuration and pass it to AA via the key provider protocol?
  • How do we embed the kbc_name and kbs_url parameters into the current key provider protocol?
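For the first question, one hypothetical shape for option 1 (a pod annotation) could look like the following; the annotation keys and values are purely illustrative, not an agreed format:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: confidential-pod
  annotations:
    # Illustrative only: selects the AA's KBC module and its KBS endpoint.
    example.confidentialcontainers.org/aa_kbc_params: "my_kbc::https://kbs.example.com:8080"
spec:
  containers:
    - name: app
      image: registry.example.com/app:encrypted
```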

Support native build on s390x platform in the Makefile

This issue is to support an ARCH parameter in the attestation agent Makefile and to define the KBC modules and rustc targets we support when building natively on an s390x machine.

Building natively on an s390x machine, I've verified that the attestation agent with the offline_fs_kbc, offline_sev_kbc and sample_kbc KBC modules can be built using the s390x-unknown-linux-gnu rustc target.

The build requires the protoc binary from the Protocol Buffers project (https://github.com/protocolbuffers/protobuf) to be in the build machine's path. Using the pre-built s390x binary for version v21.1 of Protocol Buffers worked.

For the rustc targets, s390x-unknown-linux-gnu is a Tier 2 supported target and s390x-unknown-linux-musl is a Tier 3 supported target. Because s390x-unknown-linux-musl is in a less supported tier, I propose we only support building with the s390x-unknown-linux-gnu rustc target on s390x, so the Makefile should reject an attempt to build the attestation agent on an s390x machine when LIBC=musl is set.

https://doc.rust-lang.org/nightly/rustc/platform-support.html

Build error: incompatibility with hmac change

The Cargo.toml states that it supports hmac ">=0.11", but the code uses the NewMac trait, which was removed in v0.12.0.

This results in the error:

error[E0432]: unresolved import `hmac::NewMac`
 --> src/blockcipher/aes_ctr.rs:8:23
  |
8 | use hmac::{Hmac, Mac, NewMac};
  |                       ^^^^^^ no `NewMac` in the root

error[E0308]: mismatched types
  --> src/blockcipher/aes_ctr.rs:54:41
   |
54 |                     hmac.clone().verify(&self.exp_hmac).map_err(|_| {
   |                                         ^^^^^^^^^^^^^^ expected struct `GenericArray`, found struct `Vec`
   |
   = note: expected reference `&GenericArray<u8, UInt<UInt<UInt<UInt<UInt<UInt<UTerm, B1>, B0>, B0>, B0>, B0>, B0>>`
              found reference `&Vec<u8>`

Some errors have detailed explanations: E0308, E0432.
For more information about an error, try `rustc --explain E0308`.
error: could not compile `ocicrypt-rs` due to 2 previous errors

when running cargo build --all-features

Whether the trait should have been removed in a version change from 0.11 to 0.12 is another issue, but in the meantime it would be good to fix the build here.

Add integration test for signature verification with offline_fs_kbc

Once we have implemented support for the get_resource endpoint in confidential-containers/guest-components#235, we can add an integration test to check that it works correctly with image-rs and to demonstrate how to set it up.

If #43 has been completed by this point, this should probably be very simple to implement (or vice versa).

This might lead to a discussion about unifying some of the local KBC modules, but that's probably a longer-term discussion after we have the code in and working!
