This section will describe hardware and software prerequisites, installing Confidential Containers with Helm charts, verifying the installation, and running a pod with Confidential Containers.
Getting Started
- 1: Prerequisites
- 1.1: Hardware Requirements
- 1.1.1: CoCo without Hardware
- 1.1.2: Secure Execution Host Setup
- 1.1.3: SEV-SNP Host Setup
- 1.1.4: SGX Host Setup
- 1.1.5: TDX Host Setup
- 1.2: Cloud Hardware
- 1.3: Cluster Setup
- 2: Installation
- 2.1: Customization
- 3: Simple Workload
- 4: Securing Your Workload
1 - Prerequisites
This section will describe hardware and software prerequisites for installing Confidential Containers with Helm charts.
1.1 - Hardware Requirements
Confidential Computing is a hardware technology. Confidential Containers supports multiple hardware platforms and can leverage cloud hardware. If you do not have bare metal hardware and will deploy Confidential Containers with a cloud integration, continue to the cloud section.
You can also run Confidential Containers without hardware support for testing or development.
The Confidential Containers Helm charts, which are described in the following section, do not set up the host kernel, firmware, or system configuration. Before installing Confidential Containers on a bare metal system, make sure that your node can start confidential VMs.
This section will describe the configuration that is required on the host.
Regardless of your platform, it is recommended to have at least 8GB of RAM and 4 cores on your worker node.
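As a quick sanity check, the commands below (assuming a Linux worker node) report the core count and available memory:

```bash
# The worker node should have at least 4 cores and 8GB of RAM
nproc      # number of CPU cores
free -h    # total and available memory
```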
1.1.1 - CoCo without Hardware
For testing or development, Confidential Containers can be deployed without any hardware support.
This is referred to as a coco-dev or non-tee deployment.
A coco-dev deployment functions the same way as Confidential Containers
with an enclave, but a non-confidential VM is used instead of a confidential VM.
This does not provide any security guarantees, but it can be used for testing.
No additional host configuration is required as long as the host supports virtualization.
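To confirm that the host supports virtualization, a check along these lines (assuming an x86_64 Linux host with KVM) is usually enough:

```bash
# CPU virtualization extensions; a non-zero count indicates support
grep -cE 'vmx|svm' /proc/cpuinfo

# The KVM device should exist and be accessible
ls -l /dev/kvm
```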
1.1.2 - Secure Execution Host Setup
Platform Setup
This document outlines the steps to configure a host machine to support IBM Secure Execution on IBM Z & LinuxONE platforms. This capability enables enhanced security for workloads by taking advantage of protected virtualization. Ensure the host meets the necessary hardware and software requirements before proceeding.
Hardware Requirements
Supported hardware includes these systems:
- IBM z15 or newer models
- IBM LinuxONE III or newer models
Software Requirements
Additionally, the system must meet specific CPU and kernel configuration requirements. Follow the steps below to verify and enable the Secure Execution capability.
- Verify Protected Virtualization Support in the Kernel

  Run the following command to ensure the kernel supports protected virtualization:

  ```bash
  cat /sys/firmware/uv/prot_virt_host
  ```

  A value of 1 indicates support.

- Check Ultravisor Memory Reservation

  Confirm that the ultravisor has reserved memory during the current boot:

  ```bash
  sudo dmesg | grep -i ultravisor
  ```

  Example output:

  ```
  [    0.063630] prot_virt.f9efb6: Reserving 98MB as ultravisor base storage
  ```

- Validate the Secure Execution Facility Bit

  Ensure the required facility bit (158) is present:

  ```bash
  cat /proc/cpuinfo | grep 158
  ```

  The facilities field should include 158.
If any required configuration is missing, contact your cloud provider to enable the Secure Execution capability for the machine. Alternatively, if you have administrative privileges and the facility bit (158) is set, you can enable protected virtualization yourself by modifying the kernel parameters and rebooting the system:
- Modify Kernel Parameters

  Update the kernel configuration to include the prot_virt=1 parameter:

  ```bash
  sudo sed -i 's/^\(parameters.*\)/\1 prot_virt=1/g' /etc/zipl.conf
  ```

- Update the Bootloader and Reboot the System

  Apply the changes to the bootloader and reboot the system:

  ```bash
  sudo zipl -V
  sudo systemctl reboot
  ```

- Repeat the Verification Steps

  After rebooting, repeat the verification steps above to ensure Secure Execution is properly enabled.
Additional Notes
- The steps to enable Secure Execution might vary depending on the Linux distribution. Consult your distribution’s documentation if necessary.
- For more detailed information about IBM Secure Execution for Linux, see also the official documentation at IBM Secure Execution for Linux.
1.1.3 - SEV-SNP Host Setup
Platform Setup
The host BIOS and kernel must be capable of supporting AMD SEV-SNP and the host must be configured accordingly.
The SEV firmware must be at least version 1.55 in order to support version 3 of the Attestation Report. The latest SEV firmware version is available on AMD’s SEV Developer Webpage. It can also be updated via a platform OEM BIOS update.
The host kernel must be equal to or later than upstream version 6.16.1.
To build just the upstream compatible host kernel, use the Confidential Containers fork of AMDESE AMDSEV. Individual components can be built by running the following command:
./build.sh kernel host --install
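Once the firmware and kernel are in place, one quick way to confirm that the host has SEV-SNP enabled is to check the kernel log and the KVM module parameter. The exact log messages vary by kernel version, and the module parameter may not be exposed on all kernels:

```bash
# Kernel log entries mentioning SEV/SNP initialization
sudo dmesg | grep -i -e sev -e snp

# 'Y' indicates the kvm_amd module was loaded with SEV-SNP support (if exposed by your kernel)
cat /sys/module/kvm_amd/parameters/sev_snp
```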
1.1.4 - SGX Host Setup
TODO
1.1.5 - TDX Host Setup
Platform Setup
Additional Notes
- For more detailed information about Intel TDX, see the official documentation.
1.2 - Cloud Hardware
If you are using bare metal confidential hardware, you can skip this section.
Confidential Containers can be deployed via confidential computing cloud offerings. The main method of doing this is to use the cloud-api-adaptor, also known as “peer pods.”
Some clouds also support starting confidential VMs inside of non-confidential VMs. With Confidential Containers, these offerings can be used as if they were bare metal.
1.3 - Cluster Setup
Confidential Containers requires Kubernetes. A cluster must be installed before installing the Helm charts. Many different clusters can be used, but they should meet the following requirements:
- The minimum Kubernetes version is 1.24
- The cluster must use `containerd` version 1.7 or newer. Note: `cri-o` is not tested with the Helm charts for bare metal deployments.
- At least one node has the label `node.kubernetes.io/worker`.
- SELinux is not enabled.
- Helm 3.8+ is installed.
Kind and Minikube are not tested anywhere in the project and are not recommended, as QEMU is known not to work with them.
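A quick way to check these requirements against an existing cluster (the label command is only needed if no node carries the worker label yet):

```bash
# Kubernetes version and container runtime of each node
kubectl get nodes -o wide

# At least one node should carry the worker label; add it if missing
kubectl get nodes -l node.kubernetes.io/worker
kubectl label node <NODE_NAME> node.kubernetes.io/worker=

# Helm client version (should be 3.8 or newer)
helm version --short
```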
2 - Installation
Make sure you have completed the prerequisites before installing Confidential Containers.
Install CoCo with Helm
Install the CoCo runtime using the Helm chart from the Confidential Containers charts repository.
Install the latest released version:
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
--namespace coco-system \
--create-namespace
To install a specific version, substitute <VERSION> with the desired release version:
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
--version <VERSION> \
--namespace coco-system \
--create-namespace
For example, to install version 0.18.0:
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
--version 0.18.0 \
--namespace coco-system \
--create-namespace
Wait until each pod has the STATUS of Running.
kubectl get pods -n coco-system --watch
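Alternatively, instead of watching, you can block until all pods report ready (the timeout value here is just an example):

```bash
kubectl wait --for=condition=Ready pods --all -n coco-system --timeout=300s
```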
For platform-specific installation options (s390x, peer-pods, etc.) and advanced configuration, see the charts repository documentation.
Verify Installation
See if the expected runtime classes were created.
kubectl get runtimeclass
The available runtimeclasses depend on the architecture:
On x86_64:

| runtimeclass | Description |
|---|---|
| `kata-qemu-coco-dev` | Development/testing runtime |
| `kata-qemu-coco-dev-runtime-rs` | Development/testing runtime (Rust-based) |
| `kata-qemu-snp` | AMD SEV-SNP |
| `kata-qemu-tdx` | Intel TDX |
| `kata-qemu-nvidia-gpu-snp` | NVIDIA GPU with AMD SEV-SNP protection |
| `kata-qemu-nvidia-gpu-tdx` | NVIDIA GPU with Intel TDX protection |

On s390x:

| runtimeclass | Description |
|---|---|
| `kata-qemu-coco-dev` | Development/testing runtime |
| `kata-qemu-coco-dev-runtime-rs` | Development/testing runtime (Rust-based) |
| `kata-qemu-se` | IBM Secure Execution |
| `kata-qemu-se-runtime-rs` | IBM Secure Execution (Rust-based) |

With peer pods:

| runtimeclass | Description |
|---|---|
| `kata-remote` | Peer-pods |
Uninstall
To uninstall Confidential Containers and delete the coco-system namespace, run:
helm uninstall coco --namespace coco-system
kubectl delete namespace coco-system
2.1 - Customization
The Helm chart can be customized by passing additional parameters to the helm install command.
Important Notes
- Node Selectors: When setting node selectors with dots in the key, escape them: `node-role\.kubernetes\.io/worker`
- Namespace: All examples use the `coco-system` namespace. Adjust as needed for your environment.
- Architecture: The default architecture is x86_64. Other architectures must be explicitly specified.
- Comma Escaping: When using `--set` with values containing commas, escape them with `\,` (see the sketch after this list).
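As a sketch of the comma escaping rule, assuming a value like the snapshotter mapping used in the file-based example further below:

```bash
# The escaped commas keep Helm from splitting the value into separate --set entries
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
  --set kata-as-coco-runtime.env.snapshotterHandlerMapping="qemu-coco-dev:nydus\,qemu-se:nydus" \
  --namespace coco-system \
  --create-namespace
```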
Customizing deployment
You can combine architecture values files (passed with -f) with --set flags for customization.
Using --set flags
To customize the installation using --set flags, run one of the following commands based on your architecture:
# For x86_64
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
--set kata-as-coco-runtime.debug=true \
--namespace coco-system \
--create-namespace
# For s390x
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
-f https://raw.githubusercontent.com/confidential-containers/charts/main/values/kata-s390x.yaml \
--set kata-as-coco-runtime.debug=true \
--namespace coco-system \
--create-namespace
Parameters that are commonly customized (use --set flags):
| Parameter | Description | Default |
|---|---|---|
| `kata-as-coco-runtime.imagePullPolicy` | Image pull policy | `Always` |
| `kata-as-coco-runtime.imagePullSecrets` | Image pull secrets for private registry | `[]` |
| `kata-as-coco-runtime.k8sDistribution` | Kubernetes distribution (k8s, k3s, rke2, k0s, microk8s) | `k8s` |
| `kata-as-coco-runtime.nodeSelector` | Node selector for deployment | `{}` |
| `kata-as-coco-runtime.debug` | Enable debug logging | `false` |
Structured Configuration (Kata Containers)
The chart uses Kata Containers’ structured configuration format for TEE shims. Parameters set by architecture-specific kata runtime values files:
| Parameter | Description | Set by values/kata-*.yaml |
|---|---|---|
| `architecture` | Architecture label for NOTES | `x86_64` or `s390x` |
| `kata-as-coco-runtime.snapshotter.setup` | Array of snapshotters to set up (e.g., `["nydus"]`) | Architecture-specific |
| `kata-as-coco-runtime.shims.<shim-name>.enabled` | Enable/disable specific shim (e.g., `qemu-snp`, `qemu-tdx`, `qemu-se`, `qemu-coco-dev`) | Architecture-specific |
| `kata-as-coco-runtime.shims.<shim-name>.supportedArches` | List of architectures supported by the shim | Architecture-specific |
| `kata-as-coco-runtime.shims.<shim-name>.containerd.snapshotter` | Snapshotter to use for containerd (e.g., `nydus`, `""` for none) | Architecture-specific |
| `kata-as-coco-runtime.shims.<shim-name>.containerd.forceGuestPull` | Enable experimental force guest pull | `false` |
| `kata-as-coco-runtime.shims.<shim-name>.crio.guestPull` | Enable guest pull for CRI-O | Architecture-specific |
| `kata-as-coco-runtime.shims.<shim-name>.agent.httpsProxy` | HTTPS proxy for guest agent | `""` |
| `kata-as-coco-runtime.shims.<shim-name>.agent.noProxy` | No proxy settings for guest agent | `""` |
| `kata-as-coco-runtime.runtimeClasses.enabled` | Create runtimeclass resources | `true` |
| `kata-as-coco-runtime.runtimeClasses.createDefault` | Create default k8s runtimeclass | `false` |
| `kata-as-coco-runtime.runtimeClasses.defaultName` | Name for default runtimeclass | `"kata"` |
| `kata-as-coco-runtime.defaultShim.<arch>` | Default shim per architecture (e.g., `amd64: qemu-snp`) | Architecture-specific |
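To illustrate how these structured parameters fit together, here is a minimal, hypothetical values snippet for an x86_64 SNP setup; the values your cluster actually needs come from the architecture values files in the charts repository:

```yaml
# Hypothetical snippet combining the structured parameters listed above
architecture: x86_64
kata-as-coco-runtime:
  snapshotter:
    setup: ["nydus"]
  shims:
    qemu-snp:
      enabled: true
      containerd:
        snapshotter: nydus
  runtimeClasses:
    enabled: true
  defaultShim:
    amd64: qemu-snp
```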
Additional Parameters (kata-deploy options)
These inherit from kata-deploy defaults but can be overridden:
| Parameter | Description | Default |
|---|---|---|
| `kata-as-coco-runtime.image.reference` | Kata deploy image | `quay.io/kata-containers/kata-deploy` |
| `kata-as-coco-runtime.image.tag` | Kata deploy image tag | Chart’s application version |
| `kata-as-coco-runtime.env.installationPrefix` | Installation path prefix | `""` (uses kata-deploy defaults) |
| `kata-as-coco-runtime.env.multiInstallSuffix` | Suffix for multiple installations | `""` |
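These kata-deploy options can be overridden at install time just like any other parameter; for example (the tag below is a placeholder):

```bash
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
  --set kata-as-coco-runtime.image.tag=<TAG> \
  --namespace coco-system \
  --create-namespace
```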
See quickstart for complete customization examples and usage.
Using file-based values
Prepare a my-values.yaml file in one of the following ways:

- Using the latest default values downloaded from the chart:

  ```bash
  helm show values oci://ghcr.io/confidential-containers/charts/confidential-containers > my-values.yaml
  ```

- Using a newly created my-values.yaml file with your customizations, e.g., for s390x with debug and a node selector:

  ```yaml
  architecture: s390x
  kata-as-coco-runtime:
    env:
      debug: "true"
      shims: "qemu-coco-dev qemu-se"
      snapshotterHandlerMapping: "qemu-coco-dev:nydus,qemu-se:nydus"
      agentHttpsProxy: "http://proxy.example.com:8080"
    nodeSelector:
      node-role.kubernetes.io/worker: ""
  ```

  A list of custom values examples can be found in the examples-custom-values.
Install the chart using your custom values file:
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
-f my-values.yaml \
--namespace coco-system \
--create-namespace
Multiple combined customization options
Customizations using --set flags can be combined with file-based values using -f.
The example below selects the s390x architecture, enables debug logging, sets a node selector for worker nodes, and sets the Kubernetes distribution to k3s.
helm install coco oci://ghcr.io/confidential-containers/charts/confidential-containers \
-f https://raw.githubusercontent.com/confidential-containers/charts/main/values/kata-s390x.yaml \
--set kata-as-coco-runtime.env.debug=true \
--set kata-as-coco-runtime.nodeSelector."node-role\.kubernetes\.io/worker"="" \
--set kata-as-coco-runtime.k8sDistribution=k3s \
--namespace coco-system \
--create-namespace
3 - Simple Workload
Creating a sample Confidential Containers workload
Once you’ve used the Helm charts to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the kata-qemu-coco-dev runtime class which uses CoCo without hardware support.
Initially we will try this with an unencrypted container image.
In this example, we will be using the nginx image as described in the following YAML:
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
annotations:
io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
containers:
- name: nginx
image: nginx:1.29.4
dnsPolicy: ClusterFirst
runtimeClassName: kata-qemu-coco-dev
For the most basic workloads, setting the runtimeClassName and runtime-handler annotation is usually
the only requirement for the pod YAML.
Create a pod YAML file as previously described (we named it nginx.yaml).
Create the workload:
kubectl apply -f nginx.yaml
Output:
pod/nginx created
Ensure the pod was created successfully (in running state):
kubectl get pods
Output:
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
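As an optional smoke test (the local port below is arbitrary), you can port-forward to the pod and confirm that nginx responds from inside the confidential guest:

```bash
kubectl port-forward pod/nginx 8080:80 &
curl -s http://localhost:8080 | head -n 4
```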
4 - Securing Your Workload
Now that you’ve deployed a simple Confidential Containers workload, let’s explore how to secure it for production use. This page covers the key decisions you’ll need to make:
- Selecting the appropriate runtime class for your hardware
- Understanding and configuring policies to protect your workload
- Leveraging additional features for enhanced security
Selecting the Right Runtime Class
In the previous example, we used kata-qemu-coco-dev, which runs CoCo without hardware support for testing purposes. For production deployments, you need to select a runtime class that matches your actual TEE hardware.
Runtime Class Selection Guide
For Development and Testing:

- `kata-qemu-coco-dev` - Testing without TEE hardware (⚠️ provides no security guarantees)

For Production on Bare Metal x86_64:

- `kata-qemu-tdx` - Intel TDX (Trust Domain Extensions)
- `kata-qemu-snp` - AMD SEV-SNP (Secure Encrypted Virtualization)
- `kata-qemu-sev` - AMD SEV (older generation)
- `kata-qemu-nvidia-gpu-tdx` - Intel TDX with NVIDIA GPU support
- `kata-qemu-nvidia-gpu-snp` - AMD SNP with NVIDIA GPU support

For Production on s390x:

- `kata-qemu-se` - IBM Secure Execution

For Cloud Deployments (Peer Pods):

- `kata-remote` - Cloud API Adaptor for AWS, Azure, GCP, etc.
Example: Moving to Production
Here’s how to update your pod to use actual TEE hardware:
For Intel TDX:
apiVersion: v1
kind: Pod
metadata:
name: nginx-production
spec:
runtimeClassName: kata-qemu-tdx
containers:
- image: bitnami/nginx:1.22.0
name: nginx
The runtimeClassName field is sufficient. Some examples also include the io.containerd.cri.runtime-handler annotation for compatibility with older configurations, but it’s redundant when using RuntimeClass.
For Intel TDX with an NVIDIA Hopper GPU:
apiVersion: v1
kind: Pod
metadata:
name: cuda-vectoradd-kata
namespace: default
annotations:
io.katacontainers.config.hypervisor.kernel_params: "nvrc.smi.srs=1"
spec:
runtimeClassName: kata-qemu-nvidia-gpu-tdx
restartPolicy: Never
containers:
- name: cuda-vectoradd
image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda12.5.0-ubuntu22.04"
resources:
limits:
nvidia.com/pgpu: "1"
memory: 16Gi
The runtime class you choose must match your hardware capabilities. Using a mismatched runtime class (e.g., kata-qemu-tdx on AMD hardware) will cause pod creation to fail.
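If a pod fails to start, two useful first checks are to list the runtime classes actually installed on the cluster and to read the pod's events (the pod name below matches the example above):

```bash
# Runtime classes available on this cluster
kubectl get runtimeclass

# The Events section usually explains runtime-class or hypervisor launch failures
kubectl describe pod nginx-production
```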
Understanding CoCo Policies
Confidential Containers uses three types of policies to protect your workload at different layers. Understanding all three is crucial for securing production deployments.
The Three Policy Types
| Policy Type | Where Enforced | What It Controls | Configured Via |
|---|---|---|---|
| Agent Policy | Inside the TEE by Kata Agent | Which operations the agent can perform (create containers, exec into pods, etc.) | Pod annotation with init-data |
| Resource Policy | By Trustee KBS | Which secrets are released to which workloads | KBS Client or Trustee Operator |
| Attestation Policy | By Trustee AS | How hardware evidence is evaluated (what TCB is acceptable) | KBS Client or Trustee Operator |
1. Agent Policy (Inside the TEE)
The agent policy controls what operations the Kata agent can perform inside your TEE. This is your first line of defense against malicious or compromised Kubernetes control planes.
Example use cases:
- Prevent `kubectl exec` into production pods
- Restrict which container images can be launched
- Control which commands can be executed
Quick example of a restrictive agent policy:
package agent_policy
import rego.v1
default CreateContainerRequest := false
default ExecProcessRequest := false
# Only allow specific image digests
CreateContainerRequest if {
input.storages[0].source == "docker.io/library/nginx@sha256:abc123..."
}
Agent policies are embedded in the Init-Data configuration file, which also provides additional configuration such as where to find Trustee.
Learn more: Agent Policies and Init-Data
2. Resource Policy (At the KBS)
Resource policies control which secrets are released under what conditions. They inspect the attestation token from your workload to make decisions.
Example use cases:
- Verify the workload is using a specific agent policy (via Init-Data hash)
- Only release database credentials to attesting TDX guests
- Require specific trust levels (affirming vs contraindicated)
- Different secrets for different platforms (TDX vs SNP)
Example: Checking Init-Data hash
When you provide Init-Data in your pod (with an agent policy), the Attestation Service verifies it and includes the hash in the token. Your resource policy can verify the specific Init-Data hash to ensure the exact agent policy was used:
package policy
import rego.v1
default allow = false
# Only release secrets to workloads with the expected Init-Data hash
allow if {
input["submods"]["cpu0"]["ear.status"] == "affirming"
# Verify the specific Init-Data hash (includes agent policy + config)
input["submods"]["cpu0"]["ear.veraison.annotated-evidence"]["init_data"] == "expected-hash-here"
}
Use the hash algorithm you specified in the initdata.toml file to calculate the expected value. For example, with TDX you would have specified sha384, so at a command line you could run:
sha384sum initdata.toml
Learn more: Resource Policies
3. Attestation Policy (At the Attestation Service)
Attestation policies define how hardware evidence is evaluated - what measurements are acceptable, which reference values to compare against, and how to calculate trust vectors.
Example use cases:
- Define acceptable firmware versions
- Specify required security levels for different workloads
- Map hardware measurements to trust claims
Learn more: Attestation Policies
CoCo ships with sensible default attestation policies for TDX and SNP. For most users, you only need to provide reference values - the policy is already configured appropriately.
Additional Security Features
Once you’ve configured the basics, explore these features for enhanced security: Features Overview
Quick Checklist for Production
Before deploying to production, ensure you’ve addressed:
- Selected the correct runtime class for your hardware
- Generated and embedded an agent policy appropriate for your workload
- Configured resource policies in your KBS
- Provisioned reference values to the attestation service
Next Steps
- Deploy Trustee: Trustee Installation to enable attestation
- Advanced Policies: Deep dive into all policy types
- Cloud Deployment: Cloud Examples for AWS, Azure, GCP