Getting Started

High level overview of Confidential Containers

This section will describe hardware and software prerequisites, installing Confidential Containers with an operator, verifying the installation, and running a pod with Confidential Containers.

1 - Prerequisites

Requirements for deploying Confidential Containers

This section describes the hardware, cloud, and cluster prerequisites for deploying Confidential Containers.

1.1 - Hardware Requirements

Hardware requirements for deploying Confidential Containers

Confidential Computing is a hardware technology. Confidential Containers supports multiple hardware platforms and can leverage cloud hardware. If you do not have bare metal hardware and will deploy Confidential Containers with a cloud integration, continue to the cloud section.

You can also run Confidential Containers without hardware support for testing or development.

The Confidential Containers operator, which is described in the following section, does not set up the host kernel, firmware, or system configuration. Before installing Confidential Containers on a bare metal system, make sure that your node can start confidential VMs.

This section will describe the configuration that is required on the host.

Regardless of your platform, it is recommended to have at least 8GB of RAM and 4 cores on your worker node.
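
You can check both on the worker node with standard Linux tools; a quick sketch:

# Number of CPU cores available:
nproc

# Total and available memory:
free -h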

1.1.1 - CoCo without Hardware

Testing and development without hardware

For testing or development, Confidential Containers can be deployed without any hardware support.

This is referred to as a coco-dev or non-tee deployment. A coco-dev deployment functions the same way as Confidential Containers with an enclave, but a non-confidential VM is used instead of a confidential VM. This does not provide any security guarantees, but it can be used for testing.

No additional host configuration is required as long as the host supports virtualization.
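
One way to confirm virtualization support on an x86 host is shown below (a sketch; a non-zero count means the CPU exposes VMX or SVM extensions, and the presence of /dev/kvm indicates KVM is usable):

grep -cE 'vmx|svm' /proc/cpuinfo
ls /dev/kvm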

1.1.2 - Secure Execution Host Setup

Host configurations for IBM s390x

Platform Setup

This document outlines the steps to configure a host machine to support IBM Secure Execution on IBM Z & LinuxONE platforms. This capability enables enhanced security for workloads by taking advantage of protected virtualization. Ensure the host meets the necessary hardware and software requirements before proceeding.

Hardware Requirements

Supported hardware includes these systems:

  • IBM z15 or newer models
  • IBM LinuxONE III or newer models

Software Requirements

Additionally, the system must meet specific CPU and kernel configuration requirements. Follow the steps below to verify and enable the Secure Execution capability.

  1. Verify Protected Virtualization Support in the Kernel

    Run the following command to ensure the kernel supports protected virtualization:

    cat /sys/firmware/uv/prot_virt_host
    

    A value of 1 indicates support.

  2. Check Ultravisor Memory Reservation

    Confirm that the ultravisor has reserved memory during the current boot:

    sudo dmesg | grep -i ultravisor
    

    Example output:

    [    0.063630] prot_virt.f9efb6: Reserving 98MB as ultravisor base storage
    
  3. Validate the Secure Execution Facility Bit

    Ensure the required facility bit (158) is present:

    cat /proc/cpuinfo | grep 158
    

    The facilities field should include 158.

If any required configuration is missing, contact your cloud provider to enable the Secure Execution capability for your machine. Alternatively, if you have administrative privileges and the facility bit (158) is set, you can enable it yourself by modifying the kernel parameters and rebooting the system:

  1. Modify Kernel Parameters

    Update the kernel configuration to include the prot_virt=1 parameter:

    sudo sed -i 's/^\(parameters.*\)/\1 prot_virt=1/g' /etc/zipl.conf
    
  2. Update the Bootloader and Reboot the System

    Apply the changes to the bootloader and reboot the system:

    sudo zipl -V
    sudo systemctl reboot
    
  3. Repeat the Verification Steps

    After rebooting, repeat the verification steps above (a combined check script is sketched below) to ensure Secure Execution is properly enabled.
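
The three checks can also be combined into a small script. This is a minimal sketch, assuming bash and root privileges for dmesg:

#!/bin/bash
# Re-run the three Secure Execution host checks in one place.

[ "$(cat /sys/firmware/uv/prot_virt_host 2>/dev/null)" = "1" ] \
  && echo "OK: protected virtualization supported" \
  || echo "MISSING: protected virtualization support"

dmesg | grep -qi ultravisor \
  && echo "OK: ultravisor memory reserved" \
  || echo "MISSING: ultravisor memory reservation"

grep -q 158 /proc/cpuinfo \
  && echo "OK: facility bit 158 present" \
  || echo "MISSING: facility bit 158"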

Additional Notes

  • The steps to enable Secure Execution might vary depending on the Linux distribution. Consult your distribution’s documentation if necessary.
  • For more detailed information, see the official IBM Secure Execution for Linux documentation.

1.1.3 - SEV-SNP Host Setup

Host configurations for AMD SEV-SNP machines

Platform Setup

In order to launch SNP memory-encrypted guests, the host must be prepared with a compatible kernel, 6.8.0-rc5-next-20240221-snp-host-cc2568386. AMD's custom changes, along with the required components and repositories, will eventually be taken upstream.

Sev-utils is an easy way to install the required host kernel, but it will also build AMD-compatible guest kernel, OVMF, and QEMU components, which are not needed for CoCo. The additional components can be used with the script utility to test launching and attesting a base QEMU SNP guest. However, for the CoCo use case, make sure to use the coco tagged version, because these components are already packaged and delivered with Kata.

Alternatively, refer to the AMDESE guide to manually build the host kernel and other components.
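
Whichever path you take, you can sanity-check that SNP is active on the host before installing CoCo. The exact log messages and sysfs paths vary by kernel version; a minimal sketch:

# The kernel log should report SNP support being enabled:
sudo dmesg | grep -i "SEV-SNP"

# On recent host kernels, KVM also exposes an SNP module parameter (Y if enabled):
cat /sys/module/kvm_amd/parameters/sev_snp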

1.1.4 - SGX Host Setup

Host configurations for Intel SGX machines

TODO

1.1.5 - TDX Host Setup

Host configurations for Intel® Trust Domain Extensions (TDX)

Platform Setup

Additional Notes

1.2 - Cloud Hardware

Confidential Containers on the Cloud

Confidential Containers can be deployed via confidential computing cloud offerings. The main method of doing this is to use the cloud-api-adaptor, also known as “peer pods.”

Some clouds also support starting confidential VMs inside of non-confidential VMs. With Confidential Containers, these offerings can be used as if they were bare metal.

1.3 - Cluster Setup

Cluster prerequisites

Confidential Containers requires Kubernetes. A cluster must be installed before running the operator. Many different clusters can be used, but they should meet the following requirements.

  • The Kubernetes version is at least 1.24.
  • The cluster uses containerd or cri-o.
  • At least one node has the label node-role.kubernetes.io/worker= (see the commands below).
  • SELinux is not enabled.
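
The label and runtime requirements can be checked and applied with kubectl; a sketch, where <node-name> is a placeholder for your worker node:

# Add the worker label to a node:
kubectl label node <node-name> node-role.kubernetes.io/worker=

# Confirm the label was applied and inspect the container runtime in use:
kubectl get nodes --show-labels
kubectl get nodes -o wide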

If you use Minikube or Kind to set up your cluster, you will only be able to use runtime classes based on Cloud Hypervisor due to an issue with QEMU.

2 - Installation

Installing Confidential Containers with the operator

Deploy the operator

Deploy the operator by running the following command, substituting <RELEASE_VERSION> with the desired release tag.

kubectl apply -k github.com/confidential-containers/operator/config/release?ref=<RELEASE_VERSION>

For example, to deploy the v0.10.0 release, run:

kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.10.0

Wait until each pod has the STATUS of Running.

kubectl get pods -n confidential-containers-system --watch
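
To further confirm that the operator is in place, check that its custom resource definitions were registered (a quick sanity check; the exact CRD names may vary by release):

kubectl get crd | grep confidential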

Create the custom resource

Creating a custom resource installs the required CC runtime pieces into the cluster nodes and creates the runtime classes. Apply the sample that matches your platform: default for x86_64 hardware, s390x for IBM Secure Execution, or enclave-cc for process-based isolation with Intel SGX.

kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=<RELEASE_VERSION>
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=<RELEASE_VERSION>
kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/hw?ref=<RELEASE_VERSION>

Wait until each pod has the STATUS of Running.

kubectl get pods -n confidential-containers-system --watch

Verify Installation

See if the expected runtime classes were created.

kubectl get runtimeclass

It should return the runtime classes for the custom resource you applied.

For the default (x86_64) custom resource:

NAME                 HANDLER              AGE
kata                 kata-qemu            8d
kata-clh             kata-clh             8d
kata-qemu            kata-qemu            8d
kata-qemu-coco-dev   kata-qemu-coco-dev   8d
kata-qemu-sev        kata-qemu-sev        8d
kata-qemu-snp        kata-qemu-snp        8d
kata-qemu-tdx        kata-qemu-tdx        8d

For the s390x custom resource:

NAME           HANDLER        AGE
kata           kata-qemu      60s
kata-qemu      kata-qemu      61s
kata-qemu-se   kata-qemu-se   61s

For the enclave-cc custom resource:

NAME            HANDLER         AGE
enclave-cc      enclave-cc      9m55s

Runtime Classes

CoCo supports many different runtime classes. Different deployment types install different sets of runtime classes. The operator may install some runtime classes that are not valid for your system; for example, if you run the operator on a TDX machine, you might also see SEV runtime classes. Use the runtime classes that match your hardware.

Name                 Type    Description
kata                 x86     Alias of the default runtime handler (usually the same as kata-qemu)
kata-clh             x86     Kata Containers (non-confidential) using Cloud Hypervisor
kata-qemu            x86     Kata Containers (non-confidential) using QEMU
kata-qemu-coco-dev   x86     CoCo without an enclave (for testing only)
kata-qemu-sev        x86     CoCo with QEMU for AMD SEV HW
kata-qemu-snp        x86     CoCo with QEMU for AMD SNP HW
kata-qemu-tdx        x86     CoCo with QEMU for Intel TDX HW
kata-qemu-se         s390x   CoCo with QEMU for Secure Execution
enclave-cc           SGX     CoCo with enclave-cc (process-based isolation without Kata)

3 - Simple Workload

Running a simple confidential workload

Creating a sample Confidential Containers workload

Once you’ve used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class. First, we will use the kata-qemu-coco-dev runtime class which uses CoCo without hardware support. Initially we will try this with an unencrypted container image.

In this example, we will be using the bitnami/nginx image as described in the following yaml:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  annotations:
    io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
  containers:
  - image: bitnami/nginx:1.22.0
    name: nginx
  dnsPolicy: ClusterFirst
  runtimeClassName: kata-qemu-coco-dev

Setting the runtimeClassName is usually the only change needed to the pod YAML, but some platforms support additional annotations for configuring the enclave. See the guides for more details.
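
For example, with Kata-based runtime classes, the amount of memory and the number of vCPUs given to the pod VM can be tuned through Kata annotations. The snippet below is illustrative, using annotation names from Kata Containers; check the guide for your platform for the supported set:

metadata:
  annotations:
    io.katacontainers.config.hypervisor.default_memory: "2048"
    io.katacontainers.config.hypervisor.default_vcpus: "2"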

With Confidential Containers, workload container images are never downloaded on the host; they are pulled inside the guest VM. To verify that the container image does not exist on the host, log into the k8s node and ensure that the following command returns an empty result:

root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx

You will run this command again after the container has started.

Create a pod YAML file as previously described (we named it nginx.yaml).

Create the workload:

kubectl apply -f nginx.yaml

Output:

pod/nginx created

Ensure the pod was created successfully (in running state):

kubectl get pods

Output:

NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          3m50s
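
Now that the workload is running, log back into the k8s node and re-run the image check from earlier. It should still return an empty result, confirming that the image was pulled inside the guest VM rather than onto the host:

crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx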