Examples
- 1: AWS
- 2: Azure
- 3: Container Launch with SNP Memory Encryption
- 4: GCP
1 - AWS
This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on AWS Elastic Kubernetes Service (EKS). It explains how to deploy:
- A single worker node Kubernetes cluster using Elastic Kubernetes Service (EKS)
- CAA on that Kubernetes cluster
- An Nginx pod backed by CAA pod VM
Pre-requisites
- Install the aws CLI tool
- Install the eksctl CLI tool
- Install kubectl by following the instructions here.
- Ensure that the tools curl, git and jq are installed.
AWS Preparation
- Set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY (or AWS_PROFILE) and AWS_REGION for AWS CLI access
- Set the region:
export AWS_REGION="us-east-2"
Note: We chose the region us-east-2 because it offers AMD SEV-SNP instances and has prebuilt pod VM images readily available.
Deploy Kubernetes using EKS
Make changes to the following environment variable as you see fit:
export CLUSTER_NAME="caa-$(date '+%Y%m%b%d%H%M%S')"
export CLUSTER_NODE_TYPE="m5.xlarge"
export CLUSTER_NODE_FAMILY_TYPE="Ubuntu2204"
export SSH_KEY=~/.ssh/id_rsa.pub
Example EKS cluster creation using the default AWS VPC-CNI
eksctl create cluster --name "$CLUSTER_NAME" \
--node-type "$CLUSTER_NODE_TYPE" \
--node-ami-family "$CLUSTER_NODE_FAMILY_TYPE" \
--nodes 1 \
--nodes-min 0 \
--nodes-max 2 \
--node-private-networking \
--kubeconfig "$CLUSTER_NAME"-kubeconfig
Wait for the cluster to be created.
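Because eksctl wrote the cluster credentials to a dedicated kubeconfig file, point kubectl at it for the rest of this guide (a convenience step, assuming you run commands from the same directory):
export KUBECONFIG="$PWD/${CLUSTER_NAME}-kubeconfig"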
Allow required network ports
EKS_VPC_ID=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
--query "cluster.resourcesVpcConfig.vpcId" \
--output text)
echo $EKS_VPC_ID
EKS_CLUSTER_SG=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
--query "cluster.resourcesVpcConfig.clusterSecurityGroupId" \
--output text)
echo $EKS_CLUSTER_SG
EKS_VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids "$EKS_VPC_ID" \
--query 'Vpcs[0].CidrBlock' --output text)
echo $EKS_VPC_CIDR
# agent-protocol-forwarder port
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol tcp --port 15150 --cidr "$EKS_VPC_CIDR"
# vxlan port
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol tcp --port 9000 --cidr "$EKS_VPC_CIDR"
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol udp --port 9000 --cidr "$EKS_VPC_CIDR"
Note:
- Port 15150 is the default port for CAA to connect to the agent-protocol-forwarder running inside the pod VM.
- Port 9000 is the VXLAN port used by CAA. Ensure it doesn’t conflict with the VXLAN port used by the Kubernetes CNI.
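The kustomization.yaml that you will populate later references a VXLAN_PORT variable, so export it now; 9000 is the default mentioned above (adjust it if it clashes with your CNI):
export VXLAN_PORT=9000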
Deploy CAA
Download the CAA deployment artifacts
Pick one of the following options to obtain the deployment artifacts.
To use the latest release:
export CAA_VERSION="0.12.0"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
tar -xvzf "v${CAA_VERSION}.tar.gz"
cd "cloud-api-adaptor-${CAA_VERSION}/src/cloud-api-adaptor"
To use the latest development build from a branch:
export CAA_BRANCH="main"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/heads/${CAA_BRANCH}.tar.gz"
tar -xvzf "${CAA_BRANCH}.tar.gz"
cd "cloud-api-adaptor-${CAA_BRANCH}/src/cloud-api-adaptor"
If you already have the code checked out locally, simply change directory to the Cloud API Adaptor’s code base.
CAA pod VM image
For releases, we have a pre-built debug pod VM image available in us-east-2 for PoCs. You can find the AMI ID for the release-specific image by running the following command:
export PODVM_AMI_ID=$(aws ec2 describe-images \
--filters "Name=tag:Name,Values=fedora-mkosi-debug-tee-amd" "Name=tag:Version,Values=${CAA_VERSION}" \
--query 'Images[*].[ImageId]' \
--output text)
echo $PODVM_AMI_ID
For the latest builds there are no pre-built pod VM AMIs. You’ll need to follow these instructions to build the pod VM AMI. Once the image build is finished, export the image ID to the environment variable PODVM_AMI_ID.
Similarly, if you have made changes to the CAA code that affect the pod VM image and you want to deploy those changes, follow these instructions to build the pod VM AMI, then export the image ID to PODVM_AMI_ID.
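Whichever route you took, a quick sanity check (a hypothetical guard, not part of the original guide) confirms the variable is set:
# fail early if the AMI ID could not be resolved
[ -n "$PODVM_AMI_ID" ] && echo "Using pod VM AMI: $PODVM_AMI_ID" || echo "PODVM_AMI_ID is empty; re-check the previous step"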
CAA container image
Export the following environment variable to use the latest release image of CAA:
export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
export CAA_TAG="v${CAA_VERSION}-amd64"
Export the following environment variable to use the image built by the CAA CI on each merge to main:
export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
Find an appropriate tag of the pre-built image suitable to your needs here, then export it:
export CAA_TAG=""
Caution: You can also use the latest tag, but this is not recommended because of its lack of version control and its potential for unpredictable updates, impacting stability and reproducibility in deployments.
If you have made changes to the CAA code and you want to deploy those changes, follow these instructions to build the container image. Once the image is built, export the environment variables CAA_IMAGE and CAA_TAG.
Create the AWS credentials file
cat <<EOF > install/overlays/aws/aws-cred.env
AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
EOF
Note: The values should be without quotes
Select peer-pods machine type
For a confidential (AMD SEV-SNP) pod VM:
export PODVM_INSTANCE_TYPE="m6a.large"
export DISABLECVM="false"
Find more AMD SEV-SNP machine types in this AWS documentation.
For a non-confidential pod VM:
export PODVM_INSTANCE_TYPE="t3.large"
export DISABLECVM="true"
Populate the kustomization.yaml
file
Run the following command to update the kustomization.yaml
file:
cat <<EOF > install/overlays/aws/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../yamls
images:
- name: cloud-api-adaptor
newName: "${CAA_IMAGE}"
newTag: "${CAA_TAG}"
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- name: peer-pods-cm
namespace: confidential-containers-system
literals:
- CLOUD_PROVIDER="aws"
- DISABLECVM="${DISABLECVM}"
- VXLAN_PORT="${VXLAN_PORT}"
- PODVM_AMI_ID="${PODVM_AMI_ID}"
- PODVM_INSTANCE_TYPE="${PODVM_INSTANCE_TYPE}"
secretGenerator:
- name: peer-pods-secret
namespace: confidential-containers-system
envs:
- aws-cred.env
Deploy CAA on the Kubernetes cluster
Label the cluster nodes with node.kubernetes.io/worker=
for NODE_NAME in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
kubectl label node $NODE_NAME node.kubernetes.io/worker=
done
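Optionally verify that the label landed on each node (kubectl’s -L flag adds a column for the given label key):
kubectl get nodes -L node.kubernetes.io/worker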
Deploy the CoCo operator. It is usually the same version as CAA, but this can be adjusted.
export COCO_OPERATOR_VERSION="${CAA_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v${COCO_OPERATOR_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/peer-pods?ref=v${COCO_OPERATOR_VERSION}"
Run the following command to deploy CAA:
kubectl apply -k "install/overlays/aws"
Generic CAA deployment instructions are also described here.
Run sample application
Ensure runtimeclass is present
Verify that the runtimeclass
is created after deploying CAA:
kubectl get runtimeclass
Once you see a runtimeclass named kata-remote, the deployment was successful. A successful deployment will look like this:
$ kubectl get runtimeclass
NAME HANDLER AGE
kata-remote kata-remote 7m18s
Deploy workload
Create an nginx
deployment:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: default
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
runtimeClassName: kata-remote
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
imagePullPolicy: Always
EOF
Ensure that the pod is up and running:
kubectl get pods -n default
You can verify that the peer pod VM was created by running the following command:
aws ec2 describe-instances --filters "Name=tag:Name,Values=podvm*" \
--query 'Reservations[*].Instances[*].[InstanceId, Tags[?Key==`Name`].Value | [0]]' --output table
Here you should see the VM associated with the pod nginx
.
Note: If you run into problems then check the troubleshooting guide here.
Cleanup
Delete all running pods that use the runtimeclass kata-remote. The following command lists their names and namespaces:
kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,RUNTIMECLASS:.spec.runtimeClassName' | grep kata-remote | awk '{print $1, $2}'
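To actually delete them, you can pipe that listing into kubectl delete (a sketch, not part of the original guide):
kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,RUNTIMECLASS:.spec.runtimeClassName' \
  | grep kata-remote | awk '{print $1, $2}' \
  | while read -r name ns; do kubectl delete pod "$name" -n "$ns"; done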
Verify that all peer-pod VMs are deleted. You can use the following command to list all the peer-pod VMs
(VMs having prefix podvm
) and status:
aws ec2 describe-instances --filters "Name=tag:Name,Values=podvm*" \
--query 'Reservations[*].Instances[*].[InstanceId, Tags[?Key==`Name`].Value | [0], State.Name]' --output table
Delete the EKS cluster by running the following command:
eksctl delete cluster --name="$CLUSTER_NAME"
2 - Azure
This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Azure Kubernetes Service (AKS). It explains how to deploy:
- A single worker node Kubernetes cluster using Azure Kubernetes Service (AKS)
- CAA on that Kubernetes cluster
- An Nginx pod backed by CAA pod VM
Confidential Containers also supports using Azure Key Vault as a resource backend for Trustee. More info
Pre-requisites
- Install Azure CLI by following instructions here.
- Install kubectl by following the instructions here.
- Ensure that the tools curl, git, jq and sipcalc are installed.
Azure Preparation
Azure login
Several of the following steps require you to be logged into your Azure account:
az login
Retrieve your subscription ID:
export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
Set the region:
For AMD SEV-SNP:
export AZURE_REGION="eastus"
Note: We selected the eastus region as it not only offers AMD SEV-SNP machines but also has prebuilt pod VM images readily available.
For Intel TDX:
export AZURE_REGION="eastus2"
Note: We selected the eastus2 region as it not only offers Intel TDX machines but also has prebuilt pod VM images readily available.
For non-confidential VMs:
export AZURE_REGION="eastus"
Note: We chose the region eastus because it has prebuilt pod VM images readily available.
Resource group
Note: Skip this step if you already have a resource group you want to use. In that case, export the resource group name in the AZURE_RESOURCE_GROUP environment variable.
Create an Azure resource group by running the following command:
export AZURE_RESOURCE_GROUP="caa-rg-$(date '+%Y%m%b%d%H%M%S')"
az group create \
--name "${AZURE_RESOURCE_GROUP}" \
--location "${AZURE_REGION}"
Deploy Kubernetes using AKS
Make changes to the following environment variable as you see fit:
export CLUSTER_NAME="caa-$(date '+%Y%m%b%d%H%M%S')"
export AKS_WORKER_USER_NAME="azuser"
export AKS_RG="${AZURE_RESOURCE_GROUP}-aks"
export SSH_KEY=~/.ssh/id_rsa.pub
Note: Optionally, deploy the worker nodes into an existing Azure Virtual Network (VNet) and subnet by adding the following flag:
--vnet-subnet-id $MY_SUBNET_ID
.
Deploy AKS with single worker node to the same resource group you created earlier:
az aks create \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--node-resource-group "${AKS_RG}" \
--name "${CLUSTER_NAME}" \
--enable-oidc-issuer \
--enable-workload-identity \
--location "${AZURE_REGION}" \
--node-count 1 \
--node-vm-size Standard_F4s_v2 \
--nodepool-labels node.kubernetes.io/worker= \
--ssh-key-value "${SSH_KEY}" \
--admin-username "${AKS_WORKER_USER_NAME}" \
--os-sku Ubuntu
Download kubeconfig locally to access the cluster using kubectl
:
az aks get-credentials \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--name "${CLUSTER_NAME}"
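Optionally verify that kubectl can reach the new cluster:
kubectl get nodes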
User assigned identity and federated credentials
CAA needs privileges to talk to the Azure API. This privilege is granted to CAA by associating a workload identity with the CAA service account. This workload identity (a.k.a. user assigned identity) is given permissions to create VMs, fetch images and join networks in the next step.
Note: If you use an existing AKS cluster it might need to be configured to support workload identity and OpenID Connect (OIDC), please refer to the instructions in this guide.
Start by creating an identity for CAA:
export AZURE_WORKLOAD_IDENTITY_NAME="caa-${CLUSTER_NAME}"
az identity create \
--name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--location "${AZURE_REGION}"
export USER_ASSIGNED_CLIENT_ID="$(az identity show \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
--query 'clientId' \
-otsv)"
Networking
The VMs that will host Pods will commonly require access to internet services, e.g. to pull images from a public OCI registry. A discrete subnet can be created next to the AKS cluster subnet in the same VNet. We then attach a NAT gateway with a public IP to that subnet:
export AZURE_VNET_NAME="$(az network vnet list -g ${AKS_RG} --query '[].name' -o tsv)"
export AKS_CIDR="$(az network vnet show -n $AZURE_VNET_NAME -g $AKS_RG --query "subnets[?name == 'aks-subnet'].addressPrefix" -o tsv)"
# 10.224.0.0/16
export MASK="${AKS_CIDR#*/}"
# 16
PEERPOD_CIDR="$(sipcalc $AKS_CIDR -n 2 | grep ^Network | grep -v current | cut -d' ' -f2)/${MASK}"
# 10.225.0.0/16
az network public-ip create -g "$AKS_RG" -n peerpod
az network nat gateway create -g "$AKS_RG" -l "$AZURE_REGION" --public-ip-addresses peerpod -n peerpod
az network vnet subnet create -g "$AKS_RG" --vnet-name "$AZURE_VNET_NAME" --nat-gateway peerpod --address-prefixes "$PEERPOD_CIDR" -n peerpod
export AZURE_SUBNET_ID="$(az network vnet subnet show -g "$AKS_RG" --vnet-name "$AZURE_VNET_NAME" -n peerpod --query id -o tsv)"
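Optionally confirm that the subnet ID resolved:
echo $AZURE_SUBNET_ID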
AKS resource group permissions
For CAA to be able to manage VMs, assign the identity the Virtual Machine Contributor and Network Contributor roles, i.e. privileges to spawn VMs in $AZURE_RESOURCE_GROUP and to attach to a VNet in $AKS_RG.
az role assignment create \
--role "Virtual Machine Contributor" \
--assignee "$USER_ASSIGNED_CLIENT_ID" \
--scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
az role assignment create \
--role "Reader" \
--assignee "$USER_ASSIGNED_CLIENT_ID" \
--scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
az role assignment create \
--role "Network Contributor" \
--assignee "$USER_ASSIGNED_CLIENT_ID" \
--scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AKS_RG}"
Create the federated credential for the CAA ServiceAccount using the OIDC endpoint from the AKS cluster:
export AKS_OIDC_ISSUER="$(az aks show \
--name "${CLUSTER_NAME}" \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--query "oidcIssuerProfile.issuerUrl" \
-otsv)"
az identity federated-credential create \
--name "caa-${CLUSTER_NAME}" \
--identity-name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--issuer "${AKS_OIDC_ISSUER}" \
--subject system:serviceaccount:confidential-containers-system:cloud-api-adaptor \
--audience api://AzureADTokenExchange
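To double-check that the federated credential exists (an optional verification step, not in the original guide):
az identity federated-credential list \
  --identity-name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --output table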
Deploy CAA
Note: If you are using the Calico Container Network Interface (CNI) on the Kubernetes cluster, then configure Virtual Extensible LAN (VXLAN) encapsulation for all inter-workload traffic.
Download the CAA deployment artifacts
Pick one of the following options to obtain the deployment artifacts.
To use the latest release:
export CAA_VERSION="0.12.0"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
tar -xvzf "v${CAA_VERSION}.tar.gz"
cd "cloud-api-adaptor-${CAA_VERSION}/src/cloud-api-adaptor"
To use the latest development build from a branch:
export CAA_BRANCH="main"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/heads/${CAA_BRANCH}.tar.gz"
tar -xvzf "${CAA_BRANCH}.tar.gz"
cd "cloud-api-adaptor-${CAA_BRANCH}/src/cloud-api-adaptor"
If you already have the code checked out locally, simply change directory to the Cloud API Adaptor’s code base.
CAA pod VM image
To use the release image, export this environment variable for the peer pod VM:
export AZURE_IMAGE_ID="/CommunityGalleries/cococommunity-42d8482d-92cd-415b-b332-7648bd978eff/Images/peerpod-podvm-fedora/Versions/${CAA_VERSION}"
An automated job builds the pod VM image each night at 00:00 UTC. You can use that image by exporting the following environment variable:
SUCCESS_TIME=$(curl -s \
-H "Accept: application/vnd.github+json" \
"https://api.github.com/repos/confidential-containers/cloud-api-adaptor/actions/workflows/azure-nightly-build.yml/runs?status=success" \
| jq -r '.workflow_runs[0].updated_at')
export AZURE_IMAGE_ID="/CommunityGalleries/cocopodvm-d0e4f35f-5530-4b9c-8596-112487cdea85/Images/podvm_image0/Versions/$(date -u -jf "%Y-%m-%dT%H:%M:%SZ" "$SUCCESS_TIME" "+%Y.%m.%d" 2>/dev/null || date -d "$SUCCESS_TIME" +%Y.%m.%d)"
The image version above is in the format YYYY.MM.DD, so the latest image will carry today’s or yesterday’s date.
If you have made changes to the CAA code that affect the pod VM image and you want to deploy those changes, follow these instructions to build the pod VM image. Once the image build is finished, export the image ID to the environment variable AZURE_IMAGE_ID.
CAA container image
Export the following environment variable to use the latest release image of CAA:
export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
export CAA_TAG="v${CAA_VERSION}-amd64"
Export the following environment variable to use the image built by the CAA CI on each merge to main:
export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
Find an appropriate tag of the pre-built image suitable to your needs here, then export it:
export CAA_TAG=""
Caution: You can also use the latest tag, but this is not recommended because of its lack of version control and its potential for unpredictable updates, impacting stability and reproducibility in deployments.
If you have made changes to the CAA code and you want to deploy those changes, follow these instructions to build the container image. Once the image is built, export the environment variables CAA_IMAGE and CAA_TAG.
Annotate Service Account
Annotate the CAA Service Account with the workload identity’s CLIENT_ID
and make the CAA DaemonSet use workload identity for authentication:
cat <<EOF > install/overlays/azure/workload-identity.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cloud-api-adaptor-daemonset
namespace: confidential-containers-system
spec:
template:
metadata:
labels:
azure.workload.identity/use: "true"
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cloud-api-adaptor
namespace: confidential-containers-system
annotations:
azure.workload.identity/client-id: "$USER_ASSIGNED_CLIENT_ID"
EOF
Select peer-pods machine type
For AMD SEV-SNP:
export AZURE_INSTANCE_SIZE="Standard_DC2as_v5"
export DISABLECVM="false"
Find more AMD SEV-SNP machine types in this Azure documentation.
For Intel TDX:
export AZURE_INSTANCE_SIZE="Standard_DC2es_v5"
export DISABLECVM="false"
Find more Intel TDX machine types in this Azure documentation.
For non-confidential VMs:
export AZURE_INSTANCE_SIZE="Standard_D2as_v5"
export DISABLECVM="true"
Populate the kustomization.yaml
file
Run the following command to update the kustomization.yaml
file:
cat <<EOF > install/overlays/azure/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../yamls
images:
- name: cloud-api-adaptor
newName: "${CAA_IMAGE}"
newTag: "${CAA_TAG}"
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- name: peer-pods-cm
namespace: confidential-containers-system
literals:
- CLOUD_PROVIDER="azure"
- AZURE_SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID}"
- AZURE_REGION="${AZURE_REGION}"
- AZURE_INSTANCE_SIZE="${AZURE_INSTANCE_SIZE}"
- AZURE_RESOURCE_GROUP="${AZURE_RESOURCE_GROUP}"
- AZURE_SUBNET_ID="${AZURE_SUBNET_ID}"
- AZURE_IMAGE_ID="${AZURE_IMAGE_ID}"
- DISABLECVM="${DISABLECVM}"
secretGenerator:
- name: peer-pods-secret
namespace: confidential-containers-system
- name: ssh-key-secret
namespace: confidential-containers-system
files:
- id_rsa.pub
patchesStrategicMerge:
- workload-identity.yaml
EOF
The SSH public key should be accessible to the kustomization.yaml
file:
cp $SSH_KEY install/overlays/azure/id_rsa.pub
Deploy CAA on the Kubernetes cluster
Deploy the CoCo operator. It is usually the same version as CAA, but this can be adjusted.
export COCO_OPERATOR_VERSION="${CAA_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v${COCO_OPERATOR_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/peer-pods?ref=v${COCO_OPERATOR_VERSION}"
Run the following command to deploy CAA:
kubectl apply -k "install/overlays/azure"
Generic CAA deployment instructions are also described here.
Run sample application
Ensure runtimeclass is present
Verify that the runtimeclass
is created after deploying CAA:
kubectl get runtimeclass
Once you see a runtimeclass named kata-remote, the deployment was successful. A successful deployment will look like this:
$ kubectl get runtimeclass
NAME HANDLER AGE
kata-remote kata-remote 7m18s
Deploy workload
Create an nginx
deployment:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: default
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
runtimeClassName: kata-remote
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
imagePullPolicy: Always
EOF
Ensure that the pod is up and running:
kubectl get pods -n default
You can verify that the peer pod VM was created by running the following command:
az vm list \
--resource-group "${AZURE_RESOURCE_GROUP}" \
--output table
Here you should see the VM associated with the pod nginx
.
Note: If you run into problems then check the troubleshooting guide here.
Cleanup
If you wish to clean up the whole set up, you can delete the resource group by running the following command:
az group delete \
--name "${AZURE_RESOURCE_GROUP}" \
--yes --no-wait
3 - Container Launch with SNP Memory Encryption
Launch a Confidential Service
To launch a container with SNP memory encryption, the SNP runtime class (kata-qemu-snp
) must be specified. A base Alpine Docker container (Dockerfile) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with an SSH public key for validation purposes.
Here is a sample service yaml specifying the SNP runtime class:
kind: Service
apiVersion: v1
metadata:
name: "confidential-unencrypted"
spec:
selector:
app: "confidential-unencrypted"
ports:
- port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: "confidential-unencrypted"
spec:
selector:
matchLabels:
app: "confidential-unencrypted"
template:
metadata:
labels:
app: "confidential-unencrypted"
annotations:
io.containerd.cri.runtime-handler: kata-qemu-snp
spec:
runtimeClassName: kata-qemu-snp
containers:
- name: "confidential-unencrypted"
image: ghcr.io/kata-containers/test-images:unencrypted-nightly
imagePullPolicy: Always
Save the contents of this yaml to a file called confidential-unencrypted.yaml
.
Start the service:
kubectl apply -f confidential-unencrypted.yaml
Check for errors:
kubectl describe pod confidential-unencrypted
If there are no errors in the Events section, then the container has been successfully created with SNP memory encryption.
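You can also confirm the pod reached the Running state by selecting it via the app label from the YAML above:
kubectl get pods -l app=confidential-unencrypted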
Validate SNP Memory Encryption
The container dmesg
log can be parsed to indicate that SNP memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get the pod IP:
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
Download and save the SSH private key and set the permissions.
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
The following command will run a remote SSH command on the container to check if SNP memory encryption is active:
ssh -i confidential-image-ssh-key \
-o "StrictHostKeyChecking no" \
-t root@${pod_ip} \
'dmesg | grep "Memory Encryption Features"'
If SNP is enabled and active, the output should return:
[ 0.150045] Memory Encryption Features active: AMD SNP
4 - GCP
This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Google Kubernetes Engine (GKE). It explains how to deploy:
- A single worker node Kubernetes cluster using GKE
- CAA on that Kubernetes cluster
- A sample application backed by a CAA pod VM
Pre-requisites
- Install required tools: the gcloud CLI and kubectl (both are used throughout this guide).
- Google Cloud Project:
- Ensure you have a Google Cloud project created.
- Note the Project ID (export it as GCP_PROJECT_ID).
GCP Preparation
Start by authenticating with Google and choosing your project:
export GCP_PROJECT_ID="YOUR_PROJECT_ID"
gcloud auth login
gcloud config set project ${GCP_PROJECT_ID}
Enable the necessary API:
gcloud services enable container.googleapis.com --project=${GCP_PROJECT_ID}
Create a service account with the necessary permissions:
gcloud iam service-accounts create peerpods \
--description="Peerpods Service Account" \
--display-name="Peerpods Service Account"
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member="serviceAccount:peerpods@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/compute.instanceAdmin.v1"
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member="serviceAccount:peerpods@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
Generate and save the credentials file:
gcloud iam service-accounts keys create \
~/.config/gcloud/peerpods_application_key.json \
--iam-account=peerpods@${GCP_PROJECT_ID}.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=~/.config/gcloud/peerpods_application_key.json
Configure additional environment variables that will be used later.
Set the region:
export GCP_REGION="us-central1"
Note: “us-central1” was chosen because it supports Confidential VMs. For a complete list of supported regions visit https://cloud.google.com/confidential-computing/confidential-vm/docs/supported-configurations#supported-zones
Set the PodVM instance type. For a Confidential VM:
export PODVM_INSTANCE_TYPE="n2d-standard-4"
export DISABLECVM=false
For a non-confidential VM:
export PODVM_INSTANCE_TYPE="e2-medium"
export DISABLECVM=true
Deploy Kubernetes Using GKE
Deploy a single node Kubernetes cluster using GKE:
gcloud container clusters create my-cluster \
--zone ${GCP_REGION}-a \
--machine-type "e2-standard-4" \
--image-type UBUNTU_CONTAINERD \
--num-nodes 1
Label the worker nodes:
kubectl get nodes --selector='!node-role.kubernetes.io/master' -o name | \
xargs -I{} kubectl label {} node.kubernetes.io/worker=
Note: Starting with GKE version 1.27, GCP configures containerd with the discard_unpacked_layers=true flag to optimize disk usage by removing compressed image layers after they are unpacked. However, this can cause issues with PeerPods, as the workload may fail to locate required layers. To avoid this, disable the discard_unpacked_layers setting in the containerd configuration.
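One way to do this, sketched below, assumes SSH access to the worker node and that the flag appears verbatim in /etc/containerd/config.toml; verify both against your GKE image before relying on it:
# flip the containerd flag and restart the service on the node
sudo sed -i 's/discard_unpacked_layers = true/discard_unpacked_layers = false/' /etc/containerd/config.toml
sudo systemctl restart containerd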
Configure VPC network
We need to make sure port 15150 is open under the default VPC network:
gcloud compute firewall-rules create allow-port-15150 \
--project=${GCP_PROJECT_ID} \
--network=default \
--allow=tcp:15150
For production scenarios, it is advisable to restrict the source IP range to minimize security risks. For example, you can restrict the source range to a specific IP address or CIDR block:
gcloud compute firewall-rules create allow-port-15150-restricted \
--project=${GCP_PROJECT_ID} \
--network=default \
--allow=tcp:15150 \
--source-ranges=[YOUR_EXTERNAL_IP]
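Either way, you can list the matching rules to confirm they were created:
gcloud compute firewall-rules list --filter="name~'allow-port-15150'"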
Deploy the CoCo Operator with PeerPods Runtime
Deploy the CoCo operator. Usually it’s the same version as CAA, but it can be adjusted.
export CAA_VERSION="0.11.0"
export COCO_OPERATOR_VERSION="${CAA_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v${COCO_OPERATOR_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/peer-pods?ref=v${COCO_OPERATOR_VERSION}"
Deploy the Cloud API Adaptor (CAA)
Download the CAA Deployment Artifacts
Pick one of the following options to obtain the deployment artifacts.
To use the latest release:
export CAA_VERSION="0.11.0"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
tar -xvzf "v${CAA_VERSION}.tar.gz"
cd "cloud-api-adaptor-${CAA_VERSION}/src/cloud-api-adaptor"
To use the latest development build from a branch:
export CAA_BRANCH="main"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/heads/${CAA_BRANCH}.tar.gz"
tar -xvzf "${CAA_BRANCH}.tar.gz"
cd "cloud-api-adaptor-${CAA_BRANCH}/src/cloud-api-adaptor"
If you already have the code checked out locally, simply change directory to the Cloud API Adaptor’s code base.
Configure the CAA PodVM image
To use the release image, export this environment variable for the PodVM:
export PODVM_IMAGE_ID="/projects/it-cloud-gcp-prod-osc-devel/global/images/fedora-mkosi-tee-amd-1-11-0"
For the latest builds there are no pre-built PodVM images. You’ll need to follow these instructions to build the PodVM image. Once the image build is finished, export the image ID to the environment variable PODVM_IMAGE_ID.
Similarly, if you have made changes to the CAA code that affect the pod VM image and you want to deploy those changes, follow these instructions to build the PodVM image, then export the image ID to PODVM_IMAGE_ID.
Configure the CAA container image
Export the following environment variable to use the latest release image of CAA:
export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
export CAA_TAG="v${CAA_VERSION}-amd64"
Export the following environment variable to use the image built by the CAA CI on each merge to main:
export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
Find an appropriate tag of the pre-built image suitable to your needs here, then export it:
export CAA_TAG=""
Caution: You can also use the latest tag, but this is not recommended because of its lack of version control and its potential for unpredictable updates, impacting stability and reproducibility in deployments.
If you have made changes to the CAA code and you want to deploy those changes, follow these instructions to build the container image. Once the image is built, export the environment variables CAA_IMAGE and CAA_TAG.
Create the GCP credentials file
Copy the Application Credentials to the GCP overlay folder:
cp $GOOGLE_APPLICATION_CREDENTIALS install/overlays/gcp/GCP_CREDENTIALS
Populate the kustomization.yaml
file
Run the following command to update the
kustomization.yaml
file:
cat <<EOF > install/overlays/gcp/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../yamls
images:
- name: cloud-api-adaptor
newName: "${CAA_IMAGE}"
newTag: "${CAA_TAG}"
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- name: peer-pods-cm
namespace: confidential-containers-system
literals:
- CLOUD_PROVIDER="gcp"
- PODVM_IMAGE_NAME="${PODVM_IMAGE_ID}"
- GCP_PROJECT_ID="${GCP_PROJECT_ID}"
- GCP_ZONE="${GCP_REGION}-a"
- GCP_MACHINE_TYPE="${PODVM_INSTANCE_TYPE}"
- DISABLECVM="${DISABLECVM}"
- GCP_NETWORK="global/networks/default"
secretGenerator:
- name: peer-pods-secret
namespace: confidential-containers-system
files:
- GCP_CREDENTIALS
EOF
Deploy CAA on the Kubernetes cluster
Run the following command to deploy CAA:
kubectl apply -k "install/overlays/gcp"
Verify the deployment:
kubectl get pods -n confidential-containers-system
Verify that the runtimeclass
is created after deploying CAA:
kubectl get runtimeclass
Once you see a runtimeclass named kata-remote, the deployment was successful. A successful deployment will look like this:
$ kubectl get runtimeclass
NAME HANDLER AGE
kata-remote kata-remote 7m18s
Generic CAA deployment instructions are also described here.
Run a sample application
This example showcases a more advanced deployment using TEE and confidential VMs with the kata-remote runtime class. It demonstrates how to deploy a sample pod and retrieve a secret securely within a confidential computing environment.
Prepare the init data configuration
PeerPods now supports init data: you can pass the required configuration files (aa.toml, cdh.toml, and policy.rego) via the io.katacontainers.config.runtime.cc_init_data annotation. Below is an example of the configuration and usage.
# initdata.toml
algorithm = "sha384"
version = "0.1.0"
[data]
"aa.toml" = '''
[token_configs]
[token_configs.coco_as]
url = 'http://127.0.0.1:8080'
[token_configs.kbs]
url = 'http://127.0.0.1:8080'
cert = """
-----BEGIN CERTIFICATE-----
MIIDljCCAn6gAwIBAgIUR/UNh13GFam4emgludtype/S9BIwDQYJKoZIhvcNAQEL
BQAwdTELMAkGA1UEBhMCQ04xETAPBgNVBAgMCFpoZWppYW5nMREwDwYDVQQHDAhI
YW5nemhvdTERMA8GA1UECgwIQUFTLVRFU1QxFDASBgNVBAsMC0RldmVsb3BtZW50
MRcwFQYDVQQDDA5BQVMtVEVTVC1IVFRQUzAeFw0yNDAzMTgwNzAzNTNaFw0yNTAz
MTgwNzAzNTNaMHUxCzAJBgNVBAYTAkNOMREwDwYDVQQIDAhaaGVqaWFuZzERMA8G
A1UEBwwISGFuZ3pob3UxETAPBgNVBAoMCEFBUy1URVNUMRQwEgYDVQQLDAtEZXZl
bG9wbWVudDEXMBUGA1UEAwwOQUFTLVRFU1QtSFRUUFMwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQDfp1aBr6LiNRBlJUcDGcAbcUCPG6UzywtVIc8+comS
ay//gwz2AkDmFVvqwI4bdp/NUCwSC6ShHzxsrCEiagRKtA3af/ckM7hOkb4S6u/5
ewHHFcL6YOUp+NOH5/dSLrFHLjet0dt4LkyNBPe7mKAyCJXfiX3wb25wIBB0Tfa0
p5VoKzwWeDQBx7aX8TKbG6/FZIiOXGZdl24DGARiqE3XifX7DH9iVZ2V2RL9+3WY
05GETNFPKtcrNwTy8St8/HsWVxjAzGFzf75Lbys9Ff3JMDsg9zQzgcJJzYWisxlY
g3CmnbENP0eoHS4WjQlTUyY0mtnOwodo4Vdf8ZOkU4wJAgMBAAGjHjAcMBoGA1Ud
EQQTMBGCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOCAQEAKW32spii
t2JB7C1IvYpJw5mQ5bhIlldE0iB5rwWvNbuDgPrgfTI4xiX5sumdHw+P2+GU9KXF
nWkFRZ9W/26xFrVgGIS/a07aI7xrlp0Oj+1uO91UhCL3HhME/0tPC6z1iaFeZp8Y
T1tLnafqiGiThFUgvg6PKt86enX60vGaTY7sslRlgbDr9sAi/NDSS7U1PviuC6yo
yJi7BDiRSx7KrMGLscQ+AKKo2RF1MLzlJMa1kIZfvKDBXFzRd61K5IjDRQ4HQhwX
DYEbQvoZIkUTc1gBUWDcAUS5ztbJg9LCb9WVtvUTqTP2lGuNymOvdsuXq+sAZh9b
M9QaC1mzQ/OStg==
-----END CERTIFICATE-----
"""
'''
"cdh.toml" = '''
socket = 'unix:///run/confidential-containers/cdh.sock'
credentials = []
[kbc]
name = 'cc_kbc'
url = 'http://1.2.3.4:8080'
kbs_cert = """
-----BEGIN CERTIFICATE-----
MIIFTDCCAvugAwIBAgIBADBGBgkqhkiG9w0BAQowOaAPMA0GCWCGSAFlAwQCAgUA
oRwwGgYJKoZIhvcNAQEIMA0GCWCGSAFlAwQCAgUAogMCATCjAwIBATB7MRQwEgYD
VQQLDAtFbmdpbmVlcmluZzELMAkGA1UEBhMCVVMxFDASBgNVBAcMC1NhbnRhIENs
YXJhMQswCQYDVQQIDAJDQTEfMB0GA1UECgwWQWR2YW5jZWQgTWljcm8gRGV2aWNl
czESMBAGA1UEAwwJU0VWLU1pbGFuMB4XDTIzMDEyNDE3NTgyNloXDTMwMDEyNDE3
NTgyNlowejEUMBIGA1UECwwLRW5naW5lZXJpbmcxCzAJBgNVBAYTAlVTMRQwEgYD
VQQHDAtTYW50YSBDbGFyYTELMAkGA1UECAwCQ0ExHzAdBgNVBAoMFkFkdmFuY2Vk
IE1pY3JvIERldmljZXMxETAPBgNVBAMMCFNFVi1WQ0VLMHYwEAYHKoZIzj0CAQYF
K4EEACIDYgAExmG1ZbuoAQK93USRyZQcsyobfbaAEoKEELf/jK39cOVJt1t4s83W
XM3rqIbS7qHUHQw/FGyOvdaEUs5+wwxpCWfDnmJMAQ+ctgZqgDEKh1NqlOuuKcKq
2YAWE5cTH7sHo4IBFjCCARIwEAYJKwYBBAGceAEBBAMCAQAwFwYJKwYBBAGceAEC
BAoWCE1pbGFuLUIwMBEGCisGAQQBnHgBAwEEAwIBAzARBgorBgEEAZx4AQMCBAMC
AQAwEQYKKwYBBAGceAEDBAQDAgEAMBEGCisGAQQBnHgBAwUEAwIBADARBgorBgEE
AZx4AQMGBAMCAQAwEQYKKwYBBAGceAEDBwQDAgEAMBEGCisGAQQBnHgBAwMEAwIB
CDARBgorBgEEAZx4AQMIBAMCAXMwTQYJKwYBBAGceAEEBEDDhCejDUx6+dlvehW5
cmmCWmTLdqI1L/1dGBFdia1HP46MC82aXZKGYSutSq37RCYgWjueT+qCMBE1oXDk
d1JOMEYGCSqGSIb3DQEBCjA5oA8wDQYJYIZIAWUDBAICBQChHDAaBgkqhkiG9w0B
AQgwDQYJYIZIAWUDBAICBQCiAwIBMKMDAgEBA4ICAQACgCai9x8DAWzX/2IelNWm
ituEBSiq9C9eDnBEckQYikAhPasfagnoWFAtKu/ZWTKHi+BMbhKwswBS8W0G1ywi
cUWGlzigI4tdxxf1YBJyCoTSNssSbKmIh5jemBfrvIBo1yEd+e56ZJMdhN8e+xWU
bvovUC2/7Dl76fzAaACLSorZUv5XPJwKXwEOHo7FIcREjoZn+fKjJTnmdXce0LD6
9RHr+r+ceyE79gmK31bI9DYiJoL4LeGdXZ3gMOVDR1OnDos5lOBcV+quJ6JujpgH
d9g3Sa7Du7pusD9Fdap98ocZslRfFjFi//2YdVM4MKbq6IwpYNB+2PCEKNC7SfbO
NgZYJuPZnM/wViES/cP7MZNJ1KUKBI9yh6TmlSsZZOclGJvrOsBZimTXpATjdNMt
cluKwqAUUzYQmU7bf2TMdOXyA9iH5wIpj1kWGE1VuFADTKILkTc6LzLzOWCofLxf
onhTtSDtzIv/uel547GZqq+rVRvmIieEuEvDETwuookfV6qu3D/9KuSr9xiznmEg
xynud/f525jppJMcD/ofbQxUZuGKvb3f3zy+aLxqidoX7gca2Xd9jyUy5Y/83+ZN
bz4PZx81UJzXVI9ABEh8/xilATh1ZxOePTBJjN7lgr0lXtKYjV/43yyxgUYrXNZS
oLSG2dLCK9mjjraPjau34Q==
-----END CERTIFICATE-----
"""
'''
"policy.rego" = '''
package agent_policy
import future.keywords.in
import future.keywords.every
import input
# Default values, returned by OPA when rules cannot be evaluated to true.
default CopyFileRequest := true
default CreateContainerRequest := true
default CreateSandboxRequest := true
default DestroySandboxRequest := true
default ExecProcessRequest := false
default GetOOMEventRequest := true
default GuestDetailsRequest := true
default OnlineCPUMemRequest := true
default PullImageRequest := true
default ReadStreamRequest := false
default RemoveContainerRequest := true
default RemoveStaleVirtiofsShareMountsRequest := true
default SignalProcessRequest := true
default StartContainerRequest := true
default StatsContainerRequest := true
default TtyWinResizeRequest := true
default UpdateEphemeralMountsRequest := true
default UpdateInterfaceRequest := true
default UpdateRoutesRequest := true
default WaitProcessRequest := true
default WriteStreamRequest := false
'''
Make sure you have the right policy and that the KBC URL points to your Key Broker Service.
Now, encode the initdata.toml and store it in a variable:
INITDATA=$(base64 -w0 initdata.toml)
Deploy the pod with:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: example-pod
annotations:
io.katacontainers.config.runtime.cc_init_data: "$INITDATA"
spec:
runtimeClassName: kata-remote
containers:
- name: example-container
image: alpine:latest
command:
- sleep
- "3600"
securityContext:
privileged: false
seccompProfile:
type: RuntimeDefault
EOF
Fetching Secrets from Trustee
Once the pod is successfully deployed with the initdata, you can retrieve secrets from the Trustee service through the confidential data hub (CDH) endpoint inside the pod. Use the following command to fetch a specific secret:
kubectl exec -it example-pod -- curl http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1
Alternatively, this simpler example verifies that CAA successfully starts the PodVM within the cloud provider; it is the simplest example available for deployment.
Create an nginx
deployment:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: default
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
runtimeClassName: kata-remote
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
imagePullPolicy: Always
EOF
Ensure that the pod is up and running:
kubectl get pods -n default
You can verify that the PodVM was created by running the following command:
gcloud compute instances list
Here you should see the VM associated with the pod used by the example above.
Note: If you run into problems then check the troubleshooting guide here.
Cleanup
Delete all running pods that use the runtimeclass kata-remote. The following command lists their names and namespaces:
kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,RUNTIMECLASS:.spec.runtimeClassName' | grep kata-remote | awk '{print $1, $2}'
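As in the AWS guide, you can pipe that listing into kubectl delete to remove them (a sketch, not part of the original guide):
kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,RUNTIMECLASS:.spec.runtimeClassName' \
  | grep kata-remote | awk '{print $1, $2}' \
  | while read -r name ns; do kubectl delete pod "$name" -n "$ns"; done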
Verify that all peer-pod VMs are deleted. You can use the following command to
list all the peer-pod VMs (VMs having prefix podvm
) and status:
gcloud compute instances list \
--filter="name~'podvm.*'" \
--format="table(name,zone,status)"
Delete the GKE cluster by running the following command:
gcloud container clusters delete my-cluster --zone ${GCP_REGION}-a