Examples

Example CoCo Deployments

1 - AWS

Cloud API Adaptor (CAA) on AWS

This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Amazon Elastic Kubernetes Service (EKS). It explains how to deploy:

  • A single worker node Kubernetes cluster using Elastic Kubernetes Service (EKS)
  • CAA on that Kubernetes cluster
  • An Nginx pod backed by a CAA pod VM

Pre-requisites

  • Install aws CLI tool
  • Install eksctl CLI tool
  • Install kubectl by following the instructions here.
  • Ensure that the tools curl, git and jq are installed.

AWS Preparation

  • Set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY (or AWS_PROFILE) and AWS_REGION for AWS CLI access

  • Set the region:

export AWS_REGION="us-east-2"

Note: We chose the us-east-2 region because it offers AMD SEV-SNP instances and has prebuilt pod VM images readily available.

Deploy Kubernetes using EKS

Make changes to the following environment variables as you see fit:

export CLUSTER_NAME="caa-$(date '+%Y%m%d%H%M%S')"
export CLUSTER_NODE_TYPE="m5.xlarge"
export CLUSTER_NODE_FAMILY_TYPE="Ubuntu2204"
export SSH_KEY=~/.ssh/id_rsa.pub

Example EKS cluster creation using the default AWS VPC-CNI

eksctl create cluster --name "$CLUSTER_NAME" \
    --node-type "$CLUSTER_NODE_TYPE" \
    --node-ami-family "$CLUSTER_NODE_FAMILY_TYPE" \
    --nodes 1 \
    --nodes-min 0 \
    --nodes-max 2 \
    --node-private-networking \
    --kubeconfig "$CLUSTER_NAME"-kubeconfig

Wait for the cluster to be created.
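
One way to check on the progress (assuming the same shell exports) is to poll the cluster status until it reports ACTIVE; a quick sketch:

# Poll the EKS control-plane status; it reports ACTIVE once creation completes
aws eks describe-cluster --name "$CLUSTER_NAME" \
  --query "cluster.status" --output text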

Allow required network ports

EKS_VPC_ID=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
--query "cluster.resourcesVpcConfig.vpcId" \
--output text)
echo $EKS_VPC_ID

EKS_CLUSTER_SG=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
  --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" \
  --output text)
echo $EKS_CLUSTER_SG

EKS_VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids "$EKS_VPC_ID" \
--query 'Vpcs[0].CidrBlock' --output text)
echo $EKS_VPC_CIDR

# agent-protocol-forwarder port
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol tcp --port 15150 --cidr "$EKS_VPC_CIDR"

# vxlan port
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol tcp --port 9000 --cidr "$EKS_VPC_CIDR"
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol udp --port 9000 --cidr "$EKS_VPC_CIDR"

Note:

  • Port 15150 is the default port for CAA to connect to the agent-protocol-forwarder running inside the pod VM.
  • Port 9000 is the VXLAN port used by CAA. Ensure it doesn’t conflict with the VXLAN port used by the Kubernetes CNI.
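
To double-check that the ingress rules above were applied to the cluster security group, you can list them with the variables exported earlier; a minimal sketch:

# List inbound rules on the cluster security group; expect TCP 15150 plus TCP/UDP 9000
aws ec2 describe-security-group-rules \
  --filters "Name=group-id,Values=$EKS_CLUSTER_SG" \
  --query 'SecurityGroupRules[?IsEgress==`false`].[IpProtocol,FromPort,ToPort,CidrIpv4]' \
  --output table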

Deploy CAA

Download the CAA deployment artifacts

To use a released version of CAA:

export CAA_VERSION="0.11.0"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
tar -xvzf "v${CAA_VERSION}.tar.gz"
cd "cloud-api-adaptor-${CAA_VERSION}/src/cloud-api-adaptor"

Alternatively, to use the latest code from a branch:

export CAA_BRANCH="main"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/heads/${CAA_BRANCH}.tar.gz"
tar -xvzf "${CAA_BRANCH}.tar.gz"
cd "cloud-api-adaptor-${CAA_BRANCH}/src/cloud-api-adaptor"

If you have already checked out or modified the code yourself, simply change directory to your copy of the Cloud API Adaptor code base instead.

CAA pod VM image

To use the prebuilt pod VM image for this release, export this environment variable:

export PODVM_AMI_ID="ami-0af256cec444be636"

There are no pre-built pod VM AMIs for the latest builds, so if you want to run the latest code, or if you have made changes to the CAA code that affect the pod VM image, follow these instructions to build the pod VM AMI. Once the image build is finished, export the image ID to the environment variable PODVM_AMI_ID.
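
Whichever image you choose, it can help to confirm that the AMI referenced by PODVM_AMI_ID exists and is available in the selected region; a quick check:

# Confirm the AMI is visible and in the "available" state
aws ec2 describe-images --image-ids "$PODVM_AMI_ID" \
  --query 'Images[0].[ImageId,Name,State]' --output table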

CAA container image

To use the latest release image of CAA, export the following environment variables:

export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
export CAA_TAG="v${CAA_VERSION}-amd64"

Alternatively, to use the image built by the CAA CI on each merge to main, export:

export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"

Find an appropriate pre-built image tag suitable to your needs here, then export it:

export CAA_TAG=""

Caution: You can also use the latest tag, but it is not recommended because of its lack of version control and potential for unpredictable updates, impacting stability and reproducibility in deployments.

If you have made changes to the CAA code and you want to deploy those changes, follow these instructions to build the container image. Once the image is built, export the environment variables CAA_IMAGE and CAA_TAG.

Create the AWS credentials file

cat <<EOF > install/overlays/aws/aws-cred.env
AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
EOF

Note: The values should not be enclosed in quotes.

Select peer-pods machine type

For a confidential (AMD SEV-SNP) pod VM:

export PODVM_INSTANCE_TYPE="m6a.large"
export DISABLECVM="false"

Find more AMD SEV-SNP machine types in this AWS documentation.

For a non-confidential pod VM:

export PODVM_INSTANCE_TYPE="t3.large"
export DISABLECVM="true"
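
Whichever type you choose, it's worth confirming that it is actually offered in your region; a quick check:

# An empty result means the instance type is not offered in this region
aws ec2 describe-instance-type-offerings \
  --location-type region \
  --filters "Name=instance-type,Values=$PODVM_INSTANCE_TYPE" \
  --query 'InstanceTypeOfferings[].InstanceType' --output text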

Populate the kustomization.yaml file

Run the following command to update the kustomization.yaml file. The VXLAN_PORT value should match the VXLAN port you opened in the security group earlier (9000):

export VXLAN_PORT=9000

cat <<EOF > install/overlays/aws/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../yamls
images:
- name: cloud-api-adaptor
  newName: "${CAA_IMAGE}"
  newTag: "${CAA_TAG}"
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
- name: peer-pods-cm
  namespace: confidential-containers-system
  literals:
  - CLOUD_PROVIDER="aws"
  - DISABLECVM="${DISABLECVM}"
  - VXLAN_PORT="${VXLAN_PORT}"
  - PODVM_AMI_ID="${PODVM_AMI_ID}"
  - PODVM_INSTANCE_TYPE="${PODVM_INSTANCE_TYPE}"
secretGenerator:
- name: peer-pods-secret
  namespace: confidential-containers-system
  envs:
    - aws-cred.env
EOF

Deploy CAA on the Kubernetes cluster

Label the cluster nodes with node.kubernetes.io/worker=

for NODE_NAME in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  kubectl label node "$NODE_NAME" node.kubernetes.io/worker=
done
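
You can verify that the label took effect by selecting on it; only labeled nodes are returned:

kubectl get nodes -l node.kubernetes.io/worker=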

Deploy the CoCo operator. Usually it’s the same version as CAA, but it can be adjusted:

export COCO_OPERATOR_VERSION="${CAA_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v${COCO_OPERATOR_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/peer-pods?ref=v${COCO_OPERATOR_VERSION}"

Run the following command to deploy CAA:

kubectl apply -k "install/overlays/aws"

Generic CAA deployment instructions are also described here.

Run sample application

Ensure runtimeclass is present

Verify that the runtimeclass is created after deploying CAA:

kubectl get runtimeclass

Once you can find a runtimeclass named kata-remote, you can be sure that the deployment was successful. A successful deployment will look like this:

$ kubectl get runtimeclass
NAME          HANDLER       AGE
kata-remote   kata-remote   7m18s

Deploy workload

Create an nginx deployment:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      runtimeClassName: kata-remote
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        imagePullPolicy: Always
EOF

Ensure that the pod is up and running:

kubectl get pods -n default

You can verify that the peer pod VM was created by running the following command:

aws ec2 describe-instances --filters "Name=tag:Name,Values=podvm*" \
   --query 'Reservations[*].Instances[*].[InstanceId, Tags[?Key==`Name`].Value | [0]]' --output table

Here you should see the VM associated with the pod nginx.

Note: If you run into problems then check the troubleshooting guide here.

Cleanup

Delete all running pods that use the runtimeclass kata-remote. The following command lists the name and namespace of each such pod:

kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,RUNTIMECLASS:.spec.runtimeClassName' | grep kata-remote | awk '{print $1, $2}'
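
If you want to delete them all in one go, you can feed that listing back into kubectl; a minimal sketch:

# Delete every pod that uses the kata-remote runtime class, namespace by namespace
kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,RUNTIMECLASS:.spec.runtimeClassName' \
  | grep kata-remote | while read -r POD NS _; do
      kubectl delete pod "$POD" -n "$NS"
    done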

Verify that all peer-pod VMs are deleted. You can use the following command to list all the peer-pod VMs (VMs with the podvm name prefix) and their status:

aws ec2 describe-instances --filters "Name=tag:Name,Values=podvm*" \
--query 'Reservations[*].Instances[*].[InstanceId, Tags[?Key==`Name`].Value | [0], State.Name]' --output table

Delete the EKS cluster by running the following command:

eksctl delete cluster --name "$CLUSTER_NAME"

2 - Azure

Cloud API Adaptor (CAA) on Azure

This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Azure Kubernetes Service (AKS). It explains how to deploy:

  • A single worker node Kubernetes cluster using Azure Kubernetes Service (AKS)
  • CAA on that Kubernetes cluster
  • An Nginx pod backed by a CAA pod VM

Pre-requisites

  • Install Azure CLI by following instructions here.
  • Install kubectl by following the instructions here.
  • Ensure that the tools curl, git, jq and sipcalc are installed.

Azure Preparation

Azure login

Several steps require you to be logged into your Azure account:

az login

Retrieve your subscription ID:

export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)

Set the region:

For AMD SEV-SNP:

export AZURE_REGION="eastus"

Note: We selected the eastus region as it not only offers AMD SEV-SNP machines but also has prebuilt pod VM images readily available.

For Intel TDX:

export AZURE_REGION="eastus2"

Note: We selected the eastus2 region as it not only offers Intel TDX machines but also has prebuilt pod VM images readily available.

For non-confidential VMs:

export AZURE_REGION="eastus"

Note: We chose the eastus region because it has prebuilt pod VM images readily available.

Resource group

Note: Skip this step if you already have a resource group you want to use. In that case, export the resource group name in the AZURE_RESOURCE_GROUP environment variable.

Create an Azure resource group by running the following command:

export AZURE_RESOURCE_GROUP="caa-rg-$(date '+%Y%m%d%H%M%S')"

az group create \
  --name "${AZURE_RESOURCE_GROUP}" \
  --location "${AZURE_REGION}"

Deploy Kubernetes using AKS

Make changes to the following environment variables as you see fit:

export CLUSTER_NAME="caa-$(date '+%Y%m%d%H%M%S')"
export AKS_WORKER_USER_NAME="azuser"
export AKS_RG="${AZURE_RESOURCE_GROUP}-aks"
export SSH_KEY=~/.ssh/id_rsa.pub

Note: Optionally, deploy the worker nodes into an existing Azure Virtual Network (VNet) and subnet by adding the following flag: --vnet-subnet-id $MY_SUBNET_ID.

Deploy AKS with single worker node to the same resource group you created earlier:

az aks create \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --node-resource-group "${AKS_RG}" \
  --name "${CLUSTER_NAME}" \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --location "${AZURE_REGION}" \
  --node-count 1 \
  --node-vm-size Standard_F4s_v2 \
  --nodepool-labels node.kubernetes.io/worker= \
  --ssh-key-value "${SSH_KEY}" \
  --admin-username "${AKS_WORKER_USER_NAME}" \
  --os-sku Ubuntu

Download kubeconfig locally to access the cluster using kubectl:

az aks get-credentials \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --name "${CLUSTER_NAME}"
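
A quick way to confirm that the credentials work is to list the cluster nodes; the single worker node should show up as Ready:

kubectl get nodes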

User assigned identity and federated credentials

CAA needs privileges to talk to the Azure API. This privilege is granted to CAA by associating a workload identity with the CAA service account. This workload identity (a.k.a. user assigned identity) is given permissions to create VMs, fetch images and join networks in the next steps.

Note: If you use an existing AKS cluster, it might need to be configured to support workload identity and OpenID Connect (OIDC); please refer to the instructions in this guide.

Start by creating an identity for CAA:

export AZURE_WORKLOAD_IDENTITY_NAME="caa-${CLUSTER_NAME}"

az identity create \
  --name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --location "${AZURE_REGION}"
export USER_ASSIGNED_CLIENT_ID="$(az identity show \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
  --query 'clientId' \
  -otsv)"

Networking

The VMs that will host Pods will commonly require access to internet services, e.g. to pull images from a public OCI registry. A discrete subnet can be created next to the AKS cluster subnet in the same VNet. We then attach a NAT gateway with a public IP to that subnet:

export AZURE_VNET_NAME="$(az network vnet list -g ${AKS_RG} --query '[].name' -o tsv)"
export AKS_CIDR="$(az network vnet show -n $AZURE_VNET_NAME -g $AKS_RG --query "subnets[?name == 'aks-subnet'].addressPrefix" -o tsv)"
# 10.224.0.0/16
export MASK="${AKS_CIDR#*/}"
# 16
PEERPOD_CIDR="$(sipcalc $AKS_CIDR -n 2 | grep ^Network | grep -v current | cut -d' ' -f2)/${MASK}"
# 10.225.0.0/16
az network public-ip create -g "$AKS_RG" -n peerpod
az network nat gateway create -g "$AKS_RG" -l "$AZURE_REGION" --public-ip-addresses peerpod -n peerpod
az network vnet subnet create -g "$AKS_RG" --vnet-name "$AZURE_VNET_NAME" --nat-gateway peerpod --address-prefixes "$PEERPOD_CIDR" -n peerpod
export AZURE_SUBNET_ID="$(az network vnet subnet show -g "$AKS_RG" --vnet-name "$AZURE_VNET_NAME" -n peerpod --query id -o tsv)"
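
To verify that the peerpod subnet was created and associated with the NAT gateway, a quick check using the variables exported above:

# The natGateway field should point at the "peerpod" NAT gateway
az network vnet subnet show -g "$AKS_RG" --vnet-name "$AZURE_VNET_NAME" -n peerpod \
  --query "{prefix: addressPrefix, natGateway: natGateway.id}" -o jsonc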

AKS resource group permissions

For CAA to be able to manage VMs, assign the identity the Virtual Machine Contributor, Reader and Network Contributor roles: these grant privileges to spawn VMs in $AZURE_RESOURCE_GROUP and to attach to a VNet in $AKS_RG.

az role assignment create \
  --role "Virtual Machine Contributor" \
  --assignee "$USER_ASSIGNED_CLIENT_ID" \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
az role assignment create \
  --role "Reader" \
  --assignee "$USER_ASSIGNED_CLIENT_ID" \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}"
az role assignment create \
  --role "Network Contributor" \
  --assignee "$USER_ASSIGNED_CLIENT_ID" \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AKS_RG}"

Create the federated credential for the CAA ServiceAccount using the OIDC endpoint from the AKS cluster:

export AKS_OIDC_ISSUER="$(az aks show \
  --name "${CLUSTER_NAME}" \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --query "oidcIssuerProfile.issuerUrl" \
  -otsv)"
az identity federated-credential create \
  --name "caa-${CLUSTER_NAME}" \
  --identity-name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --issuer "${AKS_OIDC_ISSUER}" \
  --subject system:serviceaccount:confidential-containers-system:cloud-api-adaptor \
  --audience api://AzureADTokenExchange
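
You can confirm that the federated credential was registered against the identity; the subject should match the CAA service account:

# Expect system:serviceaccount:confidential-containers-system:cloud-api-adaptor
az identity federated-credential list \
  --identity-name "${AZURE_WORKLOAD_IDENTITY_NAME}" \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --query "[].subject" -o tsv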

Deploy CAA

Note: If you are using the Calico Container Network Interface (CNI) on the Kubernetes cluster, configure Virtual Extensible LAN (VXLAN) encapsulation for all inter-workload traffic.

Download the CAA deployment artifacts

To use a released version of CAA:

export CAA_VERSION="0.11.0"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
tar -xvzf "v${CAA_VERSION}.tar.gz"
cd "cloud-api-adaptor-${CAA_VERSION}/src/cloud-api-adaptor"

Alternatively, to use the latest code from a branch:

export CAA_BRANCH="main"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/heads/${CAA_BRANCH}.tar.gz"
tar -xvzf "${CAA_BRANCH}.tar.gz"
cd "cloud-api-adaptor-${CAA_BRANCH}/src/cloud-api-adaptor"

If you have already checked out or modified the code yourself, simply change directory to your copy of the Cloud API Adaptor code base instead.

CAA pod VM image

To use the prebuilt pod VM image for this release, export this environment variable:

export AZURE_IMAGE_ID="/CommunityGalleries/cococommunity-42d8482d-92cd-415b-b332-7648bd978eff/Images/peerpod-podvm-fedora/Versions/${CAA_VERSION}"

Alternatively, an automated job builds the pod VM image each night at 00:00 UTC. You can use that image by exporting the following environment variables:

SUCCESS_TIME=$(curl -s \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/confidential-containers/cloud-api-adaptor/actions/workflows/azure-podvm-image-nightly-build.yml/runs?status=success" \
  | jq -r '.workflow_runs[0].updated_at')

export AZURE_IMAGE_ID="/CommunityGalleries/cocopodvm-d0e4f35f-5530-4b9c-8596-112487cdea85/Images/podvm_image0/Versions/$(date -u -jf "%Y-%m-%dT%H:%M:%SZ" "$SUCCESS_TIME" "+%Y.%m.%d" 2>/dev/null || date -d "$SUCCESS_TIME" +%Y.%m.%d)"

The image version above is in the format YYYY.MM.DD, so the latest image will carry today’s or yesterday’s date.

If you have made changes to the CAA code that affect the pod VM image and you want to deploy those changes, follow these instructions to build the pod VM image. Once the image build is finished, export the image ID to the environment variable AZURE_IMAGE_ID.
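
If you want to see which versions are actually published before exporting AZURE_IMAGE_ID, you can list the community gallery image versions; a sketch for the nightly gallery (requires a recent Azure CLI with community gallery support):

# List published pod VM image versions in the nightly community gallery
az sig image-version list-community \
  --location "${AZURE_REGION}" \
  --public-gallery-name "cocopodvm-d0e4f35f-5530-4b9c-8596-112487cdea85" \
  --gallery-image-definition "podvm_image0" \
  --query "[].name" -o tsv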

CAA container image

To use the latest release image of CAA, export the following environment variables:

export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
export CAA_TAG="v${CAA_VERSION}-amd64"

Alternatively, to use the image built by the CAA CI on each merge to main, export:

export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"

Find an appropriate pre-built image tag suitable to your needs here, then export it:

export CAA_TAG=""

Caution: You can also use the latest tag, but it is not recommended because of its lack of version control and potential for unpredictable updates, impacting stability and reproducibility in deployments.

If you have made changes to the CAA code and you want to deploy those changes, follow these instructions to build the container image. Once the image is built, export the environment variables CAA_IMAGE and CAA_TAG.

Annotate Service Account

Annotate the CAA Service Account with the workload identity’s CLIENT_ID and make the CAA DaemonSet use workload identity for authentication:

cat <<EOF > install/overlays/azure/workload-identity.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-api-adaptor-daemonset
  namespace: confidential-containers-system
spec:
  template:
    metadata:
      labels:
        azure.workload.identity/use: "true"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-api-adaptor
  namespace: confidential-containers-system
  annotations:
    azure.workload.identity/client-id: "$USER_ASSIGNED_CLIENT_ID"
EOF

Select peer-pods machine type

For AMD SEV-SNP:

export AZURE_INSTANCE_SIZE="Standard_DC2as_v5"
export DISABLECVM="false"

Find more AMD SEV-SNP machine types in this Azure documentation.

For Intel TDX:

export AZURE_INSTANCE_SIZE="Standard_DC2es_v5"
export DISABLECVM="false"

Find more Intel TDX machine types in this Azure documentation.

For non-confidential VMs:

export AZURE_INSTANCE_SIZE="Standard_D2as_v5"
export DISABLECVM="true"
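
Whichever size you pick, you can confirm it is offered in your region (and not restricted for your subscription) with a quick SKU query:

# Look up the chosen VM size in the selected region; check the restrictions column
az vm list-skus --location "${AZURE_REGION}" \
  --size "${AZURE_INSTANCE_SIZE}" --output table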

Populate the kustomization.yaml file

Run the following command to update the kustomization.yaml file:

cat <<EOF > install/overlays/azure/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../yamls
images:
- name: cloud-api-adaptor
  newName: "${CAA_IMAGE}"
  newTag: "${CAA_TAG}"
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
- name: peer-pods-cm
  namespace: confidential-containers-system
  literals:
  - CLOUD_PROVIDER="azure"
  - AZURE_SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID}"
  - AZURE_REGION="${AZURE_REGION}"
  - AZURE_INSTANCE_SIZE="${AZURE_INSTANCE_SIZE}"
  - AZURE_RESOURCE_GROUP="${AZURE_RESOURCE_GROUP}"
  - AZURE_SUBNET_ID="${AZURE_SUBNET_ID}"
  - AZURE_IMAGE_ID="${AZURE_IMAGE_ID}"
  - DISABLECVM="${DISABLECVM}"
secretGenerator:
- name: peer-pods-secret
  namespace: confidential-containers-system
- name: ssh-key-secret
  namespace: confidential-containers-system
  files:
  - id_rsa.pub
patchesStrategicMerge:
- workload-identity.yaml
EOF

Copy the SSH public key so that it is accessible to the kustomization.yaml file:

cp "$SSH_KEY" install/overlays/azure/id_rsa.pub

Deploy CAA on the Kubernetes cluster

Deploy the CoCo operator. Usually it’s the same version as CAA, but it can be adjusted:

export COCO_OPERATOR_VERSION="${CAA_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v${COCO_OPERATOR_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/peer-pods?ref=v${COCO_OPERATOR_VERSION}"

Run the following command to deploy CAA:

kubectl apply -k "install/overlays/azure"

Generic CAA deployment instructions are also described here.

Run sample application

Ensure runtimeclass is present

Verify that the runtimeclass is created after deploying CAA:

kubectl get runtimeclass

Once you can find a runtimeclass named kata-remote, you can be sure that the deployment was successful. A successful deployment will look like this:

$ kubectl get runtimeclass
NAME          HANDLER       AGE
kata-remote   kata-remote   7m18s

Deploy workload

Create an nginx deployment:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      runtimeClassName: kata-remote
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        imagePullPolicy: Always
EOF

Ensure that the pod is up and running:

kubectl get pods -n default

You can verify that the peer pod VM was created by running the following command:

az vm list \
  --resource-group "${AZURE_RESOURCE_GROUP}" \
  --output table

Here you should see the VM associated with the pod nginx.

Note: If you run into problems then check the troubleshooting guide here.

Cleanup

If you wish to clean up the whole set up, you can delete the resource group by running the following command:

az group delete \
  --name "${AZURE_RESOURCE_GROUP}" \
  --yes --no-wait