AWS
This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Amazon Elastic Kubernetes Service (EKS). It explains how to deploy:
- A single worker node Kubernetes cluster using Elastic Kubernetes Service (EKS)
- CAA on that Kubernetes cluster
- An Nginx pod backed by CAA pod VM
Pre-requisites
- Install the aws CLI tool
- Install the eksctl CLI tool
- Install kubectl by following the instructions here.
- Ensure that the tools curl, git and jq are installed.
AWS Preparation
- Set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY (or AWS_PROFILE) and AWS_REGION for AWS CLI access
- Set the region:

export AWS_REGION="us-east-2"

Note: We chose the region us-east-2 because it has AMD SEV-SNP instances as well as prebuilt pod VM images readily available.
Deploy Kubernetes using EKS
Adjust the following environment variables as you see fit:
export CLUSTER_NAME="caa-$(date '+%Y%m%b%d%H%M%S')"
export CLUSTER_NODE_TYPE="m5.xlarge"
export CLUSTER_NODE_FAMILY_TYPE="Ubuntu2204"
export SSH_KEY=~/.ssh/id_rsa.pub
Example EKS cluster creation using the default AWS VPC-CNI
eksctl create cluster --name "$CLUSTER_NAME" \
    --node-type "$CLUSTER_NODE_TYPE" \
    --node-ami-family "$CLUSTER_NODE_FAMILY_TYPE" \
    --ssh-access \
    --ssh-public-key "$SSH_KEY" \
    --nodes 1 \
    --nodes-min 0 \
    --nodes-max 2 \
    --node-private-networking \
    --kubeconfig "$CLUSTER_NAME"-kubeconfig
Wait for the cluster to be created.
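Once the cluster is up, you can verify that it is reachable using the kubeconfig file written by eksctl (this assumes you ran eksctl from the current directory):

export KUBECONFIG="$PWD/${CLUSTER_NAME}-kubeconfig"
kubectl get nodes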
Allow required network ports
EKS_VPC_ID=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
--query "cluster.resourcesVpcConfig.vpcId" \
--output text)
echo $EKS_VPC_ID
EKS_CLUSTER_SG=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
--query "cluster.resourcesVpcConfig.clusterSecurityGroupId" \
--output text)
echo $EKS_CLUSTER_SG
EKS_VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids "$EKS_VPC_ID" \
--query 'Vpcs[0].CidrBlock' --output text)
echo $EKS_VPC_CIDR
# agent-protocol-forwarder port
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol tcp --port 15150 --cidr "$EKS_VPC_CIDR"
# vxlan port
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol tcp --port 9000 --cidr "$EKS_VPC_CIDR"
aws ec2 authorize-security-group-ingress --group-id "$EKS_CLUSTER_SG" --protocol udp --port 9000 --cidr "$EKS_VPC_CIDR"
Note:
- Port 15150 is the default port for CAA to connect to the agent-protocol-forwarder running inside the pod VM.
- Port 9000 is the VXLAN port used by CAA. Ensure it doesn't conflict with the VXLAN port used by the Kubernetes CNI.
Deploy CAA
Download the CAA deployment artifacts
To use the latest released version of CAA:

export CAA_VERSION="0.11.0"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
tar -xvzf "v${CAA_VERSION}.tar.gz"
cd "cloud-api-adaptor-${CAA_VERSION}/src/cloud-api-adaptor"

Alternatively, to use the latest development code from a branch:

export CAA_BRANCH="main"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/heads/${CAA_BRANCH}.tar.gz"
tar -xvzf "${CAA_BRANCH}.tar.gz"
cd "cloud-api-adaptor-${CAA_BRANCH}/src/cloud-api-adaptor"

If you have made your own changes to the CAA code, this step assumes the code is ready to use: on your terminal, change directory to your copy of the Cloud API Adaptor code base.
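Whichever option you chose, you should now be in the src/cloud-api-adaptor directory. A quick sanity check is to confirm that the AWS overlay used in the following steps is present:

ls install/overlays/aws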
CAA pod VM image
If you are using the latest released CAA version, export the prebuilt pod VM AMI ID to use for the peer pod VM:

export PODVM_AMI_ID="ami-0af256cec444be636"

If you are using a latest build, there are no pre-built pod VM AMIs. You'll need to follow these instructions to build the pod VM AMI. Once the image build is finished, export the image ID to the environment variable PODVM_AMI_ID.

If you have made changes to the CAA code that affect the pod VM image and you want to deploy those changes, then follow these instructions to build the pod VM AMI. Once the image build is finished, export the image ID to the environment variable PODVM_AMI_ID.
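Before proceeding, you can confirm that the AMI is visible in your region (the prebuilt AMI is only published in some regions, us-east-2 among them):

aws ec2 describe-images --image-ids "$PODVM_AMI_ID" \
    --query 'Images[0].[ImageId,Name,State]' --output table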
CAA container image
To use the latest release image of CAA, export the following environment variables:

export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
export CAA_TAG="v${CAA_VERSION}-amd64"

Alternatively, to use an image built by the CAA CI on each merge to main, export the following environment variable:

export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"

Find an appropriate tag of a pre-built image suitable to your needs here, then export it:

export CAA_TAG=""

Caution: You can also use the latest tag, but it is not recommended because of its lack of version control and potential for unpredictable updates, impacting stability and reproducibility in deployments.

If you have made changes to the CAA code and you want to deploy those changes, then follow these instructions to build the container image. Once the image is built, export the environment variables CAA_IMAGE and CAA_TAG.
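If you want to confirm that the chosen tag exists in the registry before deploying, one option is skopeo (an extra tool, not among the prerequisites above):

skopeo list-tags docker://quay.io/confidential-containers/cloud-api-adaptor | grep "${CAA_TAG}"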
Create the AWS credentials file
cat <<EOF > install/overlays/aws/aws-cred.env
AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
EOF
Note: The values should not be enclosed in quotes.
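If you rely on AWS_PROFILE rather than exported keys, you can populate the two variables from your profile before running the command above (aws configure get reads the credentials configured for the active profile):

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)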
Select peer-pods machine type
For a confidential (AMD SEV-SNP) pod VM:

export PODVM_INSTANCE_TYPE="m6a.large"
export DISABLECVM="false"

Find more AMD SEV-SNP machine types in this AWS documentation.

For a non-confidential pod VM:

export PODVM_INSTANCE_TYPE="t3.large"
export DISABLECVM="true"
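You can confirm that the chosen instance type is offered in your region:

aws ec2 describe-instance-type-offerings \
    --location-type region \
    --filters Name=instance-type,Values="$PODVM_INSTANCE_TYPE" \
    --query 'InstanceTypeOfferings[].InstanceType' --output text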
Populate the kustomization.yaml file
Run the following commands to set the VXLAN port (9000, matching the security group rule above) and update the kustomization.yaml file:

export VXLAN_PORT=9000
cat <<EOF > install/overlays/aws/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../yamls
images:
- name: cloud-api-adaptor
  newName: "${CAA_IMAGE}"
  newTag: "${CAA_TAG}"
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
- name: peer-pods-cm
  namespace: confidential-containers-system
  literals:
  - CLOUD_PROVIDER="aws"
  - DISABLECVM="${DISABLECVM}"
  - VXLAN_PORT="${VXLAN_PORT}"
  - PODVM_AMI_ID="${PODVM_AMI_ID}"
  - PODVM_INSTANCE_TYPE="${PODVM_INSTANCE_TYPE}"
secretGenerator:
- name: peer-pods-secret
  namespace: confidential-containers-system
  envs:
  - aws-cred.env
EOF
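You can render the overlay locally to confirm that the substituted values look right before deploying:

kubectl kustomize install/overlays/aws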
Deploy CAA on the Kubernetes cluster
Label the cluster nodes with node.kubernetes.io/worker=
for NODE_NAME in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
kubectl label node $NODE_NAME node.kubernetes.io/worker=
done
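Verify that the label was applied:

kubectl get nodes -L node.kubernetes.io/worker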
Deploy the CoCo operator. Usually its version matches the CAA version, but it can be adjusted:
export COCO_OPERATOR_VERSION="${CAA_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v${COCO_OPERATOR_VERSION}"
kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/peer-pods?ref=v${COCO_OPERATOR_VERSION}"
Run the following command to deploy CAA:
kubectl apply -k "install/overlays/aws"
Generic CAA deployment instructions are also described here.
Run sample application
Ensure runtimeclass is present
Verify that the runtimeclass is created after deploying CAA:

kubectl get runtimeclass

Once you find a runtimeclass named kata-remote, you can be sure that the deployment was successful. A successful deployment will look like this:
$ kubectl get runtimeclass
NAME HANDLER AGE
kata-remote kata-remote 7m18s
Deploy workload
Create an nginx
deployment:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      runtimeClassName: kata-remote
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        imagePullPolicy: Always
EOF
Ensure that the pod is up and running:
kubectl get pods -n default
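Because the pod VM is provisioned on demand, the pod can take a few minutes to become ready. You can block until it is (the app=nginx label comes from the deployment above):

kubectl wait --for=condition=Ready pod -l app=nginx -n default --timeout=300s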
You can verify that the peer pod VM was created by running the following command:
aws ec2 describe-instances --filters "Name=tag:Name,Values=podvm*" \
--query 'Reservations[*].Instances[*].[InstanceId, Tags[?Key==`Name`].Value | [0]]' --output table
Here you should see the VM associated with the pod nginx
.
Note: If you run into problems, check the troubleshooting guide here.
Cleanup
Delete all running pods that use the runtimeclass kata-remote. You can list them with the following command:
kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,RUNTIMECLASS:.spec.runtimeClassName' | grep kata-remote | awk '{print $1, $2}'
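To delete them in one pass, you can feed that list into kubectl delete (a sketch; review the list before running it):

kubectl get pods -A -o custom-columns='NAME:.metadata.name,NAMESPACE:.metadata.namespace,RUNTIMECLASS:.spec.runtimeClassName' \
    | grep kata-remote \
    | while read -r POD NS RC; do kubectl delete pod "$POD" -n "$NS"; done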
Verify that all peer-pod VMs are deleted. You can use the following command to list all the peer-pod VMs (VMs whose names start with podvm) and their status:
aws ec2 describe-instances --filters "Name=tag:Name,Values=podvm*" \
--query 'Reservations[*].Instances[*].[InstanceId, Tags[?Key==`Name`].Value | [0], State.Name]' --output table
Delete the EKS cluster by running the following command:
eksctl delete cluster --name="$CLUSTER_NAME"