Alibaba Cloud

Cloud API Adaptor (CAA) on Alibaba Cloud

This documentation will walk you through setting up CAA (a.k.a. Peer Pods) on Alibaba Cloud Container Service for Kubernetes (ACK) and Alibaba Cloud Elastic Compute Service (ECS). It explains how to deploy:

  • One worker node for an ACK Managed Cluster
  • CAA on that Kubernetes cluster
  • An Nginx pod backed by CAA pod VM on ECS

Note: Run the following commands from the src/cloud-api-adaptor directory.

Note: Confidential Computing instances are currently available in select regions.

Note: The official documentation from Alibaba Cloud can be found here.

Prerequisites

Install Required Tools:

Create pod VM Image

Note: There is a pre-built Community Image (id:m-2ze1w9aj2aonwckv64cw) for version 0.13.0 in cn-beijing that you can use for testing.

If you want to build a pod VM image yourself, follow the steps below.

  1. Create pod VM image.
  PODVM_DISTRO=alinux \
  CLOUD_PROVIDER=alibabacloud \
  IMAGE_URL=https://alinux3.oss-cn-hangzhou.aliyuncs.com/aliyun_3_x64_20G_nocloud_alibase_20250117.qcow2 \
  make podvm-builder podvm-binaries podvm-image

The built image will be available in the root path of the newly built docker image quay.io/confidential-containers/podvm-alibabacloud-alinux-amd64:<sha256>, with a name like podvm-*.qcow2. You need to export it from the container image.

  2. Upload to OSS storage and create an ECS image.

You will then need to upload the Pod VM image to OSS (Object Storage Service).

export REGION_ID=<region-id>
export IMAGE_FILE=<path-to-qcow2-file>
export BUCKET=<OSS-bucket-name>
export OBJECT=<object-name>

aliyun oss cp ${IMAGE_FILE} oss://${BUCKET}/${OBJECT}

Then, register the image file as an ECS image:

export IMAGE_NAME=$(basename ${IMAGE_FILE%.*})
aliyun ecs ImportImage --ImageName ${IMAGE_NAME} \
    --region ${REGION_ID} --RegionId ${REGION_ID} \
    --BootMode UEFI \
    --DiskDeviceMapping.1.OSSBucket ${BUCKET} --DiskDeviceMapping.1.OSSObject ${OBJECT} \
    --Features.NvmeSupport supported \
    --method POST --force
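For reference, the IMAGE_NAME derivation above strips the directory and the file extension from the qcow2 path; a quick illustration with a made-up sample path:

```shell
# Illustrative only: show what ${VAR%.*} plus basename produce for a sample path.
IMAGE_FILE=/tmp/podvm-alibabacloud-alinux-amd64.qcow2
# ${IMAGE_FILE%.*} drops the ".qcow2" suffix; basename drops the directory.
IMAGE_NAME=$(basename ${IMAGE_FILE%.*})
echo "${IMAGE_NAME}"   # → podvm-alibabacloud-alinux-amd64
```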

export POD_IMAGE_ID=<ImageId>

Build CAA development image

If you want to build the CAA DaemonSet image yourself:

export registry=<registry-address>
export RELEASE_BUILD=true
export CLOUD_PROVIDER=alibabacloud
make image

After that, take note of the tag used for this image; we will use it later.

Deploy Kubernetes using ACK Managed Cluster

  1. Create an ACK Managed Cluster.
  export CONTAINER_CIDR=172.18.0.0/16
  export REGION_ID=cn-beijing
  export ZONES='["cn-beijing-i"]'

  aliyun cs CreateCluster --header "Content-Type=application/json" --body "
  {
    \"cluster_type\":\"ManagedKubernetes\",
    \"name\":\"caa\",
    \"region_id\":\"${REGION_ID}\",
    \"zone_ids\":${ZONES},
    \"enable_rrsa\":true,
    \"container_cidr\":\"${CONTAINER_CIDR}\",
    \"addons\":[
      {
        \"name\":\"flannel\"
      }
    ]
  }"

  export CLUSTER_ID=<cluster-id>
  export SECURITY_GROUP_ID=$(aliyun cs DescribeClusterDetail --ClusterId ${CLUSTER_ID} | jq -r  ".security_group_id")

Wait for the cluster to be created. Get the vSwitch IDs of the cluster and export them as VSWITCH_IDS (a JSON array of IDs). Then add one worker node to the cluster.
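The vSwitch IDs come back as a JSON array, which the next step flattens to a single ID with sed; a minimal sketch of that flattening, using a made-up sample value:

```shell
# Sample stand-in for the array returned for the cluster (ID is made up):
VSWITCH_IDS='["vsw-2zexample123"]'
# Strip the JSON brackets and quotes to get the bare vSwitch ID
VSWITCH_ID=$(echo "${VSWITCH_IDS}" | sed 's/[][]//g' | sed 's/"//g')
echo "${VSWITCH_ID}"   # → vsw-2zexample123
```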

  2. Add Internet access for the cluster VPC
  export VPC_ID=$(aliyun cs DescribeClusterDetail --ClusterId ${CLUSTER_ID} | jq -r ".vpc_id")
  export VSWITCH_ID=$(echo ${VSWITCH_IDS} | sed 's/[][]//g' | sed 's/"//g')
  aliyun vpc CreateNatGateway \
    --region ${REGION_ID} \
    --RegionId ${REGION_ID} \
    --VpcId ${VPC_ID} \
    --NatType Enhanced \
    --VSwitchId ${VSWITCH_ID} \
    --NetworkType internet

  export GATEWAY_ID="<NatGatewayId>"
  export SNAT_TABLE_ID="<SnatTableId>"

  # The bandwidth of the public IP (Mbps)
  export BAND_WIDTH=5
  aliyun vpc AllocateEipAddress \
    --region ${REGION_ID} \
    --RegionId ${REGION_ID} \
    --Bandwidth ${BAND_WIDTH}

  export EIP_ID="<AllocationId>"
  export EIP_ADDRESS="<EipAddress>"

  aliyun vpc AssociateEipAddress \
    --region ${REGION_ID} \
    --RegionId ${REGION_ID} \
    --AllocationId ${EIP_ID} \
    --InstanceId ${GATEWAY_ID} \
    --InstanceType Nat

  aliyun vpc CreateSnatEntry \
    --region ${REGION_ID} \
    --RegionId ${REGION_ID} \
    --SnatTableId ${SNAT_TABLE_ID} \
    --SourceVSwitchId ${VSWITCH_ID} \
    --SnatIp ${EIP_ADDRESS}
  3. Grant role permissions. Give the cluster role permission so the worker can create ECS instances.
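The exact permissions depend on your environment; as a rough illustration only (not an official minimal policy), a RAM policy document allowing the worker to manage ECS instances might look like this:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:RunInstances",
        "ecs:CreateInstance",
        "ecs:DeleteInstance",
        "ecs:DescribeInstances",
        "ecs:StopInstance"
      ],
      "Resource": "*"
    }
  ]
}
```

Attach such a policy to the role used by the worker; consult the Alibaba Cloud RAM documentation for the authoritative action list.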

Deploy the CAA Helm Chart

Download the CAA Helm deployment artifacts, either from a tagged release:

export CAA_VERSION="0.17.0"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/tags/v${CAA_VERSION}.tar.gz"
tar -xvzf "v${CAA_VERSION}.tar.gz"
cd "cloud-api-adaptor-${CAA_VERSION}/src/cloud-api-adaptor/install/charts/peerpods"

or from the head of a branch:

export CAA_BRANCH="main"
curl -LO "https://github.com/confidential-containers/cloud-api-adaptor/archive/refs/heads/${CAA_BRANCH}.tar.gz"
tar -xvzf "${CAA_BRANCH}.tar.gz"
cd "cloud-api-adaptor-${CAA_BRANCH}/src/cloud-api-adaptor/install/charts/peerpods"

Alternatively, if you already have the code checked out, change directory to the Cloud API Adaptor code base on your terminal.

Export PodVM image version

Export the PodVM image ID used by peer pods. This variable tells the deployment tooling which PodVM image to use when creating peer pod virtual machines in Alibaba Cloud.

export IMAGEID="m-2zef6zaa0j0qz3sunhjp"

Note: Alibaba Cloud builds the images ahead of time. Different regions have different image IDs.

region          IMAGEID
cn-beijing      m-2zef6zaa0j0qz3sunhjp
ap-southeast-1  m-t4n9ocuen5sy6rhbxbk1
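If you script the deployment across regions, the mapping above can be captured in a small case statement (IDs copied from the table; extend as new regions are added):

```shell
REGION_ID="cn-beijing"
# Pick the pre-built pod VM image ID for the selected region
case "${REGION_ID}" in
  cn-beijing)     IMAGEID="m-2zef6zaa0j0qz3sunhjp" ;;
  ap-southeast-1) IMAGEID="m-t4n9ocuen5sy6rhbxbk1" ;;
  *)              echo "no pre-built pod VM image known for ${REGION_ID}" >&2 ;;
esac
echo "${IMAGEID}"
```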

Export CAA container image path

Define the Cloud API Adaptor (CAA) container image to deploy. These variables tell the deployment tooling which CAA image and architecture-specific tag to pull and run. The tag is derived from the CAA release version to ensure compatibility with the selected PodVM image and configuration.

Export the following environment variable to use the latest release image of CAA:

export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"
export CAA_TAG="v${CAA_VERSION}-amd64"
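For example, with CAA_VERSION set to 0.17.0 the tag expands as follows:

```shell
CAA_VERSION="0.17.0"
# The architecture-specific tag is derived from the release version
CAA_TAG="v${CAA_VERSION}-amd64"
echo "${CAA_TAG}"   # → v0.17.0-amd64
```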

Export the following environment variable to use the image built by the CAA CI on each merge to main:

export CAA_IMAGE="quay.io/confidential-containers/cloud-api-adaptor"

Find an appropriate tag for the pre-built image that suits your needs here.

export CAA_TAG=""

Caution: You can also use the latest tag, but this is not recommended: it lacks version pinning and can pull unpredictable updates, impacting the stability and reproducibility of deployments.

If you have made changes to the CAA code that you want to deploy, follow these instructions to build the container image. Once the image is built, export the environment variables CAA_IMAGE and CAA_TAG accordingly.

Select peer-pods machine type

export PODVM_INSTANCE_TYPE="ecs.g8i.xlarge"
export DISABLECVM="false"

Note: See the official document for more instance types that support confidential computing.

Populate the providers/alibabacloud.yaml file

A list of all available configuration options can be found in two places:

Run the following command to update the providers/alibabacloud.yaml file:

cat <<EOF > providers/alibabacloud.yaml
provider: alibabacloud
image:
  name: "${CAA_IMAGE}"
  tag: "${CAA_TAG}"
providerConfigs:
   alibabacloud:
      IMAGEID: "${IMAGEID}"
      REGION: "${REGION_ID}"
      SECURITY_GROUP_IDS: "${SECURITY_GROUP_ID}"
      VSWITCH_ID: "${VSWITCH_ID}"
      DISABLECVM: ${DISABLECVM}
alibabacloud:
  rrsa:
    enable: true
EOF

Note: If you are not using RRSA for auth, set alibabacloud.rrsa.enable to false in the YAML.
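With RRSA disabled, the tail of providers/alibabacloud.yaml would instead read (and the access key pair must then be supplied via the Kubernetes secret):

```yaml
alibabacloud:
  rrsa:
    enable: false
```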

Deploy helm chart on the Kubernetes cluster

  1. Create namespace managed by Helm:

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Namespace
    metadata:
     name: confidential-containers-system
     labels:
       app.kubernetes.io/managed-by: Helm
     annotations:
       meta.helm.sh/release-name: peerpods
       meta.helm.sh/release-namespace: confidential-containers-system
    EOF
    
  2. Create the secret using kubectl:

    See providers/alibabacloud-secrets.yaml.template for required keys.

    Note: The example below assumes that you are using RRSA for auth, so ALIBABACLOUD_ACCESS_KEY_ID and ALIBABACLOUD_ACCESS_KEY_SECRET are not provided, while ALIBABA_CLOUD_ROLE_ARN and ALIBABA_CLOUD_OIDC_PROVIDER_ARN are.

    kubectl create secret generic my-provider-creds \
    -n confidential-containers-system \
    --from-literal=ALIBABA_CLOUD_ROLE_ARN=${ALIBABA_CLOUD_ROLE_ARN} \
    --from-literal=ALIBABA_CLOUD_OIDC_PROVIDER_ARN=${ALIBABA_CLOUD_OIDC_PROVIDER_ARN} \
    --from-literal=ALIBABA_CLOUD_OIDC_TOKEN_FILE=/var/run/secrets/ack.alibabacloud.com/rrsa-tokens/token
    
  3. Install helm chart:

    The command below uses the customization options -f and --set, which are described here.

    helm install peerpods . \
      -f providers/alibabacloud.yaml \
      --set secrets.mode=reference \
      --set secrets.existingSecretName=my-provider-creds \
      --dependency-update \
      -n confidential-containers-system
    

Generic peer pods Helm chart deployment instructions are also described here.

Run sample application

Ensure runtimeclass is present

Verify that the runtimeclass is created after deploying CAA:

kubectl get runtimeclass

Once you see a runtimeclass named kata-remote, you can be sure the deployment was successful. A successful deployment will look like this:

$ kubectl get runtimeclass
NAME          HANDLER       AGE
kata-remote   kata-remote   7m18s

Deploy workload

Create an nginx deployment:

echo '
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  runtimeClassName: kata-remote
  containers:
  - name: nginx
    image: registry.openanolis.cn/openanolis/nginx:1.14.1-8.6
' | kubectl apply -f -

Ensure that the pod is up and running:

kubectl get pods -n default

You can verify that the peer-pod VM was created by running the following command:

aliyun ecs DescribeInstances --RegionId ${REGION_ID} --InstanceName 'podvm-*'

Here you should see the VM associated with the nginx pod. If you run into problems, check the troubleshooting guide here.

Attestation

TODO

Cleanup

Delete all running pods that use the runtimeClass kata-remote.

Verify that all peer-pod VMs are deleted. You can use the following command to list all peer-pod VMs (instances whose name has the prefix podvm) and their status:

aliyun ecs DescribeInstances --RegionId ${REGION_ID} --InstanceName 'podvm-*'
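If you want a scripted check that no peer-pod VMs remain, the TotalCount field of the DescribeInstances response can be inspected; a sketch against a hard-coded sample response (the exact JSON shape may vary, so treat this as illustrative):

```shell
# Sample stand-in for the JSON returned by the DescribeInstances call above
RESPONSE='{"TotalCount":0,"Instances":{"Instance":[]}}'
# Extract the numeric TotalCount field from the response
COUNT=$(echo "${RESPONSE}" | sed -n 's/.*"TotalCount":\([0-9]*\).*/\1/p')
if [ "${COUNT}" -eq 0 ]; then
  echo "all peer-pod VMs deleted"
fi
```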

Delete the ACK cluster by running the following command:

aliyun cs DELETE /clusters/${CLUSTER_ID} --region ${REGION_ID} --keep_slb false --retain_all_resources false --header "Content-Type=application/json;" --body "{}"