Attestation
Trusted Components for Attestation and Secret Management
Before a confidential workload is granted access to sensitive data, it should be attested.
Attestation provides guarantees about the TCB, isolation properties, and root of trust of the enclave.
Confidential Containers uses Trustee to verify attestations and conditionally release secrets.
Trustee can attest any confidential workload, but it is most closely integrated with Confidential Containers.
Trustee should be deployed in a trusted environment, given that its role of validating guests and releasing secrets
is inherently sensitive.
There are several ways to configure and deploy Trustee.
Architecture
Trustee is a composition of a few different services, which can be deployed in several different configurations.
This figure shows one common way to deploy these components in conjunction with certain guest components.
flowchart LR
AA -- attests guest ----> KBS
CDH -- requests resource --> KBS
subgraph Guest
CDH <.-> AA
end
subgraph Trustee
KBS -- validates evidence --> AS
RVPS -- provides reference values--> AS
end
client-tool -- configures --> KBS
Components
Component | Name | Purpose
KBS | Key Broker Service | Facilitates attestation and conditionally releases secrets
AS | Attestation Service | Validates hardware evidence
RVPS | Reference Value Provider Service | Manages reference values
CDH | Confidential Data Hub | Handles confidential operations in the guest
AA | Attestation Agent | Gets hardware evidence in the guest
KBS Protocol
Trustee and the guest components establish a secure channel in conjunction with attestation.
This connection follows the KBS protocol, which is specified here.
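As a rough sketch of this flow (assuming the current /kbs/v0 endpoints; the protocol specification is authoritative for the exact message formats):
# 1. The Attestation Agent opens a session and receives a challenge (nonce)
#      POST /kbs/v0/auth      body: Request { version, tee, extra-params }
# 2. The AA responds with hardware evidence bound to the nonce plus a TEE public key
#      POST /kbs/v0/attest    body: Attestation { tee-pubkey, tee-evidence }
# 3. Once the Attestation Service validates the evidence, the guest can fetch secrets,
#    which the KBS returns encrypted to the TEE public key
#      GET  /kbs/v0/resource/<repository>/<type>/<tag>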
1 - Installation
Installing Trustee
Trustee can be deployed in several different configurations.
In any case, Trustee should be deployed in a trusted environment.
This could be a local server, some trusted third party, or even another enclave.
Official support for deploying Trustee inside of Confidential Containers
is being developed.
1.1 - Trustee Operator
Installing Trustee on Kubernetes
Trustee can be installed on Kubernetes using the Trustee operator.
When running Trustee in Kubernetes with the operator, the cluster must be trusted.
Install the operator
First, clone the Trustee operator.
git clone https://github.com/confidential-containers/trustee-operator.git
Install the operator.
make deploy IMG=quay.io/confidential-containers/trustee-operator:latest
Verify that the controller is running.
kubectl get pods -n trustee-operator-system --watch
The operator controller should be running.
NAME READY STATUS RESTARTS AGE
trustee-operator-controller-manager-6fb5bb5bd9-22wd6 2/2 Running 0 25s
Deploy Trustee
A simple configuration is provided.
You will need to generate an authentication key.
cd config/samples/microservices
# or config/samples/all-in-one for the integrated mode
# create authentication keys
openssl genpkey -algorithm ed25519 > privateKey
openssl pkey -in privateKey -pubout -out kbs.pem
# create all the needed resources
kubectl apply -k .
Check that the Trustee deployment is running.
kubectl get pods -n trustee-operator-system --selector=app=kbs
The Trustee deployment should be running.
NAME READY STATUS RESTARTS AGE
trustee-deployment-78bd97f6d4-nxsbb 3/3 Running 0 4m3s
Uninstall
Remove the Trustee CRD.
Remove the controller.
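For reference, a sketch of the corresponding commands, assuming the sample kustomization used above and the standard kubebuilder-style Makefile targets (adjust paths and targets to your checkout):
# delete the sample Trustee deployment (use all-in-one if that is what you applied)
kubectl delete -k config/samples/microservices
# remove the CRDs
make uninstall
# remove the operator controller
make undeploy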
1.2 - Trustee in Docker
Installing Trustee with Docker Compose
Trustee can be installed using Docker Compose.
Installation
Clone the Trustee repo.
git clone https://github.com/confidential-containers/trustee.git
Setup authentication keys.
openssl genpkey -algorithm ed25519 > kbs/config/private.key
openssl pkey -in kbs/config/private.key -pubout -out kbs/config/public.pub
Run Trustee.
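For example, assuming the Compose file at the root of the Trustee repository:
docker compose up -d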
Uninstall
Stop Trustee.
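Assuming the same Compose file:
docker compose down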
2 - KBS Client Tool
Simple tool to test or configure Key Broker Service and Attestation Service
Trustee can be configured using the KBS Client tool.
Other parts of this documentation may assume that this tool is installed.
The KBS Client can also be used inside of an enclave to retrieve resources
from Trustee, either for testing, or as part of a confidential workload
that does not use Confidential Containers.
When using the KBS Client to retrieve resources, it must be built with an additional feature
which is described below.
Generally, the KBS Client should not be used for retrieving secrets from
inside a confidential container. Instead, see secret resources.
Install
The KBS Client tool can be installed in two ways.
With ORAS
Pull the KBS Client with ORAS.
oras pull ghcr.io/confidential-containers/staged-images/kbs-client:latest
chmod +x kbs-client
This version of the KBS Client does not support getting resources
from inside of an enclave.
From source
Clone the Trustee repo.
git clone https://github.com/confidential-containers/trustee.git
Build the client
cd kbs
make CLI_FEATURES=sample_only cli
sudo make install-cli
The sample_only feature is used to avoid building hardware attesters into the KBS Client; the attesters are only needed when using the client inside an enclave to retrieve resources.
If you would like to use the KBS Client inside of an enclave to retrieve secrets, remove the sample_only feature.
This will build the client with all attesters, which will require extra dependencies.
Usage
Other pages show how the client can be used for specific scenarios.
In general most commands will have the same form.
Here is an example command that sets a resource in the KBS.
kbs-client --url <url-of-kbs> config \
--auth-private-key <admin-private-key> set-resource \
--path <resource-name> --resource-file <path-to-resource-file>
URL
All kbs-client commands must take a --url flag.
This should point to the KBS.
Depending on how and where Trustee is deployed, the URL will be different.
For example, if you use the Docker Compose deployment, the KBS URL will typically be the IP of your local node with port 8080.
Private Key
All kbs-client config commands must take an --auth-private-key flag.
Configuring the KBS is a privileged operation so the configuration endpoint
must be protected.
The admin private key is usually set before deploying Trustee.
Refer to the installation guide for where your private key is stored,
and point the client to it.
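For example, with the Docker Compose deployment above the admin key was written to kbs/config/private.key, so a command to set a resource might look like this (the URL, resource path, and file name are illustrative):
kbs-client --url http://127.0.0.1:8080 config \
  --auth-private-key kbs/config/private.key set-resource \
  --path default/keys/dummy --resource-file ./dummy_data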
3 - Resources
Managing secret resources with Trustee
Trustee, and the KBS in particular, is built around providing secrets to workloads.
These secrets are fulfilled by secret backend plugins, the most common of which
is the resource plugin.
In practice the terms secret and resource are used interchangeably.
There are several ways to configure secret resources.
Identifying Resources
Confidential Containers and Trustee use the Resource URI scheme to identify resources.
A full Resource URI has the form kbs://<kbs_host>:<kbs_port>/<repository>/<type>/<tag>, but in practice the KBS host and port are ignored to avoid coupling a resource to a particular IP and port.
Instead, resources are typically expressed as kbs:///<repository>/<type>/<tag>.
Often resources are referred to simply as <repository>/<type>/<tag>.
This is what the KBS Client refers to as the resource path.
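For example (an illustrative secret where the repository is default, the type is keys, and the tag is key1), the following refer to the same resource:
kbs:///default/keys/key1    # full resource URI with host and port omitted
default/keys/key1           # the corresponding resource path used by the KBS Client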
Providing Resources to Trustee
By default, Trustee supports a few different ways of provisioning resources.
KBS Client
You can use the KBS Client to provision resources.
kbs-client --url <kbs-url> config \
--auth-private-key <admin-private-key> set-resource \
--path <resource-path> --resource-file <path-to-resource-file>
For more details about using the KBS Client, refer to the KBS Client page.
Filesystem Backend
By default Trustee will store resources on the filesystem.
You can provision resources by placing them into the appropriate directory.
When using Kubernetes, you can inject a secret like this.
cat "$KEY_FILE" | kubectl exec -i deploy/trustee-deployment -n trustee-operator-system -- tee "/opt/confidential-containers/kbs/repository/${KEY_PATH}" > /dev/null
This approach can be extended to integrate the resource backend with other Kubernetes components.
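For instance, the variables in the command above could be set like this (illustrative names):
export KEY_FILE=./mysecret.key          # local file holding the secret
export KEY_PATH=default/keys/mysecret   # repository/type/tag under the resource root
After injection, the secret is addressable as kbs:///default/keys/mysecret.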
Operator
If you are using the Trustee operator, you can specify Kubernetes secrets that will be propagated to the resource backend.
A secret can be added like this
kubectl create secret generic kbsres1 --from-literal key1=res1val1 --from-literal key2=res1val2 -n kbs-operator-system
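Assuming the operator's default mapping of Kubernetes secrets into the repository/type/tag hierarchy, the keys of the secret above would typically be addressable as:
kbs:///default/kbsres1/key1
kbs:///default/kbsres1/key2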
Advanced configurations
There are additional plugins and additional backends for the resource plugin.
For example, Trustee can integrate with Azure Key Vault or PKCS11 HSMs.
3.1 - Resource Backends
Backends for resource storage
Resource Storage Backend
KBS stores confidential resources through a StorageBackend abstraction specified by a Rust trait.
The StorageBackend interface can be implemented for different storage backends, such as databases or local file systems.
The KBS config file defines which resource backend KBS will use. The default is the local file system (LocalFs).
Local File System Backend
With the local file system backend default implementation, each resource
file maps to a KBS resource URL. The file path to URL conversion scheme is
defined below:
Resource File Path | Resource URL
file://<$(KBS_REPOSITORY_DIR)>/<repository_name>/<type>/<tag> | https://<kbs_address>/kbs/v0/resource/<repository_name>/<type>/<tag>
The KBS root file system resource path is specified in the KBS config file as well, and the default value is /opt/confidential-containers/kbs/repository.
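For example, with the default repository directory, a file placed at the following path (an illustrative resource) corresponds to the URL and URI below it:
/opt/confidential-containers/kbs/repository/default/keys/key1
https://<kbs_address>/kbs/v0/resource/default/keys/key1
kbs:///default/keys/key1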
Aliyun KMS
Alibaba Cloud KMS (a.k.a. Aliyun KMS) can also work as the KBS resource storage backend.
In this mode, resources are stored as generic secrets in a KMS instance.
A KBS can be configured to use a specific KMS instance via the repository_config field of the KBS launch config.
The required credential materials can be found in the KMS instance's AAP (Application Access Point).
When a resource is accessed, a resource URI of kbs:///repo/type/tag is translated into the generic secret named tag; the repo/type fields are ignored.
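For example (illustrative names):
kbs:///default/mysql/password   ->   generic secret named "password" in the KMS instance (default/mysql is ignored)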
Pkcs11
The Pkcs11 backend uses Pkcs11 to store plaintext resources
in an HSM.
Pkcs11 is a broad specification supporting many cryptographic operations.
Here we make use only of a small subset of these features.
Often with Pkcs11 an HSM is used to wrap and unwrap keys or store wrapped keys.
Here we do something simpler. Since the KBS expects resources to be
in plaintext, we store these resources in the HSM as secret keys
of the generic secret type.
This storage backend will provision resources to the HSM in the expected way when a user uploads a resource to the KBS.
The user must simply specify the location of an initialized HSM slot.
Keys can also be provisioned to the HSM separately
but they must have the expected attributes.
The Pkcs11 backend is configured with the following values.
module
The module path should point to a binary implementing Pkcs11 for the HSM that you want to use. For example, if you are using SoftHSM, you might set the module path to /usr/local/lib/softhsm/libsofthsm2.so.
slot_index
The slot index points to the slot in your HSM where the secrets will be stored.
The slot must be initialized before starting the KBS.
If no slot_index is set, the first slot will be used.
pin
The user password for authenticating a session with the above slot.
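For instance, with SoftHSM a token can be initialized in a free slot as follows (label and PINs are illustrative); the pin value above would then be the user PIN chosen here:
softhsm2-util --init-token --free --label kbs --so-pin 123456 --pin 987654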
3.2 - Azure Key Vault Integration
This documentation describes how to mount secrets stored in Azure Key Vault into a KBS deployment
Premise
AKS
We assume an AKS cluster configured with Workload Identity and Key Vault Secrets Provider. The former provides a KBS pod with the privileges to access an Azure Key Vault (AKV) instance. The latter is an implementation of Kubernetes’ Secret Store CSI Driver, mapping secrets from external key vaults into pods. The guides below provide instructions on how to configure a cluster accordingly:
AKV
There should be an AKV instance that has been configured with role-based access control (RBAC), containing two secrets named coco_one and coco_two for the purpose of the example. Find out how to configure your instance for RBAC in the guide below.
Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control
Note: You might have to toggle between Access Policy and RBAC modes to create your secrets on the CLI or via the Portal if your user doesn’t have the necessary role assignments.
CoCo
While the steps describe a deployment of KBS, the configuration of a Confidential Containers environment is out of scope for this document. CoCo should be configured with KBS as a Key Broker Client (KBC) and the resulting KBS deployment should be available and configured for confidential pods.
Azure environment
Configure your Resource group, Subscription and AKS cluster name. Adjust accordingly:
export SUBSCRIPTION_ID="$(az account show --query id -o tsv)"
export RESOURCE_GROUP=my-group
export KEYVAULT_NAME=kbs-secrets
export CLUSTER_NAME=coco
Instructions
Create Identity
Create a User managed identity for KBS:
az identity create --name kbs -g "$RESOURCE_GROUP"
export KBS_CLIENT_ID="$(az identity show -g "$RESOURCE_GROUP" --name kbs --query clientId -o tsv)"
export KBS_TENANT_ID=$(az aks show --name "$CLUSTER_NAME" --resource-group "$RESOURCE_GROUP" --query identity.tenantId -o tsv)
Assign a role to access secrets:
export KEYVAULT_SCOPE=$(az keyvault show --name "$KEYVAULT_NAME" --query id -o tsv)
az role assignment create --role "Key Vault Administrator" --assignee "$KBS_CLIENT_ID" --scope "$KEYVAULT_SCOPE"
Namespace
By default KBS is deployed into the coco-tenant Namespace:
export NAMESPACE=coco-tenant
kubectl create namespace $NAMESPACE
KBS identity and Service Account
Workload Identity provides individual pods with IAM privileges to access Azure infrastructure resources. An Azure identity is bridged to a Service Account using OIDC and Federated Credentials. Those are scoped to a Namespace; we assume the Service Account and KBS are deployed into the coco-tenant Namespace created above. Adjust accordingly if necessary.
export AKS_OIDC_ISSUER="$(az aks show --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME" --query "oidcIssuerProfile.issuerUrl" -o tsv)"
az identity federated-credential create \
--name kbsfederatedidentity \
--identity-name kbs \
--resource-group "$RESOURCE_GROUP" \
--issuer "$AKS_OIDC_ISSUER" \
--subject "system:serviceaccount:${NAMESPACE}:kbs"
Create a Service Account object and annotate it with the identity’s client id.
cat <<EOF> service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
azure.workload.identity/client-id: ${KBS_CLIENT_ID}
name: kbs
namespace: ${NAMESPACE}
EOF
kubectl apply -f service-account.yaml
Secret Provider Class
A Secret Provider Class specifies a set of secrets that should be made available to k8s workloads.
cat <<EOF> secret-provider-class.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: ${KEYVAULT_NAME}
namespace: ${NAMESPACE}
spec:
provider: azure
parameters:
usePodIdentity: "false"
clientID: ${KBS_CLIENT_ID}
keyvaultName: ${KEYVAULT_NAME}
objects: |
array:
- |
objectName: coco_one
objectType: secret
- |
objectName: coco_two
objectType: secret
tenantId: ${KBS_TENANT_ID}
EOF
kubectl create -f secret-provider-class.yaml
Deploy KBS
The default KBS deployment needs to be extended with Workload Identity labels, a Service Account, and a CSI volume. The secrets are mounted into the storage hierarchy default/akv.
git clone https://github.com/confidential-containers/kbs.git
cd kbs
git checkout v0.8.2
cd kbs/config/kubernetes
mkdir akv
cat <<EOF> akv/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: coco-tenant
resources:
- ../base
patches:
- path: patch.yaml
target:
group: apps
kind: Deployment
name: kbs
version: v1
EOF
cat <<EOF> akv/patch.yaml
- op: add
path: /spec/template/metadata/labels/azure.workload.identity~1use
value: "true"
- op: add
path: /spec/template/spec/serviceAccountName
value: kbs
- op: add
path: /spec/template/spec/containers/0/volumeMounts/-
value:
name: secrets
mountPath: /opt/confidential-containers/kbs/repository/default/akv
readOnly: true
- op: add
path: /spec/template/spec/volumes/-
value:
name: secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: ${KEYVAULT_NAME}
EOF
kubectl apply -k akv/
Test
The KBS pod should be running; the pod events should give an indication of possible errors. From a confidential pod, the AKV secrets should be retrievable via the Confidential Data Hub:
$ kubectl exec -it deploy/nginx-coco -- curl http://127.0.0.1:8006/cdh/resource/default/akv/coco_one
a secret
4 - Policies
Controlling the KBS with policies
Trustee allows users to create policies that govern when secrets are released.
Trustee has two different types of policies, resource policies and the attestation policy, which serve distinct purposes.
Resource policies control which secrets are released and are generally scoped to the workload.
Attestation policies define how TCB claims are compared to reference values to determine
if the enclave is in a valid state. Generally this policy reflects the enclave boot flow.
Both policies use OPA.
Provisioning Policies
Both types of policies can be provisioned with the KBS Client.
kbs-client --url <kbs-url> config \
--auth-private-key <admin-private-key> set-resource-policy \
--policy-file <path-to-policy-file>
The attestation policy is set in a similar manner.
kbs-client --url <kbs-url> config \
--auth-private-key <admin-private-key> set-attestation-policy \
--policy-file <path-to-policy-file>
Resource Policies
Resource policies, also known as KBS Policies, control whether a resource is released to a guest.
The input to the resource policies is the URI of the resource that is being requested and the attestation token that was generated by the attestation service for the guest making the request.
Basic Policies
The simplest possible policies either allow or reject all requests.
package policy
default allow = true
Allowing all requests is generally not secure.
By default the resource policy allows all requests as long as the evidence
does not come from the sample attester.
This means that some TEE must have been used to request the secret
although it makes no guarantees about the TCB.
If you are testing Trustee without a TEE (with the sample evidence)
the default policy will block all of your requests.
Usually the policy should check if the attestation token represents a valid TCB.
The attestation token is an EAR token, so we can check if it is contraindicated.
The status field in the EAR token represents an AR4SI Trustworthiness Tier.
There are four tiers: Contraindicated, Warning, Affirming, and None.
Ideally secrets should only be released when the token affirms the guest TCB.
package policy
import rego.v1
default allow = false
allow if {
input["submods"]["cpu"]["ear.status"] != "contraindicated"
}
Usually the attestation service must be provisioned with reference values to return
attestation tokens that are not contraindicated. This is described in upcoming sections.
A more advanced policy could check that the token is not contraindicated and that the enclave
is of a certain type. For example, this policy will only allow requests if the evidence
is not contraindicated and comes from an SNP guest.
package policy
import rego.v1
default allow = false
allow if {
input["submods"]["cpu"]["ear.status"] != "contraindicated"
input["submods"]["cpu"]["ear.veraison.annotated-evidence"]["snp"]
}
Advanced Policies
The EAR attestation token offers a generic yet detailed description of the guest TCB status.
In addition to whether or not a module (such as the CPU) is contraindicated, an EAR appraisal
can address eight different facets of the TCB.
These are instance_identity, configuration, executables, file_system, hardware, runtime_opaque, storage_opaque, and sourced_data.
Not all of these vectors are in scope for Confidential Containers.
See the next section for how these vectors are calculated.
A resource policy can check each of these values.
For instance, this policy builds on the previous one to make sure that, in addition to not being contraindicated, the executables trust vector has a particular value.
package policy
import rego.v1
default allow = false
allow if {
input["submods"]["cpu"]["ear.status"] != "contraindicated"
input["submods"]["cpu"]["ear.veraison.annotated-evidence"]["snp"]
input["submods"]["cpu"]["ear.status.executables"] == 2
}
In AR4SI and EAR, numerical trust claims are assigned specific meanings.
For instance, for the executables trust claim, the value 2 stands for
“Only a recognized genuine set of approved executables have been loaded during the boot process.”
A full listing of trust vectors and their meanings can be found here.
The policy also takes the requested resource URI as input so the policy can have different behavior depending
on which resource is requested.
Here is a basic policy checking which resource is requested.
package policy
import rego.v1
default allowed = false
path := split(data["resource-path"], "/")
allowed if {
path[0] == "red"
}
This policy only allows requests to certain repositories.
This technique can be combined with those above.
For instance, you could write a policy that allows different resources on different platforms,
or requires different trust claims for different secrets.
package policy
import rego.v1
default allowed = false
path := split(data["resource-path"], "/")
allowed if {
path[0] == "red"
input["submods"]["cpu"]["ear.status"] != "contraindicated"
input["submods"]["cpu"]["ear.veraison.annotated-evidence"]["snp"]
}
allowed if {
path[0] == "blue"
input["submods"]["cpu"]["ear.status"] != "contraindicated"
input["submods"]["cpu"]["ear.veraison.annotated-evidence"]["tdx"]
}
Finally, policies can access guest init-data if it has been specified.
Init-data is a measured field set by the host on boot.
Init-data allows dynamic, measured configurations to be propagated to a guest.
Those configurations can also be provided to Trustee as part of the KBS protocol.
If they are, policies can check the init-data, which is located at submods.cpu.ear.veraison.annotated-evidence.init_data_claims.
This is only supported on platforms that have a hardware init-data field (such as HostData on SEV-SNP or mrconfigid/mrowner/mrownerconfig on TDX).
Accessing init-data from policies can be extremely powerful because users can specify whatever they want to in the init-data.
This allows users to create their own schemes for identifying guests.
For instance, the init-data could be used to assign each guest a UUID or a workload class.
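For example, a policy could release a secret only to guests whose init-data declares a particular workload class. This is a sketch; workload_class is a hypothetical field that the guest owner would have placed in the init-data themselves:
package policy
import rego.v1
default allow = false
allow if {
    input["submods"]["cpu"]["ear.status"] != "contraindicated"
    # hypothetical claim carried in the measured init-data
    input["submods"]["cpu"]["ear.veraison.annotated-evidence"]["init_data_claims"]["workload_class"] == "database"
}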
Attestation Policies
Attestation policies are what the attestation service uses to calculate EAR trust vectors
based on the TCB claims extracted from the hardware evidence by the verifiers.
Essentially the AS policy defines which parts of the evidence are important
and how the evidence should be compared to reference values.
The default attestation policy already defines this relationship for TDX and SNP guests booted by the Kata shim and running the Kata guest components.
If you are using Confidential Containers with these platforms you probably do not need
to change this policy.
If you are using Trustee to boot different types of guests, you might want to adjust the AS policy
to capture your TCB.
Either way, you’ll need to provide the reference values that the policy expects.
Take a look at the default policy to see which values are expected.
You only need to provision the reference values for the platform that you are using.
5 - Reference Values
Managing Reference Values with the RVPS
Reference values are used by the attestation service to generate an attestation token.
More specifically, the attestation policy specifies how reference values are compared to TCB claims (extracted from the hardware evidence by the attesters).
Provisioning Reference Values
There are multiple ways to provision reference values.
The RVPS provides a client tool for registering reference values.
This is separate from the KBS Client because the reference value provider might be a different party than the administrator of Trustee.
You can build the RVPS Tool alongside the RVPS.
git clone https://github.com/confidential-containers/trustee
cd trustee/rvps
make
The RVPS generally expects reference values to be delivered via signed messages.
For testing, you can create a sample message that is not signed.
cat << EOF > sample
{
"test-binary-1": [
"reference-value-1",
"reference-value-2"
],
"test-binary-2": [
"reference-value-3",
"reference-value-4"
]
}
EOF
provenance=$(cat sample | base64 --wrap=0)
cat << EOF > message
{
"version" : "0.1.0",
"type": "sample",
"payload": "$provenance"
}
EOF
You can then provision this message to the RVPS using the RVPS Tool.
rvps-tool register --path ./message --addr <address-of-rvps>
If you’ve deployed Trustee via Docker Compose, the RVPS address should be 127.0.0.1:50003.
You can also query which reference values have been registered.
rvps-tool query --addr <address-of-rvps>
This provisioning flow is being refined.
Operator
If you are using the Trustee operator, you can provision reference values through Kubernetes.
kubectl apply -f - << EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: rvps-reference-values
namespace: kbs-operator-system
data:
reference-values.json: |
[
]
EOF