Live Demo: DB encryption proxy in a TEE

Databases are one breach away from full exposure because they store plaintext data. Most cloud databases are readable by the people who operate them.

Here is the architecture that makes both irrelevant.

Our architecture encrypts each sensitive field inside a hardware-isolated Intel TDX enclave before it ever reaches Postgres. The cloud provider cannot access your plaintext or your keys. Postgres only stores encrypted data.

Field-Level Encryption

Individual columns are encrypted by Acra, an open-source database encryption proxy by Cossack Labs (docs.cossacklabs.com/acra), before reaching Postgres. Acra intercepts SQL queries, encrypts sensitive fields before they reach the database, and decrypts them transparently on read — all inside the TEE. The database stores only encrypted ciphertext; without the proxy, it cannot be read.

TEE & Hardware Attestation

Intel TDX isolates the enclave from the host OS, hypervisor, and cloud operator. A cryptographic quote from the TDX chip proves the exact binary executing inside. Any third party can verify it.

Hardware-Bound Key Management

The Acra master encryption key is sealed to this exact CVM/TEE via dstack's decentralized KMS. dstack is the open framework for running containers in Confidential VMs (Intel TDX) — it handles attestation, per-app key derivation, encrypted storage, and TLS, and is a Linux Foundation Confidential Computing Consortium project (github.com/Dstack-TEE/dstack). The KMS is built on DKG (Distributed Key Generation), an MPC protocol in which multiple independent TEE nodes jointly compute a secret key so that no single node ever holds it in full: the master key is never concentrated in one place, survives individual hardware failure, and cannot be extracted by any single operator or cloud provider. The key never exists outside the TEE and is re-derived on every deployment.

Proof of Cloud Alliance

A vendor-neutral alliance maintaining a signed, append-only registry of cloud hardware identities for Intel TDX. Provides independent verification that the hardware running this demo is real and trusted. proofofcloud.org →

The demo runs on a shared dstack CVM/TEE (Intel TDX). Click below to start the secure environment. It takes about 2 minutes.

Use fictional data only. This is a shared environment. The CVM/TEE stops automatically after 15 minutes of inactivity.

Frequently Asked Questions

Common questions about field-level encryption, Trusted Execution Environments, and how this demo works.

What is a Trusted Execution Environment (TEE)?

A TEE is a hardware-enforced isolated environment inside a processor. Code and data loaded into a TEE are protected from the host operating system, hypervisor, and cloud operator — even if those layers are compromised. Intel TDX (Trust Domain Extensions) is a hardware TEE technology that isolates entire virtual machines rather than individual processes.

In this project, AcraServer (the encryption proxy) runs inside an Intel TDX TEE on Phala Cloud. This means the cloud provider operating the server cannot access the encryption keys or the plaintext data — the hardware enforces the isolation.

What is field-level encryption, and how does it differ from disk encryption?

Disk encryption (TDE) protects data at rest on the storage medium. When the database engine runs, it decrypts the data and works with plaintext in memory — so anyone with database access (database administrators, cloud admins, attackers who breach the database service) can read everything.

Field-level encryption encrypts individual columns before they reach the database engine. The database stores opaque ciphertext. Even with full database access, the data is unreadable. Decryption only happens inside the TEE, through the encryption proxy.

Searching encrypted fields is supported for exact-match queries using searchable encryption. The proxy prepends an HMAC-SHA256 index to each encrypted value. On a WHERE email = 'alice@example.com' query, the proxy computes the same HMAC and rewrites the query accordingly — PostgreSQL runs an indexed comparison without ever seeing the plaintext value. Only exact-match (=) is supported; LIKE and range queries are not.
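The HMAC-index idea can be sketched in a few lines. This is a toy illustration of the mechanism described above, not Acra's actual key schedule or wire format; the key, column name, and query shape are hypothetical:

```python
import hmac
import hashlib

# Hypothetical search-index key; in the real system it is derived inside the TEE.
SEARCH_HMAC_KEY = b"\x01" * 32

def search_index(plaintext: str) -> bytes:
    """Deterministic HMAC-SHA256 index for exact-match lookups."""
    return hmac.new(SEARCH_HMAC_KEY, plaintext.encode(), hashlib.sha256).digest()

def rewrite_query(email: str) -> tuple[str, bytes]:
    # The proxy replaces the plaintext comparison with a comparison against
    # the HMAC index column, so Postgres never sees the search value.
    return ("SELECT * FROM users WHERE email_index = %s", search_index(email))

# Same plaintext always yields the same index, so an ordinary B-tree index works:
assert search_index("alice@example.com") == search_index("alice@example.com")
# Different plaintexts yield unrelated indexes, which is exactly why
# LIKE and range queries are impossible by construction.
assert search_index("alice@example.com") != search_index("bob@example.com")
```

Determinism is the trade-off that makes this work: it leaks equality of values (and nothing else), in exchange for letting PostgreSQL use a normal index.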

Can the cloud provider or a database administrator read the data?

No — and this is the core property the architecture is designed to enforce. PostgreSQL only ever stores AcraBlock ciphertext. A database administrator with direct database access, a cloud provider employee, or an attacker who extracts a full database dump all see the same thing: encrypted bytes. Without AcraServer running inside the TEE, the data cannot be decrypted.

The TEE hardware enforces that AcraServer's memory (where the decrypted key material briefly exists) is inaccessible to the host OS and cloud operator. The attestation report proves this guarantee holds for the specific binary deployed.

How are the encryption keys protected?

The master key never exists in plaintext outside the TEE. The protection chain works as follows:

  1. On first boot, a random 32-byte master key is generated with acra-keymaker.
  2. A sealing key is derived from the TEE's hardware identity via dstack's decentralized KMS (DKG-based). This key is deterministic for the exact Docker Compose hash — changing the compose file changes the sealing key.
  3. The master key is encrypted with the sealing key (AES-256-GCM, random nonce) into a sealed blob stored on a persistent volume. The sealed blob is useless without the same sealing key.
  4. On subsequent boots, the sealed blob is decrypted in TEE memory, the sealing key material is zeroed, and AcraServer inherits the master key via environment variable — which it clears after reading.
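Step 2 of the chain above can be sketched as follows. This is a minimal illustration of deterministic sealing-key derivation, assuming a simple HMAC-based KDF; dstack's real KMS derives the key via DKG across TEE nodes, and all names and inputs here are hypothetical:

```python
import hashlib
import hmac

def derive_sealing_key(hardware_id: bytes, compose_file: bytes) -> bytes:
    # Bind the key to the exact Docker Compose content by hashing it in.
    compose_hash = hashlib.sha256(compose_file).digest()
    # HMAC as a toy KDF: same (identity, compose hash) -> same 32-byte key.
    return hmac.new(hardware_id, b"sealing-key" + compose_hash, hashlib.sha256).digest()

hw = b"tdx-hardware-identity"
k1 = derive_sealing_key(hw, b"services:\n  acra: ...")
k2 = derive_sealing_key(hw, b"services:\n  acra: ...")
k3 = derive_sealing_key(hw, b"services:\n  acra-tampered: ...")

# Redeploying the same compose file re-derives the same sealing key,
assert k1 == k2
# while any change to the compose file yields a different key, so the
# previously sealed master-key blob can no longer be opened.
assert k1 != k3
```

This is the property the text relies on: the sealed blob on the persistent volume is only openable by the exact hardware identity running the exact compose file.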
What is hardware attestation, and why does it matter?

Hardware attestation is a cryptographic proof, generated by the TEE CPU itself, that a specific piece of code is running unmodified inside a genuine Intel TDX enclave. The TDX chip measures the boot sequence, Docker image digests, and compose file hash into Runtime Measurement Registers (RTMRs). The resulting TDX quote is signed by the CPU's hardware key.

This means any third party can confirm that: (1) the hardware is real TDX, (2) the specific Docker Compose file deployed matches the hash in the attestation, and (3) nothing in the boot chain was tampered with. Without attestation, you are trusting the cloud provider's word. With attestation, you have cryptographic proof.
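Check (3) amounts to replaying an event log against a measurement register and comparing it to the value in the quote. A minimal sketch, assuming SHA-384 registers extended as H(old || H(event)) in the TDX style; the event contents and log layout are made up for illustration:

```python
import hashlib

def extend(rtmr: bytes, event: bytes) -> bytes:
    # Extend a measurement register with a new event digest.
    return hashlib.sha384(rtmr + hashlib.sha384(event).digest()).digest()

def replay(event_log: list[bytes]) -> bytes:
    rtmr = bytes(48)  # registers start zeroed
    for event in event_log:
        rtmr = extend(rtmr, event)
    return rtmr

log = [b"kernel-image-digest", b"docker-image-digest", b"compose-file-hash"]
quoted_rtmr = replay(log)  # stand-in for the RTMR value inside a real TDX quote

# An honest event log reproduces the quoted value...
assert replay(log) == quoted_rtmr
# ...while swapping any element of the boot chain is detectable.
tampered = [b"kernel-image-digest", b"evil-image-digest", b"compose-file-hash"]
assert replay(tampered) != quoted_rtmr
```

Because each register is a hash chain, an attacker cannot reorder, remove, or substitute boot-chain events without changing the final value the CPU signs.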

What happens if the database is stolen or breached?

The attacker gets a collection of AcraBlock ciphertext — binary blobs encrypted with AES-256-GCM. Without the master key (which never leaves the TEE), the data is computationally unreadable. There is no key stored in or near the database.

This is the key inversion of the usual threat model: in most architectures, stealing the database means stealing the data. In this architecture, stealing the database yields encrypted garbage. The master key is sealed to a specific TEE hardware identity and cannot be extracted even with physical access to the server.

Is this production-ready?

This is a proof-of-concept and portfolio demonstration, not a production system. Several hardening items are intentionally deferred: full CA verification on the AcraServer → PostgreSQL egress path, authentication on the health API port (9191), APP_LAUNCH_TOKEN validation, and log level hardening.

The core cryptographic architecture — envelope encryption, mTLS access control, sealed key management, hardware attestation — is production-grade in its design and is the component this project is intended to demonstrate. If you are evaluating this architecture for production, speak to a Katvio representative to discuss hardening requirements and deployment scope.

Where can I find the source code and documentation?

The source code and technical documentation for this project are kept private. Katvio does not publicly disclose the implementation details.

If you would like to learn more about the architecture, request a technical walkthrough, or discuss how this solution could apply to your use case, please reach out through Katvio or use the Speak to a Katvio representative button above.

What else can TEEs do?

This demo showcases one pattern — an encryption proxy with keys sealed to a TEE — but that same primitive (a secret that only exists inside attested hardware) unlocks a wide range of architectures. Here are the major categories.

Confidential AI & ML Inference

A model owner deploys a proprietary model inside a TEE. Weights are encrypted at rest and only decrypted inside the enclave — the cloud provider never sees them. The hardware-bound key protects both model IP and user data simultaneously. This is key for healthcare diagnostics on patient data, fraud detection across institutions, or any scenario where neither the model operator nor the data owner should see the other's assets.

Multi-Party Data Clean Rooms

Two or more organisations want to jointly compute something — ad attribution across a publisher and an advertiser, drug interaction analysis across pharma companies — without revealing raw datasets to each other. Each party encrypts their data to the TEE's identity. The enclave decrypts both datasets, runs the attested computation, and outputs only the aggregate result. Cloning the VM or snapshotting the disk yields nothing.

Private Key Custody & Signing

A signing service runs inside a TEE: the private key is generated inside the enclave and sealed to it. Transaction signing happens in hardware isolation — the cloud operator, even with root on the host, cannot extract the key. The attestation lets counterparties verify that the signing policy code hasn't been tampered with. This is the architecture behind several MPC wallet providers and oracle networks that sign on-chain data feeds.

Confidential Secret Management

A hardware-isolated Vault. A secrets manager runs inside a TEE; the unseal key is derived from the hardware identity rather than from Shamir shares held by humans. On boot, the TEE re-derives the key, unseals the vault, and serves secrets over mTLS — no human ever holds the root key material. The same pattern applies to private CAs: the CA signing key is sealed to the TEE, so even the infrastructure team cannot extract it.

Confidential Network Infrastructure

A DNS resolver inside a TEE can prove via attestation that it isn't logging queries or injecting responses. A VPN termination point inside a TEE guarantees that the operator cannot inspect decrypted traffic. The sealed key is the TLS private key or the VPN session key — it never leaves the enclave, so the hosting provider is cryptographically excluded from the data plane.

Auditable Compliance Pipelines

For GDPR, HIPAA, PCI-DSS, or SOC2 scenarios where you need to prove that a specific computation was performed on sensitive data without human access. A TEE running an attested audit pipeline processes PII, generates a compliance report, and outputs only anonymised or aggregated results. The attestation quote becomes the audit artefact — a cryptographic proof of exactly what code touched the data, on what hardware, at what time.

Federated Learning Coordination

A central aggregator runs inside a TEE. Each participant encrypts their model gradients to the TEE's public key. The aggregator decrypts all gradients inside the enclave, computes the federated average, and publishes only the updated global model. No participant's individual gradients are ever exposed, and the attestation proves the aggregation code is honest — not, say, memorising individual updates or leaking them to a third party.
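The aggregation step itself is simple once the gradients are inside the enclave. A toy sketch of the federated average, with the per-participant decryption elided and all values illustrative:

```python
def federated_average(gradients: list[list[float]]) -> list[float]:
    """Element-wise mean of equal-length gradient vectors."""
    n = len(gradients)
    return [sum(g[i] for g in gradients) / n for i in range(len(gradients[0]))]

# Each vector would arrive encrypted to the TEE's public key and be
# decrypted only in enclave memory; only the aggregate below leaves.
alice = [0.25, -0.5, 1.0]
bob = [0.75, 0.5, -1.0]
global_update = federated_average([alice, bob])
assert global_update == [0.5, 0.0, 0.0]
```

The attestation covers exactly this code, so participants can verify the aggregator computes the mean and nothing more — no logging or leaking of individual vectors.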