Tutorial 5: From Rights to Runtime: Engineering Trustworthy, Compliant Agentic AI

Speakers

  • Keivan Navaie (Lancaster University and Scientific Advisor to the Alan Turing Institute)

 

Abstract

Agentic AI is shifting production stacks from request–response to plan–execute: systems now plan tasks, call tools, keep memory, and change external state. That shift moves privacy and safety from policy documents into the runtime. This tutorial shows AI practitioners how to render GDPR and AI Act obligations as testable product behaviors—short, visible memory; purpose-aware egress gates; proportional safeguards that scale with stakes; and end-to-end traces for auditability. We bridge legal requirements to build artifacts engineers already manage (tool registries, policy-as-code, retention jobs, human-in-the-loop (HITL) gates), so governance travels with the agent.

Attendees will leave with: (i) a reference architecture for agentic stacks with privacy/safety touchpoints; (ii) a minimal policy-driven egress gate; (iii) a deletion-cascade pattern for embeddings, logs, and partner systems; and (iv) a mapping from GDPR/AI Act duties to runtime controls and service-level tests. No legal background is required; we assume familiarity with foundation models and MLOps. Live demos instrument a working agent to show how controls block over-collection, enforce region/transfer rules, and produce evidence for audits and Data Subject Access Requests (DSARs) in minutes rather than weeks.

 

Target Audience

Designed for AI practitioners and engineers deploying frontier/agentic systems. No prior mathematics or legal training is required. Assumes working familiarity with foundation models, orchestration frameworks (e.g., LangGraph, DSPy, custom planners), and with MLOps, CI/CD, and observability practices. Cohort size: up to 50 participants, space permitting. Hands-on demos included.

 

Outline and Description of the Tutorial

  1. Why rights must run at runtime. Failure modes; framing; DSAR/HITL definitions.
  2. Agentic architectures & data flows. Planner–model–memory–tool gateway; ingress/egress; role shifts; artifact: role map.
  3. Duties → build artifacts. Purpose limitation → egress policy; storage limitation → TTLs/retention; accountability → trace + Record of Processing Activities (ROPA) links.
  4. Four design patterns (live demos). Memory governance; purpose-aware egress gate; proportional safeguards; end-to-end traceability; metrics dashboard.
  5. Shrinking the agent’s data trail. Six engineering habits; artifact: “delete-this-run” DAG.
  6. Guided exercise. Teams draft egress policy + retention plan for a provided use case; peer review checklist.
  7. Wrap-up & Q&A. Checklist, take-home templates, research gaps.

Demos are offline-capable with seeded data; animated fallbacks provided.
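As a concrete illustration of the memory-governance pattern (item 4) and the storage-limitation → TTLs/retention mapping (item 3), here is a minimal sketch of bounded, expiring agent memory. The class and parameter names are illustrative assumptions, not the tutorial's actual code:

```python
import time
from collections import OrderedDict

class BoundedMemory:
    """Short, visible memory: a hard item cap plus per-entry TTL, so storage
    limitation is a runtime property rather than a retention-policy document.
    (Illustrative sketch; names are assumptions, not the tutorial's API.)"""

    def __init__(self, max_items=100, ttl_seconds=3600):
        self.max_items, self.ttl = max_items, ttl_seconds
        self._items = OrderedDict()  # key -> (value, expiry timestamp)

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self._items[key] = (value, now + self.ttl)
        self._items.move_to_end(key)
        while len(self._items) > self.max_items:  # evict oldest beyond the cap
            self._items.popitem(last=False)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        value, expiry = self._items.get(key, (None, 0))
        if expiry <= now:                         # expired entries are invisible
            self._items.pop(key, None)            # and purged on access
            return None
        return value
```

The `now` parameter exists only to make expiry testable; a production store would also need the deletion-cascade hooks discussed in section 5.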

Learning outcomes

By the end, participants will be able to:

  1. Diagram agentic data flows and allocate controller/processor roles per step (validated by worksheet).
  2. Implement a deny-by-default egress gate with purpose/necessity/region checks (validated by policy snippet).
  3. Configure bounded memory and execute deletion cascades across transcripts, embeddings, caches, and partner stores (validated by DAG review).
  4. Instrument execution traces that satisfy accountability/logging duties (validated by trace schema).
  5. Set risk-tiered HITL thresholds and monitor SLOs for runtime governance (validated by metric targets).
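Outcome 2 can be sketched as a deny-by-default check: a tool call passes only if it clears purpose/necessity and region tests, and every decision carries a reason for the trace. The policy fields and helper names below are assumptions for illustration, not the tutorial's policy language:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EgressPolicy:
    purpose: str                # declared processing purpose for this run
    allowed_tools: frozenset    # tools necessary for that purpose
    allowed_regions: frozenset  # regions data may lawfully flow to

def check_egress(policy, tool, region, fields_needed, fields_sent):
    """Deny-by-default gate: returns (allowed, reason) so each decision
    is loggable. Anything not explicitly permitted is blocked."""
    if tool not in policy.allowed_tools:
        return False, f"tool '{tool}' not necessary for purpose '{policy.purpose}'"
    if region not in policy.allowed_regions:
        return False, f"region '{region}' outside permitted transfer set"
    extra = set(fields_sent) - set(fields_needed)
    if extra:  # necessity check: sending more fields than needed is over-collection
        return False, f"over-collection: {sorted(extra)} exceed necessity"
    return True, "allowed"

policy = EgressPolicy(
    purpose="flight-booking",
    allowed_tools=frozenset({"search_flights"}),
    allowed_regions=frozenset({"EU"}),
)
```

A gateway would evaluate `check_egress` before every tool call and write the (decision, reason) pair to the execution trace.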

Demo plan (tools & materials)

  • Demo 1: Policy-as-code egress gateway protecting tool calls; show allow/deny with reason, redaction, geofencing, and transfer-tool tagging.
  • Demo 2: Deletion cascade across memory layers with receipts; “purge-on-restore” for backups; TTL drift monitor.
  • Demo 3: Human-readable activity log + machine-verifiable trace joined to a tool registry; DSAR export in seconds.
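In outline, Demo 2's deletion cascade with receipts might look like the following sketch; the store names and receipt shape are illustrative assumptions, not the demo's implementation:

```python
import hashlib
import json
import time

STORES = ["transcripts", "embeddings", "caches", "partner_exports"]

def delete_run(run_id, delete_fns):
    """Fan a 'delete-this-run' request out across every memory layer and
    collect a digest-stamped receipt per store, so deletion evidence can
    later be produced for a DSAR."""
    receipts = []
    for store in STORES:
        ok = delete_fns[store](run_id)  # True once the purge is confirmed
        receipt = {"store": store, "run_id": run_id, "deleted": ok,
                   "ts": time.time()}
        receipt["digest"] = hashlib.sha256(
            json.dumps(receipt, sort_keys=True).encode()).hexdigest()
        receipts.append(receipt)
    return receipts

# In-memory stand-ins for the real stores.
data = {s: {"run-42": "payload"} for s in STORES}
fns = {s: (lambda rid, s=s: data[s].pop(rid, None) is not None) for s in STORES}
receipts = delete_run("run-42", fns)
```

A second invocation for the same run would yield receipts with `deleted: False`, which is itself useful evidence that nothing remained to purge; "purge-on-restore" for backups sits behind the same interface.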

All demos ship with synthetic data; one-click rollback for failure injection; slides include short GIF captures of each demo.
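The machine-verifiable trace and fast DSAR export of Demo 3 can be pictured with a minimal per-call event shape; the field names here are assumptions, not the tutorial's trace schema:

```python
import json

def trace_event(run_id, step, tool, subject_id, purpose, decision):
    """One machine-verifiable record per tool call, keyed so it can be
    joined back to a tool registry on `tool`."""
    return {"run_id": run_id, "step": step, "tool": tool,
            "subject_id": subject_id, "purpose": purpose, "decision": decision}

def dsar_export(trace, subject_id):
    """Every event touching one data subject, ready to serialize."""
    return [e for e in trace if e["subject_id"] == subject_id]

trace = [
    trace_event("r1", 1, "search_flights", "alice", "flight-booking", "allow"),
    trace_event("r1", 2, "crm_export", "bob", "marketing", "deny"),
]
export = json.dumps(dsar_export(trace, "alice"), indent=2)
```

Because each event records subject, purpose, and decision, a DSAR export reduces to a filter over the trace rather than a manual log hunt.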

 

Reading List (pre-/post-tutorial)

Primary (author’s works):

  • Navaie, From Rights to Runtime: Privacy Engineering for Agentic AI, ACM AI
    Magazine, 2026, in press (design patterns, trace schema, metrics).
  • Navaie, Agentic AI’s Hidden Data Trail—And How to Shrink It, IEEE Spectrum,
    to appear 22 October 2025.
  • Navaie, Engineering GDPR Compliance in the Age of Agentic AI, IAPP Analysis
    (purpose locks, live role mapping, DSAR/trace linkage).

Core frameworks & guidance:

  • EU AI Act (Regulation (EU) 2024/1689), Official Journal (12 July 2024).
  • EDPB Guidelines 07/2020 on the concepts of controller and processor in the GDPR; final version (2021).

Supplemental technical security references (optional):

  • OWASP GenAI/LLM Top 10 (prompt-injection, insecure output handling, etc.).
  • NIST AI 100-2 (2025) adversarial ML taxonomy; align runtime mitigations with threat classes.

 

Vertical

Generative AI Models, AI in Education, and Agentic AI

 

Timeline

2 hours