AgentGuard

Mathematical models that provide probabilistic guarantees about an AI agent's behavior as it runs


Created At

ETHGlobal Cannes 2026

Project Description

AgentGuard learns a model of the agent's behavior on the fly. It updates its probabilistic model and state space dynamically (e.g., every 5 iterations or transactions) and feeds this data into a model checker that verifies behavior against configurable safety thresholds. For example, the checker can estimate whether a given execution path is likely to lead to success, or whether the transaction should be preemptively terminated.
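The loop described above can be sketched as follows. This is a minimal, hypothetical illustration (not AgentGuard's actual implementation): transition counts are accumulated from observed agent behavior, re-estimated every few observations, and the probability of eventually reaching a `success` state is computed by value iteration and compared against a safety threshold. All names (`SafetyMonitor`, `observe`, `should_terminate`) are invented for this sketch.

```python
from collections import defaultdict

class SafetyMonitor:
    """Hypothetical sketch: learn a Markov chain from observed agent
    transitions, then estimate the probability of eventually reaching
    'success' and compare it against a safety threshold."""

    def __init__(self, threshold=0.9, update_every=5):
        self.counts = defaultdict(lambda: defaultdict(int))  # state -> next -> count
        self.threshold = threshold
        self.update_every = update_every
        self.seen = 0

    def observe(self, state, next_state):
        """Record one observed transition; return True when it is time
        to re-run verification (e.g., every 5 transactions)."""
        self.counts[state][next_state] += 1
        self.seen += 1
        return self.seen % self.update_every == 0

    def success_probability(self, start, success="success", failure="failure"):
        """Value iteration for the probability of eventually reaching the
        absorbing `success` state, under the learned transition model."""
        p = defaultdict(float)
        p[success] = 1.0
        for _ in range(1000):
            delta = 0.0
            for s, outs in self.counts.items():
                if s in (success, failure):
                    continue
                total = sum(outs.values())
                new = sum(n / total * p[t] for t, n in outs.items())
                delta = max(delta, abs(new - p[s]))
                p[s] = new
            if delta < 1e-9:
                break
        return p[start]

    def should_terminate(self, state):
        """Preemptively terminate when the estimated success
        probability from the current state falls below threshold."""
        return self.success_probability(state) < self.threshold

# Toy usage: after observing 3 successes and 1 failure out of "act",
# the estimated success probability from "plan" is 0.75.
m = SafetyMonitor(threshold=0.8)
for s, t in [("plan", "act"), ("act", "success"), ("plan", "act"),
             ("act", "failure"), ("plan", "act"), ("act", "success"),
             ("act", "success")]:
    m.observe(s, t)
print(m.success_probability("plan"))  # 0.75
print(m.should_terminate("plan"))     # True (0.75 < 0.8)
```

A production system would use a real probabilistic model checker (e.g., PRISM or Storm) rather than hand-rolled value iteration, but the shape of the check is the same.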

How it's Made

This project is an implementation of the research paper "AgentGuard: Runtime Verification of AI Agents" (arXiv:2509.23864). Unlike other security tools with similar names, it is a specialized framework for Runtime Verification (RV) that uses mathematical models to provide probabilistic guarantees about an AI agent's behavior as it runs.

The system is built as a non-intrusive inspection layer that sits between an AI agent (e.g., built with AutoGen or LangGraph) and its environment.

AgentGuard is built around a paradigm called Dynamic Probabilistic Assurance (DPA). Instead of just blocking keywords, it builds a mathematical model of what the agent is doing in real time.
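The non-intrusive layer described above can be pictured as a proxy that wraps the agent's environment: it forwards every action unchanged, records the resulting transition for the verifier, and aborts the run when verification fails. This is a hypothetical sketch; the `env.step()`/`env.state` interface and the names `GuardedEnvironment` and `verify` are assumptions for illustration, not AgentGuard's real API.

```python
class GuardedEnvironment:
    """Hypothetical sketch of a non-intrusive inspection layer sitting
    between an agent and its environment: actions pass through
    unchanged, but each transition is recorded and checked."""

    def __init__(self, env, verify):
        self.env = env        # real environment; assumed to expose step() and state
        self.verify = verify  # callback: trace of transitions -> bool (still safe?)
        self.trace = []

    def step(self, action):
        before = self.env.state
        result = self.env.step(action)  # forward unchanged: non-intrusive
        self.trace.append((before, action, self.env.state))
        if not self.verify(self.trace):
            raise RuntimeError("safety check failed; terminating run")
        return result

# Toy usage with a trivial environment and a trace-length "policy".
class ToyEnv:
    def __init__(self):
        self.state = "start"
    def step(self, action):
        self.state = action
        return self.state

guard = GuardedEnvironment(ToyEnv(), verify=lambda trace: len(trace) < 3)
guard.step("plan")
guard.step("act")
# A third step would exceed the toy policy and raise RuntimeError.
```

Because the agent only ever talks to the proxy, neither the agent framework (e.g., AutoGen or LangGraph) nor the environment needs modification.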
