About Chimera Protocol

The deterministic safety layer for production AI agents.

AI agents reason in natural language. The consequences are real money, real emails, real production code. Chimera is the deterministic layer that sits between an agent and its tools, so "mostly safe" becomes "provably safe."

What we ship

Three products. One safety stack.

Chimera Protocol is a coherent system: a policy language for expressing what agents are allowed to do, a runtime that enforces those policies in production, and a scanner that tells you what your agent does today before you deploy it.

CSL Core

Constraint Specification Language

A policy language for AI agents. Write what an agent is allowed to do (which tools, which arguments, under which conditions) and Chimera enforces it deterministically. Formal semantics; verifiable in TLA+; portable across runtimes.
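To make the idea concrete, here is a minimal sketch of what deterministic, default-deny policy evaluation looks like, written in plain Python rather than CSL itself (the CSL syntax is not shown on this page). The `POLICY` shape, the `evaluate` function, and the tool names are invented for illustration.

```python
# Hypothetical illustration of a CSL-style policy: an allowlist of tools with
# per-argument conditions, evaluated deterministically with no model in the
# loop. All names here are invented for this sketch.

POLICY = {
    "send_email": {
        "max_recipients": 1,
        "allowed_domains": {"example.com"},
    },
    "read_ticket": {},  # allowed, no argument constraints
}

def evaluate(tool: str, args: dict) -> bool:
    """Return True only if the call is explicitly permitted."""
    rules = POLICY.get(tool)
    if rules is None:
        return False  # default-deny: tools not in the policy never run
    if tool == "send_email":
        recipients = args.get("to", [])
        if len(recipients) > rules["max_recipients"]:
            return False
        if any(r.split("@")[-1] not in rules["allowed_domains"] for r in recipients):
            return False
    return True

assert evaluate("read_ticket", {})
assert not evaluate("delete_database", {})       # unknown tool: denied
assert not evaluate("send_email", {"to": ["a@example.com", "b@example.com"]})
```

The key property the sketch shows is that the same call always gets the same answer: the decision depends only on the tool name and arguments, never on model output.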

Chimera Runtime

Deterministic policy enforcement

The runtime sits between your agent and its tools, evaluates every tool call against your CSL policy, and allows or denies the call before any side-effect runs. Multi-tenant; SDK-friendly; signed audit trail per request.
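The interposition pattern described above can be sketched in a few lines. This is not the Runtime's actual API; `Guard`, `PolicyDenied`, and the audit list (a stand-in for the signed audit trail) are invented names for illustration.

```python
# Minimal sketch of policy interposition: every tool call passes through a
# guard that consults the policy and records the decision before the
# side-effect runs. All names are invented for this sketch.

from typing import Any, Callable

class PolicyDenied(Exception):
    pass

class Guard:
    def __init__(self, policy: Callable[[str, dict], bool]):
        self.policy = policy
        self.audit: list[dict] = []  # stand-in for a signed audit trail

    def call(self, name: str, fn: Callable[..., Any], **args: Any) -> Any:
        allowed = self.policy(name, args)
        self.audit.append({"tool": name, "args": args, "allowed": allowed})
        if not allowed:
            raise PolicyDenied(f"{name} denied by policy")  # fn never runs
        return fn(**args)

# Usage: only 'lookup' is permitted; 'wire_transfer' is denied before it executes.
guard = Guard(policy=lambda tool, args: tool == "lookup")
assert guard.call("lookup", lambda **a: "ok") == "ok"
try:
    guard.call("wire_transfer", lambda amount: amount, amount=10_000)
except PolicyDenied:
    pass
assert guard.audit[-1]["allowed"] is False
```

The ordering is the point: the decision and the audit entry happen before the wrapped function is invoked, so a denied call produces no side-effect at all.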

Visit Runtime

AgentScanner

Adversarial security scanner

You're here. Point AgentScanner at any AI agent repo and we run 14 adversarial patterns against a shadow copy, prove which ones reach your real tools, and hand back a signed evidence chain. Free shadow scans; Local + AWS deploys for production parity.

Run a free scan

Why now

The agent doesn't care what was tested.

In 2024 nobody put an LLM in front of their payment system. In 2025 every startup deck has an "agent layer." In 2026 the agent is the integration. It reads a customer email, decides what to do, and pulls the lever.

The lever can be a transfer, a deletion, an email to ten thousand customers, a deploy to production. The agent does not know which lever is dangerous. Frontier models will pull the lever in three out of three runs at temperature 0 if the prompt is shaped right. We have the receipts.

Eval suites tell you the model passed 87% of the time on a benchmark. Production security needs the other 13%. Probabilistic safety is not safety. Chimera Protocol is the deterministic floor underneath it.

How the pieces fit

Scan, write a policy, enforce.

AgentScanner produces the threat model. CSL is where you encode the policy. Runtime is where the policy runs. Each product stands on its own; together they close the loop between "we found a bypass" and "the bypass is blocked in production."

1

Scan

AgentScanner finds the bypasses in your agent before an attacker does. Signed evidence per finding.

2

Encode

Every confirmed bypass ships with a CSL policy fragment that closes it. Paste, review, commit.

3

Enforce

Chimera Runtime evaluates every tool call against your policy. Bad calls are denied before side-effects fire.
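The three steps above can be sketched end to end: a confirmed finding carries a policy fragment, merging the fragment into the active policy blocks the bypass, and the next evaluation denies the call. The finding shape, field names, and helpers here are invented for illustration, not the actual AgentScanner output format.

```python
# Hedged sketch of closing the scan -> encode -> enforce loop. All field
# names and helpers are invented for this illustration.

finding = {
    "pattern": "prompt-injected mass email",
    "tool": "send_email",
    "fragment": {"send_email": {"deny": True}},  # fragment shipped with the finding
}

active_policy = {"send_email": {"deny": False}, "read_ticket": {"deny": False}}

def apply_fragment(policy: dict, fragment: dict) -> dict:
    merged = {**policy}
    merged.update(fragment)  # "paste, review, commit" collapsed to a merge here
    return merged

def allowed(policy: dict, tool: str) -> bool:
    rule = policy.get(tool)
    return rule is not None and not rule.get("deny", False)

assert allowed(active_policy, "send_email")       # bypass reachable before
active_policy = apply_fragment(active_policy, finding["fragment"])
assert not allowed(active_policy, "send_email")   # blocked after the merge
```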

Who built this

One operator, end to end.

Chimera Protocol is built by Aytug Akarlar, solo founder. Computer science from Imperial College London (top of cohort). Adversarial robustness research against frontier LLMs published in 2026. Background in quant trading systems where determinism is non-negotiable.

The thesis behind Chimera is that AI safety in production cannot be a probability distribution. It has to be a hard boundary the model never crosses. CSL specifies the boundary. Runtime enforces it. AgentScanner proves it works. One coherent stack, written by one person who has held it all in their head.

Ready to see your agent under attack?

Paste a GitHub URL. Free shadow scan, 5 minutes, signed evidence.