Writing from the Raucle team on AI security, provenance, and the protocols of verifiable AI.
Stop asking the LLM not to misuse its tools. raucle-detect v0.10.0 ships Ed25519-signed capability tokens — the tool refuses to execute unless the caller presents an unforgeable handle whose constraints match the call.
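The shape of the idea fits in a few lines. Every name below is illustrative, and HMAC-SHA256 stands in for Ed25519 so the sketch runs on the standard library alone; the point is the gate, not the cipher: the tool itself verifies the token and checks the call's arguments against the token's constraints before doing anything.

```python
import hmac, hashlib, json

SIGNING_KEY = b"demo-signing-key"  # stand-in for an Ed25519 keypair

def mint_token(tool, constraints):
    # Issuer side: bind a tool name and its allowed arguments into a signed payload.
    payload = json.dumps({"tool": tool, "constraints": constraints}, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def execute(tool, args, token):
    # Tool side: refuse unless the token verifies AND its constraints match this call.
    expected = hmac.new(SIGNING_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        raise PermissionError("forged or tampered token")
    claims = json.loads(token["payload"])
    if claims["tool"] != tool:
        raise PermissionError("token minted for a different tool")
    for key, allowed in claims["constraints"].items():
        if args.get(key) not in allowed:
            raise PermissionError(f"call violates constraint on {key!r}")
    return f"{tool} allowed"

token = mint_token("http_get", {"host": ["docs.example.com"]})
print(execute("http_get", {"host": "docs.example.com"}, token))  # http_get allowed
```

Note what the LLM never sees: the signing key. A prompt injection can ask for any call it likes; without a token whose constraints cover that call, the tool refuses.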
Every AI security product ships with "tested against 10,000 attacks." raucle-detect v0.9.0 ships the alternative — SMT-backed proofs that no tool-call, URL, or SQL query a bounded agent can emit will violate its policy.
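A toy analogue makes the claim concrete. When an agent's output space is bounded and small, "proof" degenerates to exhaustive enumeration: check every emittable string against the policy, and either get an empty violation set (safety holds) or concrete counterexamples, exactly the unsat/sat shape an SMT solver returns at scale. All names and the tiny output grammar below are invented for illustration.

```python
from itertools import product

# A bounded "agent": every URL it can emit is one choice per slot.
SCHEMES = ["https"]
HOSTS   = ["api.internal", "docs.example.com"]
PATHS   = ["/v1/search", "/v1/admin/drop"]

def violates_policy(url):
    # Policy: no emitted URL may reach an admin endpoint.
    return "/admin/" in url

def prove_safe():
    # Enumerate the entire output space; an empty result is a (trivial) proof
    # that no reachable output violates the policy.
    return [f"{s}://{h}{p}" for s, h, p in product(SCHEMES, HOSTS, PATHS)
            if violates_policy(f"{s}://{h}{p}")]

counterexamples = prove_safe()
print(counterexamples)
```

The enumeration here covers four URLs; the hard part the release is actually about is keeping that guarantee when the space is astronomically larger than you can enumerate.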
A novel jailbreak found by one team at 3am should be blocked at every gateway by morning. No central authority. No API token. v0.8.0 ships the protocol — pin a key, subscribe in one line.
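The subscriber side of such a protocol can be sketched in a few lines. This is an illustration, not the shipped implementation: HMAC stands in for Ed25519 so it runs on the standard library, and the rule payload is invented. The load-bearing step is the pin: the subscriber ships with only a key fingerprint, and accepts a rule update only if the presented key matches that pin and the signature checks out.

```python
import hmac, hashlib, json

PUBLISHER_KEY = b"feed-signing-key"                   # stand-in for an Ed25519 key
PINNED = hashlib.sha256(PUBLISHER_KEY).hexdigest()    # distributed once, out of band

def publish(rule):
    # Publisher: sign a new detection rule for the feed.
    body = json.dumps(rule, sort_keys=True).encode()
    return {"rule": rule, "key": PUBLISHER_KEY.hex(),
            "sig": hmac.new(PUBLISHER_KEY, body, hashlib.sha256).hexdigest()}

def accept(update):
    # Subscriber: check the presented key against the pin, then verify the signature.
    key = bytes.fromhex(update["key"])
    if hashlib.sha256(key).hexdigest() != PINNED:
        raise ValueError("key does not match pin; rejecting feed")
    body = json.dumps(update["rule"], sort_keys=True).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, update["sig"]):
        raise ValueError("bad signature; rejecting rule")
    return update["rule"]

rule = {"id": "demo-rule", "pattern": "ignore previous instructions"}
print(accept(publish(rule)))
```

No central authority and no API token follow directly from this shape: anyone can mirror the feed, because tampering is detectable by every subscriber independently.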
Invisible Unicode, ASCII art, OCR-bearing screenshots, PDF font streams. Four evasion classes, four detectors, one composable pipeline. raucle-detect v0.7.0 ships the multimodal layer.
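The pipeline shape is simple to show. Below is a minimal sketch, not the shipped detectors: the invisible-Unicode check is real (flag format-class characters like zero-width spaces), the ASCII-art detector is an empty placeholder, and the function names are invented. Each detector returns its findings, and the pipeline composes them over one input.

```python
import unicodedata

def detect_invisible_unicode(text):
    # Flag zero-width and other format-class (Cf) characters often used to hide payloads.
    return [(i, f"U+{ord(c):04X}") for i, c in enumerate(text)
            if unicodedata.category(c) == "Cf"]

def detect_ascii_art(text):
    # Placeholder: a real detector would score character-density patterns.
    return []

def pipeline(text, detectors):
    # Run every detector; keep only the ones that found something.
    return {d.__name__: found for d in detectors if (found := d(text))}

sample = "click \u200bhere\u200d now"
print(pipeline(sample, [detect_invisible_unicode, detect_ascii_art]))
```

Composability is the point: an OCR or PDF-font-stream detector slots into the same list without the others changing.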
Cryptographic chain-of-custody for AI was the foundation. Today we ship the feature it was always for — re-running any incident against an alternate guardrail policy and getting a verifiable answer.
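A minimal sketch of what "verifiable answer" means here, under invented names and a toy policy: replay the recorded incident through a policy function deterministically, and hash the full decision transcript so any third party who replays the same incident under the same policy can recompute the same digest.

```python
import hashlib, json

def replay(incident, policy):
    # Re-run each recorded event under the policy; the digest commits to every decision.
    decisions = [{"event": e, "allowed": policy(e)} for e in incident]
    digest = hashlib.sha256(json.dumps(decisions, sort_keys=True).encode()).hexdigest()
    return {"blocked": any(not d["allowed"] for d in decisions), "digest": digest}

incident = [{"tool": "sql", "query": "SELECT 1"},
            {"tool": "sql", "query": "DROP TABLE users"}]

lenient = lambda e: True
strict  = lambda e: "DROP" not in e.get("query", "")

print(replay(incident, lenient)["blocked"], replay(incident, strict)["blocked"])  # False True
```

The chain-of-custody layer is what makes this meaningful: without a tamper-evident record of the incident, "would the other policy have caught it" is just a claim.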
Software supply chains figured out verifiable attestation. AI inference hasn't. Today we are publishing a draft standard, four MIT-licensed reference implementations, and an invitation.