AI systems are making decisions and taking actions. Sooner or later, someone asks: show me the proof.
These example proof packs were generated by the real Assay toolchain. They are locally verifiable: no account, no platform access, and no trust in the vendor required.
That question — can you prove it? — comes from customers, auditors, regulators, compliance teams, security reviewers, and internal leadership.
Most teams answer with logs on their own servers, screenshots, dashboards, policy documents, and selectively presented evidence. All of it depends on trusting the vendor.
The core problem: there is no artifact a company can hand over that an outsider can independently verify. A company says "our AI controls ran" but cannot produce proof that someone else can check.
Assay is an evidence compiler. It records the important execution events, checks, and decisions during an AI workflow and packages them into a proof pack — a small, portable, cryptographically signed folder that anyone can verify offline.
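To make the idea concrete, here is a minimal, stdlib-only sketch of what "compiling evidence into a signed folder" can look like. This is not Assay's actual format or API; the function name, file layout, and the HMAC signature (a stand-in for the asymmetric signatures a real system would use) are all illustrative assumptions.

```python
import hashlib
import hmac
import json
from pathlib import Path

# Stand-in secret for the demo; a real deployment would sign with an
# asymmetric key (e.g. Ed25519) held outside the workflow being attested.
SIGNING_KEY = b"demo-signing-key"

def build_proof_pack(pack_dir: Path, events: list) -> None:
    """Write each event to its own file, record every file's SHA-256
    fingerprint in a manifest, then sign the manifest."""
    pack_dir.mkdir(parents=True, exist_ok=True)
    files = {}
    for i, event in enumerate(events):
        name = f"event_{i:04d}.json"
        data = json.dumps(event, sort_keys=True).encode()
        (pack_dir / name).write_bytes(data)
        files[name] = hashlib.sha256(data).hexdigest()
    manifest = json.dumps({"files": files}, sort_keys=True).encode()
    (pack_dir / "manifest.json").write_bytes(manifest)
    # Signing the manifest binds all the fingerprints together: editing
    # any event file, or the manifest itself, invalidates the signature.
    sig = hmac.new(SIGNING_KEY, manifest, hashlib.sha256).hexdigest()
    (pack_dir / "manifest.sig").write_text(sig)
```

The design point is the two-layer binding: per-file hashes catch changes to individual artifacts, and the signature over the manifest catches changes to the hash list itself.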
Assay doesn't make fraud impossible. It makes post-hoc tampering, silent weakening, and selective evidence presentation much harder to get away with.
Assay proves that the evidence artifact has not been quietly changed after the fact. It does not, by itself, prove that every upstream component was honest. Stronger deployment patterns (CI-held signing keys, transparency logs, external timestamping) raise the cost of full fabrication further.
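One of those patterns, the transparency log, rests on a simple technique: each entry commits to the hash of the entry before it, so rewriting history means recomputing every later head. A minimal hash-chain sketch (generic technique, not Assay's implementation; the field names are assumptions):

```python
import hashlib
import json

GENESIS = "0" * 64  # head value before any entries exist

def append_entry(log: list, record: dict) -> str:
    """Append a record whose head commits to the previous head."""
    prev = log[-1]["head"] if log else GENESIS
    body = json.dumps(record, sort_keys=True)
    head = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "body": body, "head": head})
    return head

def chain_valid(log: list) -> bool:
    """Recompute every head; any retroactive edit breaks the chain."""
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev:
            return False
        expected = hashlib.sha256((prev + entry["body"]).encode()).hexdigest()
        if entry["head"] != expected:
            return False
        prev = entry["head"]
    return True
```

Publishing the latest head externally (or timestamping it) is what turns "detectable by the log holder" into "detectable by anyone".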
Verify a real sample proof pack in your browser. No install. Nothing uploaded.
Client-side browser verification covers signed proof packs only. Reviewer packets remain CLI-only via assay reviewer verify.
A proof pack is a small signed folder created from an AI workflow execution. It is the thing a buyer, auditor, or reviewer can actually hold, inspect, forward, and verify.
If a file was changed, its fingerprint won't match. If someone edits the manifest to cover the change, the signature breaks. That is how tampering is detected.
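The check described above can be sketched in a few lines. Again an illustrative assumption, not Assay's real verifier: it recomputes each file's hash against the manifest, and checks the manifest's signature, refusing the pack on the first mismatch.

```python
import hashlib
import hmac
import json
from pathlib import Path

def verify_proof_pack(pack_dir: Path, key: bytes):
    """Return (ok, reason). Checks signature first, then every fingerprint."""
    manifest_bytes = (pack_dir / "manifest.json").read_bytes()
    expected_sig = (pack_dir / "manifest.sig").read_text()
    actual_sig = hmac.new(key, manifest_bytes, hashlib.sha256).hexdigest()
    # constant-time comparison avoids leaking how much of the sig matched
    if not hmac.compare_digest(expected_sig, actual_sig):
        return False, "manifest signature mismatch"
    for name, expected_hash in json.loads(manifest_bytes)["files"].items():
        actual_hash = hashlib.sha256((pack_dir / name).read_bytes()).hexdigest()
        if actual_hash != expected_hash:
            return False, f"hash mismatch in {name}"
    return True, "ok"
```

Note the ordering: the signature is verified before the hashes are trusted, since an attacker who edits a file would also want to edit the manifest entry that fingerprints it.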
Seven public artifacts from the real toolchain: three proof-pack verdicts, one reviewer packet, one insurance vertical mapping, one MCP proxy scenario, and one customer-data-boundary tamper diagnostic.
Each tamper diagnostic reports the affected source_index along with the expected and actual hashes, so a reviewer can see exactly which file changed.
The claim is deliberately bounded: this observed evidence trail stayed inside this declared boundary, and any later mutation is detectable. Nothing more is asserted.
Install the toolchain:

    pip install assay-ai

Clone the public proof gallery:

    git clone https://github.com/Haserjian/assay-proof-gallery