An AI inference marketplace that gives you cryptographic proof you received exactly the model you paid for
InferenceProof solves a core AI trust problem: when you pay for Claude Opus, how do you know the provider didn’t serve you a cheaper model? We create cryptographic, on-chain proof for every AI inference. Each request verifies the model’s keccak256 hash against a blockchain registry, runs the inference inside a TEE, then submits an ECDSA-signed attestation to the Flare Coston2 blockchain, permanently recording the prompt hash, response hash, model hash, and timestamp. No party can forge this proof. The result is a trustless AI marketplace where model usage is auditable, payments are automatic via USDT smart contracts, and every inference is verifiable forever on-chain.
Built on three layers: a Next.js 16 inference server, a React 19 frontend, and Solidity smart contracts deployed on Flare Coston2 as the trust layer.
Two core Flare contracts power the system: ModelRegistry maps model names to keccak256 hashes, and InferenceGateway handles USDT0 deposits, ECDSA signature verification via ecrecover, and immutable proof storage. Each inference costs 0.10 USDT0, deducted atomically when the contract accepts the proof.
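A toy in-memory sketch of the gateway's bookkeeping, written in TypeScript purely to illustrate the flow (the real logic is Solidity on Flare Coston2; the fee amount mirrors the description above, and 6 decimals for USDT0 is an assumption):

```typescript
const FEE = 100_000n; // 0.10 USDT0, assuming the token's usual 6 decimals

class InferenceGateway {
  private balances = new Map<string, bigint>();
  private proofs: { user: string; digest: string; timestamp: bigint }[] = [];

  deposit(user: string, amount: bigint): void {
    this.balances.set(user, (this.balances.get(user) ?? 0n) + amount);
  }

  balanceOf(user: string): bigint {
    return this.balances.get(user) ?? 0n;
  }

  // Mirrors the atomic accept-and-deduct: the proof is stored only if the
  // fee can be paid, and the fee is taken only if the proof is stored.
  recordProof(user: string, digest: string, timestamp: bigint): void {
    const bal = this.balanceOf(user);
    if (bal < FEE) throw new Error("insufficient balance");
    this.balances.set(user, bal - FEE);
    this.proofs.push({ user, digest, timestamp });
  }
}
```

For example, a user who deposits 1 USDT0 (1,000,000 base units) has 0.90 USDT0 left after one accepted proof.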
The server checks the user’s on-chain balance, verifies the model hash against the registry, streams inference via OpenRouter (Claude Opus 4.6 or Gemini 2.5 Flash, using Vercel’s AI SDK), then signs an attestation with the TEE key and submits it to Flare via Viem.
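The ordering of those checks can be sketched as a dependency-free pipeline, with the chain reads, registry, model call, and signer stubbed out. The function and parameter names here are illustrative, not the project's actual API:

```typescript
const FEE = 100_000n; // 0.10 USDT0, per the description above

async function handleInference(
  user: string,
  model: string,
  claimedModelHash: string,
  prompt: string,
  deps: {
    balanceOf: (user: string) => Promise<bigint>;                   // on-chain read
    registeredHash: (model: string) => Promise<string | undefined>; // ModelRegistry lookup
    runInference: (model: string, prompt: string) => Promise<string>; // OpenRouter call
    submitProof: (proof: Record<string, string>) => Promise<string>;  // sign + send via Viem
  },
): Promise<string> {
  // 1. The user must have prepaid enough to cover the fee
  if ((await deps.balanceOf(user)) < FEE) throw new Error("insufficient deposit");

  // 2. The advertised model hash must match the on-chain registry
  if ((await deps.registeredHash(model)) !== claimedModelHash) {
    throw new Error("model hash mismatch");
  }

  // 3. Run the inference (streamed in the real server)
  const response = await deps.runInference(model, prompt);

  // 4. Sign the attestation with the TEE key and submit it on-chain
  await deps.submitProof({ user, model, prompt, response });

  return response;
}
```

The hash check happens before inference, so a provider cannot bill for a model that was never registered.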
The frontend uses Wagmi + Reown AppKit for wallet connection and shadcn/ui components.
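A minimal sketch of how that wallet wiring might look, assuming viem's built-in flareTestnet chain definition (Coston2); the real app layers Reown AppKit and shadcn/ui on top of this config:

```typescript
import { createConfig, http } from "wagmi";
import { flareTestnet } from "wagmi/chains"; // Coston2 testnet chain definition

// Wagmi config the AppKit/React providers would consume
export const config = createConfig({
  chains: [flareTestnet],
  transports: { [flareTestnet.id]: http() },
});
```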
Notable hack: The TEE enclave is simulated with a server-side ECDSA keypair. In production this would be GCP Confidential Space (Intel TDX) with hardware vTPM attestation via flare-ai-kit. The contract logic is identical either way, letting us demo the full verifiable inference loop without TEE hardware.

