Coelanox · Invisible inference infrastructure
The coelacanth was thought extinct for millions of years—then found, still whole. Coelanox is built on the same idea: an invisible layer between packaging and execution. Nothing extra in the hot path—until you need proof the path never wandered.
PyTorch, TensorFlow, ONNX Runtime—they are general-purpose engines built for flexibility, not verifiability. You get a number back, not a tamper-evident record of every operation that produced it. Coelanox is a sealed binary runtime: cryptographically verified .cnox containers, a Turing-incomplete executor, and a minimal primitive opset so inference can be audited at the compute layer—not reconstructed after the fact. No Python. No Docker. No OS in the hot path.
Runtime model
Deterministic · sealed
Integrity
SHA-256 before execute
Executor
Turing-incomplete
Audit
Per-op logging (optional)
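The "SHA-256 before execute" guarantee can be pictured in a few lines: hash the sealed container, compare against the expected digest, and refuse to run on any mismatch. This is an illustrative sketch, not the Coelanox implementation; the function names and error message are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream the container through SHA-256 in fixed-size chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_then_execute(container: Path, expected_digest: str) -> None:
    """Refuse to run anything whose bytes differ from what was packaged."""
    actual = sha256_file(container)
    if actual != expected_digest:
        raise RuntimeError(f"integrity failure: {actual} != {expected_digest}")
    # ...only now hand the verified bytes to the executor...
```

The point of hashing before execution, rather than alongside it, is that a tampered container never reaches the compute layer at all.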
Dynamic graphs, dispatchers, and runtime kernel selection are what make research fast—and what make "what exactly ran?" unanswerable in the general case.
We don't tell you what your model should do. We give your team the primitive to verify that it did exactly what it was supposed to—at the compute level.
Evidence of what computed—not just what was returned—matters for SaMD, EU AI Act-style audit pressure, and model risk management.
No Python runtime or framework in the hot path. Package the model, ship the binary, run anywhere the container is trusted.
Any change to weights, graph, or kernels is detectable before execution. Provenance can be cryptographic, not assumed.
A scalar backend proves correctness and auditability first; SIMD and vendor backends layer on next without breaking the audit story.
Train anywhere you like. Coelanox cares once there is a static graph and tensors to seal.
Export a frozen graph (e.g. ONNX) and compile into Universal IR inside a .cnox container.
Runtime verifies integrity (SHA-256); optional signatures bind provenance to the packager.
Executor walks the plan: Turing-incomplete kernel dispatch—no runtime codegen in the hot path.
With audit enabled: log ops, shapes, samples—forensics and regulatory questions get concrete answers.
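The plan walk above can be pictured as a straight-line interpreter over a closed opset: the plan is a finite list with no branches, loops, or runtime codegen, so execution provably terminates after exactly one dispatch per step. A minimal sketch — the op names, plan format, and audit record shape here are illustrative, not the Universal IR:

```python
from typing import Callable

# Minimal primitive opset: a closed dispatch table, fixed at build time.
OPSET: dict[str, Callable] = {
    "add":  lambda a, b: [x + y for x, y in zip(a, b)],
    "mul":  lambda a, b: [x * y for x, y in zip(a, b)],
    "relu": lambda a: [max(0.0, x) for x in a],
}

def execute(plan, tensors, audit=None):
    """Walk a static plan top to bottom. No jumps, so this always halts
    after len(plan) dispatches: Turing-incomplete by construction."""
    for step, (op, inputs, output) in enumerate(plan):
        kernel = OPSET[op]  # unknown ops fail closed, before any compute
        result = kernel(*(tensors[name] for name in inputs))
        tensors[output] = result
        if audit is not None:  # optional per-op logging
            audit.append({"step": step, "op": op,
                          "out": output, "shape": (len(result),)})
    return tensors
```

With audit enabled, every dispatch leaves a record, and re-running the same plan on the same tensors yields the same log entry for entry — which is what makes a trace comparable across runs and machines.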
Same narrative as our launch article: why frameworks are the wrong abstraction for auditability, how .cnox and CLF fit together, and what we're building next.
Package, verify, run—the command surface evaluators use alongside the docs.
Try: coelanox --help · coelanox package --help · coelanox --version
Or: coelanox validate -f model.cnox · coelanox env · coelanox run -f model.cnox -o out.json
We're talking to organisations with production inference in regulated environments—two years of roadmap influence and early access for partners who validate against real constraints. CLF spec and reader are on GitHub; runtime pilots go through contact.