Coelanox · Invisible inference infrastructure

Your AI model is unauditable by design. We fixed that.

The coelacanth was thought extinct for millions of years—then found, still whole. Coelanox is built on the same idea: an invisible layer between packaging and execution. Nothing extra in the hot path—until you need proof the path never wandered.

PyTorch, TensorFlow, ONNX Runtime—they are general-purpose engines built for flexibility, not verifiability. You get a number back, not a tamper-evident record of every operation that produced it. Coelanox is a sealed binary runtime: cryptographically verified .cnox containers, a Turing-incomplete executor, and a minimal primitive opset so inference can be audited at the compute layer—not reconstructed after the fact. No Python. No Docker. No OS in the hot path.

  • 52 primitives cover most production graphs
  • SHA-256 verify before run; optional signing for provenance

Runtime model

  • Runtime model: Deterministic · sealed
  • Integrity: SHA-256 before execute
  • Executor: Turing-incomplete
  • Audit: Per-op logging (optional)

The framework problem isn't a bug. It's architectural.

Dynamic graphs, dispatchers, and runtime kernel selection are what make research fast—and what make "what exactly ran?" unanswerable in the general case.

General-purpose inference

  • Execution path decided at runtime by framework + backend + kernels
  • No single inspectable record of every op in order
  • Explainability tools interpret outputs—they don't audit the computation

Coelanox

  • Model ships as a sealed .cnox container—verify before a single op runs
  • Walk a fixed plan over a minimal opset; optional op-by-op audit trail
  • Built for environments where you must prove, not assert, what computed

A compliance enabler—not a checkbox product

We don't tell you what your model should do. We give your team the primitives to verify that it did exactly what it was supposed to do, at the compute level.

Regulated deployment

Evidence of what computed—not just what was returned—matters for SaMD, EU AI Act-style audit pressure, and model risk management.

Air-gapped & offline

No Python runtime or framework in the hot path. Package the model, ship the binary, run anywhere the container is trusted.

Tamper-evident artifacts

Any change to weights, graph, or kernels is detectable before execution. Provenance can be cryptographic, not assumed.

BERT today, performance next

The scalar backend proves correctness and auditability first; SIMD and vendor backends come next, layered on without breaking the audit story.

From framework export to provable inference

Train anywhere you like. Coelanox takes over once there is a static graph and tensors to seal.

1. Export a frozen graph (e.g. ONNX) and compile it into Universal IR inside a .cnox container.
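The real .cnox layout is not public, so as an illustrative sketch only, here is what sealing a graph and its weights behind SHA-256 digests could look like using nothing but the Python standard library. Every name here (pack_cnox, the MAGIC bytes, the manifest fields) is hypothetical, not the actual format.

```python
import hashlib
import json
import struct

MAGIC = b"CNOX"  # hypothetical magic bytes, not the real container header


def pack_cnox(graph_ir: bytes, weights: bytes) -> bytes:
    """Seal graph IR and weights behind a hash-addressed manifest.

    Layout (illustrative): MAGIC, then length-prefixed sections in order:
    JSON manifest, graph IR, weights. The manifest pins each section's
    SHA-256, so any post-packaging change is detectable before execution.
    """
    manifest = json.dumps(
        {
            "graph_sha256": hashlib.sha256(graph_ir).hexdigest(),
            "weights_sha256": hashlib.sha256(weights).hexdigest(),
        },
        sort_keys=True,
    ).encode()
    sections = (manifest, graph_ir, weights)
    return MAGIC + b"".join(struct.pack(">I", len(s)) + s for s in sections)
```

The point of the sketch is the ordering: digests are computed at packaging time and travel inside the artifact, so the runtime can refuse to execute anything whose bytes no longer match.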

2. The runtime verifies integrity (SHA-256); optional signatures bind provenance to the packager.
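Verify-before-execute reduces to a streamed digest comparison. A minimal sketch, assuming the expected digest arrives out of band (in practice it would come from the container's signed manifest):

```python
import hashlib
import hmac


def verify_sha256(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file and compare its SHA-256 against the expected digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    # compare_digest avoids timing side channels on the comparison itself
    return hmac.compare_digest(h.hexdigest(), expected_hex)
```

A runtime built around this check simply refuses to dispatch a single op when the function returns False.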

3. The executor walks the plan: Turing-incomplete kernel dispatch, with no runtime codegen in the hot path.
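A Turing-incomplete executor can be pictured as a straight-line walk over a pre-ordered plan: the dispatch table is closed, and the plan itself carries no loops or branches. A toy sketch under those assumptions; OPSET, the plan tuples, and the 1-D list "tensors" are invented for illustration, not Coelanox's actual IR:

```python
from typing import Callable

# Closed dispatch table: only these ops can ever run (illustrative subset).
OPSET: dict[str, Callable] = {
    "add": lambda a, b: [x + y for x, y in zip(a, b)],
    "mul": lambda a, b: [x * y for x, y in zip(a, b)],
    "relu": lambda a: [max(x, 0.0) for x in a],
}


def execute(plan, tensors):
    """Walk a fixed, pre-ordered plan exactly once.

    Each step names an op in OPSET, its input slots, and one output slot.
    There is no branching, no recursion, and no codegen: execution time is
    bounded by the plan's length, which is what makes the walk auditable.
    """
    for op, inputs, output in plan:
        tensors[output] = OPSET[op](*(tensors[i] for i in inputs))
    return tensors


plan = [
    ("mul", ("x", "w"), "h"),
    ("add", ("h", "b"), "z"),
    ("relu", ("z",), "y"),
]
state = execute(plan, {"x": [1.0, -2.0], "w": [3.0, 3.0], "b": [0.5, 0.5]})
```

Because the opset is closed and the plan is data rather than code, "what exactly ran?" has a finite, inspectable answer: the plan, in order.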

4. With audit enabled, ops, shapes, and samples are logged; forensics and regulatory questions get concrete answers.
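With audit enabled, each dispatch can append a structured record before the next op runs. A sketch of what a per-op trail might capture; the field names are illustrative, not the real CLF schema, and "shape" is just list length for the toy 1-D tensors:

```python
import json


def run_with_audit(plan, tensors, opset):
    """Execute a fixed plan, recording one audit entry per dispatched op:
    step index, op name, input shapes, and a small output sample."""
    trail = []
    for step, (op, inputs, output) in enumerate(plan):
        args = [tensors[i] for i in inputs]
        tensors[output] = opset[op](*args)
        trail.append({
            "step": step,
            "op": op,
            "input_shapes": [len(a) for a in args],
            "output_sample": tensors[output][:2],
        })
    return tensors, trail


opset = {"add": lambda a, b: [x + y for x, y in zip(a, b)]}
_, trail = run_with_audit(
    [("add", ("x", "b"), "y")],
    {"x": [1.0, 2.0], "b": [0.5, 0.5]},
    opset,
)
print(json.dumps(trail, indent=2))
```

A trail like this is what turns a forensic question ("did op 17 see the shape it was supposed to?") into a lookup rather than a reconstruction.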

Read the full technical thesis

Same narrative as our launch article: why frameworks are the wrong abstraction for auditability, how .cnox and CLF fit together, and what we're building next.

Try the CLI in your browser

Package, verify, run—the command surface evaluators use alongside the docs.

coelanox 0.6.0-beta — try in browser
Try: coelanox --help, coelanox package --help, coelanox --version
Or: coelanox validate -f model.cnox, coelanox env, coelanox run -f model.cnox -o out.json

Design partnerships · early access

We're talking to organisations with production inference in regulated environments; partners who validate against real constraints get two years of roadmap influence and early access. The CLF spec and reader are on GitHub; runtime pilots go through contact.