Technology

$ coelanox --technology

At its core, Coelanox is a compiler and a tiny VM for neural networks, built around a sealed container format.

The model is no longer a script or framework object. It becomes a standalone, compiled artifact you can ship and verify like a binary.

Three physical pieces: the Format (.cnox), the Packager, the Runtime. Together they behave like an AI unikernel.

pipeline

framework → .cnox → hardware target

PyTorch → framework graph → .cnox → Cloud CPU. One sealed bottleneck between many frontends and many backends.

build-time

Train with your stack, freeze into a static graph.

You stay in PyTorch / TensorFlow / JAX to train. When you are ready, you export a frozen computation graph and its weights.

deployment shapes — same .cnox

Baseline deployment: x86 nodes, no GPU.

Run a sealed .cnox on plain CPU instances. No Python, no Conda, no Docker in the inference path.

$ coelanox run model.cnox --backend cpu

1. The Format .cnox

A proprietary, cryptographically sealed binary. Think .exe or .app, but exclusively for AI.

Inside the single file:

  • The model's neural network graph → 51 basic math instructions
  • The model's weights (compressed)
  • Pre-compiled hardware machine code (SIMD, GPU kernels)
  • A rigid execution plan (exact memory addresses)
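The real .cnox layout is proprietary and not published. As a rough mental model only, a sealed single-file container can be sketched with an invented header: a magic number, a version, a section count, and a SHA-256 digest over the payload. Every field name and offset below is an assumption for illustration, not the actual format.

```python
import hashlib
import struct

# Hypothetical header layout (NOT the real .cnox format):
#   magic (4s) | version (H) | section count (H) | SHA-256 digest (32s)
HEADER = struct.Struct("<4sHH32s")

def pack_header(version: int, sections: int, payload: bytes) -> bytes:
    """Build a toy header whose digest field seals the payload."""
    digest = hashlib.sha256(payload).digest()
    return HEADER.pack(b"CNOX", version, sections, digest)

def parse_header(blob: bytes):
    """Read the toy header back; reject anything without the magic bytes."""
    magic, version, sections, digest = HEADER.unpack_from(blob)
    assert magic == b"CNOX", "not a .cnox container"
    return version, sections, digest

payload = b"\x00" * 16                      # stand-in for graph + weights + code
blob = pack_header(1, 4, payload) + payload
version, sections, digest = parse_header(blob)
```

The point of the sketch is the shape, not the fields: one file, one header, one digest that covers everything after it.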

2. The Packager

An offline compiler and linker: it translates standard models and links them into the sealed container.

  • Ingests standard models (e.g. ONNX)
  • Strips framework dependencies
  • Converts math into Universal IR
  • Embeds raw vendor machine code (CLF blobs) into the .cnox payload
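The lowering step above can be sketched as a toy pass. The op names, graph encoding, and IR tuples here are invented stand-ins (the real Packager ingests ONNX and targets the 51-instruction set of the proprietary IR):

```python
# Toy sketch of "convert math into Universal IR". Opcodes and the graph
# representation are made up for illustration.
OPCODES = {"matmul": 0, "add": 1, "relu": 2}   # stand-ins for the 51 real ops

def lower(graph):
    """Flatten a framework-style node list into (opcode, inputs, output) tuples."""
    ir = []
    for node in graph:
        ir.append((OPCODES[node["op"]], tuple(node["inputs"]), node["output"]))
    return ir

frozen_graph = [
    {"op": "matmul", "inputs": ("x", "W"), "output": "h"},
    {"op": "add",    "inputs": ("h", "b"), "output": "z"},
    {"op": "relu",   "inputs": ("z",),     "output": "y"},
]
ir = lower(frozen_graph)
```

After a pass like this, nothing framework-specific survives: the graph is just a flat list of opcode tuples, ready to be paired with machine code and sealed.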

3. The Runtime

Lightweight, Turing-incomplete execution engine. No web server. No network stack.

Given a .cnox file and input floats, it does exactly three things:

  1. Verify — SHA-256 hash check (tamper detection)
  2. Map — mmap embedded machine code into executable memory; strict code/data separation (Harvard architecture)
  3. Execute — Walk the execution plan step-by-step; input numbers → machine code → output logits
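A minimal sketch of that three-phase control flow, with invented stand-ins throughout: pure-Python kernels in place of mmapped machine code, a dict as the register file, and an immutable tuple as a nod to code/data separation.

```python
import hashlib

# Toy model of the runtime's three phases (verify, map, execute).
# Nothing here is the real runtime; it only mirrors the control flow.
KERNELS = {0: lambda a, b: a + b, 1: lambda a, b: a * b}   # invented op table

def run(container, inputs):
    # 1. Verify: refuse to execute on hash mismatch.
    if hashlib.sha256(container["payload"]).digest() != container["digest"]:
        raise RuntimeError("tamper detected: hash mismatch")
    # 2. Map: the real runtime mmaps native code into executable pages;
    #    here the plan is frozen into an immutable tuple.
    plan = tuple(container["plan"])
    # 3. Execute: walk the execution plan step by step.
    regs = dict(inputs)
    for opcode, a, b, out in plan:
        regs[out] = KERNELS[opcode](regs[a], regs[b])
    return regs

payload = b"weights-and-code"
container = {
    "payload": payload,
    "digest": hashlib.sha256(payload).digest(),
    "plan": [(1, "x", "w", "h"), (0, "h", "b", "y")],   # y = x * w + b
}
out = run(container, {"x": 3.0, "w": 2.0, "b": 1.0})
```

With x = 3, w = 2, b = 1 the plan computes h = 6 and y = 7; corrupt a single payload byte and `run` raises before any kernel executes.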

CLF: sealed low-level libraries inside the container

Mode: hybrid — container brings kernels + executor, host owns allocation and protection.

CLF is open source: the spec, reader, packer, and op-id registry are on GitHub.

system properties

safety & security

Every .cnox has a SHA-256 over the entire payload embedded in its header. Verify-before-run is the default: if the hash does not match, the runtime refuses to execute.
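The property is easy to demonstrate in miniature. Assuming a hypothetical digest-then-payload layout (the real header layout is not published), flipping a single bit anywhere in the payload changes the SHA-256 digest, so verification fails and the runtime refuses to execute:

```python
import hashlib

def sealed(payload: bytes) -> bytes:
    """Toy container: 32-byte SHA-256 digest followed by the payload."""
    return hashlib.sha256(payload).digest() + payload

def verify(blob: bytes) -> bool:
    """Verify-before-run: recompute the digest and compare to the header."""
    digest, payload = blob[:32], blob[32:]
    return hashlib.sha256(payload).digest() == digest

good = sealed(b"graph+weights+machine-code")
bad = bytearray(good)
bad[40] ^= 0x01                 # flip one bit inside the payload
ok_good = verify(good)          # untampered: runtime may execute
ok_bad = verify(bytes(bad))     # tampered: runtime refuses
```

Note the check covers the entire payload, so graph, weights, and embedded machine code are all under the same seal.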

summary

Coelanox is an AI Unikernel: bare-metal systems architecture that removes the OS, Docker, and Python from the AI inference path. A neural network becomes nothing but raw, sealed, executable math.

> contact