$ cat FAQ


Frequently Asked Questions

Answers to common questions about Coelanox, the .cnox format, and the CLI.


q

What is Coelanox?

Coelanox is a custom compiler and specialized virtual machine for running neural networks. You get a sealed .cnox container (model + runtime) and a tiny binary—no Python, no Docker, no OS in the inference path. We call it an AI unikernel: verify, map, execute.

q

What is a .cnox file?

A .cnox file is a Coelanox container: your trained model (e.g. ONNX) packaged with metadata, optional machine code, and everything the runtime needs to run inference. It is self-contained and designed for deployment on bare metal or edge devices.

q

How do I package a model?

Use the CLI: coelanox package -i model.onnx -o out.cnox. You can set a target (cpu, gpu, or edge), an optimization level, and optional flags like --fat or --auto. Try the in-browser CLI on the homepage or run coelanox help package for full options.
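A packaging session might look like the sketch below. The -i/-o, --fat, and --auto flags come from this answer; any target or optimization flag spelling is an assumption, so check coelanox help package for the real names.

```shell
# Package a trained ONNX model into a self-contained .cnox container.
coelanox package -i model.onnx -o out.cnox

# Optional flags named in the docs; combine as needed.
coelanox package -i model.onnx -o out.cnox --fat
coelanox package -i model.onnx -o out.cnox --auto
```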

q

What does the Coelanox CLI do?

The CLI (coelanox) lets you package, validate, verify, inspect, run, and benchmark .cnox containers. Commands include package, validate, verify, info, run, benchmark, env (system info), debug, and extract (standalone runtime binary). Use coelanox --help on the site or in your terminal.
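A quick tour of those subcommands might look like the following sketch. The command names are from this answer; whether info, run, and benchmark take a positional file or a -f flag is an assumption (only validate and verify are documented with -f).

```shell
coelanox env                      # print system information
coelanox info model.cnox          # inspect container metadata (argument form assumed)
coelanox validate -f model.cnox   # check the container is well-formed
coelanox verify -f model.cnox     # check container integrity
coelanox run model.cnox           # run inference (argument form assumed)
coelanox benchmark model.cnox     # measure performance (argument form assumed)
```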

q

Do I need Python or Docker to run inference?

No. Coelanox is designed so the inference path has no Python, no Docker, and no full OS stack. You run a small runtime (or coelanox run) against a .cnox file. For deployment you can extract a standalone coelanox-run binary and ship just that binary alongside your .cnox files.
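That deployment flow can be sketched as below. Only the coelanox extract command and the coelanox-run binary name come from the docs above; the extract output flag and coelanox-run's invocation syntax are assumptions.

```shell
# Produce the standalone runtime binary (output flag name assumed).
coelanox extract -o coelanox-run

# On the target machine, the whole inference stack is one binary
# plus the container -- no Python, no Docker:
./coelanox-run model.cnox   # invocation syntax assumed
```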

q

What is CLF?

CLF (Coelanox Load Format) is our open-source project for backend discovery and loading. It is used under the hood by the toolchain. You can find the repo and docs on the Technology page and GitHub.

q

Where can I see the technical pipeline and roadmap?

The Technology page describes the pipeline (train → export → package → verify → run), deployment targets, and stack. For roadmap and release updates, check the changelog on the homepage or contact us.
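The pipeline stages above can be sketched end-to-end. export_to_onnx.py is a hypothetical training/export script outside Coelanox, and run's argument form is an assumption; the package and verify invocations are from this FAQ.

```shell
# 1. train + export (outside Coelanox, e.g. a PyTorch script
#    that writes model.onnx -- hypothetical script name):
python export_to_onnx.py

# 2. package the exported model into a .cnox container:
coelanox package -i model.onnx -o model.cnox

# 3. verify integrity before deployment:
coelanox verify -f model.cnox

# 4. run inference:
coelanox run model.cnox   # argument form assumed
```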

q

How do I validate or verify a .cnox file?

coelanox validate -f model.cnox checks that the container is well-formed. coelanox verify -f model.cnox checks integrity (e.g. before deployment). Both are in the CLI; you can try them in the browser terminal on the homepage.
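A pre-deployment check can chain the two commands so the pipeline fails fast; both invocations are taken verbatim from the answer above, only the chaining is illustrative.

```shell
# Stop immediately if the container is malformed or fails its
# integrity check; deploy only when both pass.
coelanox validate -f model.cnox \
  && coelanox verify -f model.cnox \
  && echo "model.cnox is ready to deploy"
```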

q

Can I run Coelanox on ARM or edge?

Yes. The CLI supports targets like cpu, gpu, and edge. coelanox extract can produce a standalone runtime for the same or a different architecture (e.g. ARM64) so you deploy a single binary plus .cnox files.
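A cross-deployment sketch, assuming extract accepts target and output flags (both flag names are assumptions; coelanox help extract would give the real ones):

```shell
# On an x86 build machine, produce a standalone ARM64 runtime
# (--target and -o flag names assumed):
coelanox extract --target edge -o coelanox-run

# Ship one binary plus the container to the device:
scp coelanox-run model.cnox device:/opt/app/
```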

q

Does Coelanox provide pre-trained models?

Coelanox is a compiler and runtime for your models, not a model zoo. You bring trained models (e.g. ONNX), package them with coelanox package, and run or deploy the resulting .cnox containers.

q

How do I get support or get in touch?

Use the Contact page for questions, support, or partnership inquiries. We respond as quickly as we can.

q

How can I try the CLI without installing?

The homepage has an in-browser terminal that mocks the full Coelanox CLI. Try coelanox --help, coelanox env, coelanox validate -f model.cnox, and other commands to see the interface and sample output.

← home