# HLM-Micro & HLM-Nano Model Zoo
A collection of polynomial-Hopfield checkpoints across two edge tiers, each trained on a different task, demonstrating that one architecture scales across modalities.
## Access
Trained weights, training recipes, inference code, INT8 deployment sidecars, and the C inference kernel are available through the Early Access program or a commercial engagement. This page lists what's in the zoo and where each checkpoint deploys.
## Released — HLM-Micro tier
Edge / MCU-class targets (ESP32-S3-class hardware, sub-MB flash, hundreds of mW). All four checkpoints share the same architecture — only the input stem, feature layout, and classifier head differ per task.
| Model | Task | Training data | Classes | Accuracy | Model card |
|---|---|---|---|---|---|
| hlm-micro-anomaly-v0 | Multimodal industrial anomaly | synthetic | 3 | 81.2% @ T=5 | card |
| hlm-micro-gesture-v0 | 6-class IMU gesture | synthetic | 6 | 100% (synth ceiling) | card |
| hlm-micro-ecg-v0 | 4-class ECG anomaly (not a medical device) | synthetic | 4 | 83.6% | card |
| hlm-micro-har-v0 ⭐ | 6-class human activity | UCI HAR (real, 30 subjects) | 6 | 89.75% test | card |
The HAR entry is the one to look at first — it's trained on real human data (not synthetic) and it lands inside the published TinyML baseline range of 85–95% on the same benchmark.
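To make the shared-backbone claim concrete, here is a minimal sketch of what per-task configuration could look like, assuming a fixed backbone with only the stem, feature layout, and head varying. `TaskConfig`, the stem names, and the feature dimensions are illustrative guesses, not the shipped config format.

```python
from dataclasses import dataclass

@dataclass
class TaskConfig:
    stem: str          # input adapter (IMU window, ECG strip, ...) -- hypothetical names
    in_features: int   # feature layout presented to the shared backbone
    n_classes: int     # width of the classifier head

# Hypothetical entries; stems and dimensions are guesses for illustration.
ZOO = {
    "hlm-micro-gesture-v0": TaskConfig(stem="imu6",   in_features=384, n_classes=6),
    "hlm-micro-ecg-v0":     TaskConfig(stem="ecg1",   in_features=256, n_classes=4),
    "hlm-micro-har-v0":     TaskConfig(stem="har561", in_features=561, n_classes=6),
}
```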
## Released — HLM-Nano tier
Ultra-edge targets (Arduino Uno, nRF52, RP2040, low-power STM32 Cortex-M). Three preset sizes at KB-scale INT8 footprints, for targets where HLM-Micro won't fit.
| Model | Target hardware | Footprint | Accuracy |
|---|---|---|---|
| hlm-nano-tiny-v0 | Arduino Uno (2 KB SRAM), ATtiny | smallest | 100% (synth) |
| hlm-nano-small-v0 | nRF52832 (64 KB SRAM) | small | 100% (synth) |
| hlm-nano-micro-v0 | RP2040, nRF52840, STM32L4 | micro | 100% (synth) |
The three presets all train to 100% on the same synthetic 3-class sensor-event task (still / periodic / impact). The task is deliberately simple — at this scale the claim is "we have polynomial-Hopfield models that fit in KB, not MB", not single-task benchmark dominance. Real-world deployment would retrain per target application.
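For a feel of the task, here is a rough sketch of a generator for the three event classes. The actual training generator is not published, so the waveform shapes, window length, and noise level below are assumptions.

```python
import numpy as np

def make_event(label: str, n: int = 128, seed: int = 0) -> np.ndarray:
    """Return one synthetic 1-D sensor window for the given class (illustrative)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    noise = rng.normal(0.0, 0.05, n)
    if label == "still":        # flat signal, sensor noise only
        return noise
    if label == "periodic":     # steady oscillation, e.g. machine vibration
        return np.sin(2 * np.pi * t / 16) + noise
    if label == "impact":       # one sharp transient that decays
        return np.exp(-t / 8.0) * (t < 32) + noise
    raise ValueError(f"unknown class: {label}")
```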
Architecture details are on the HLM-Nano page.
## In development
Training scripts and data loaders are written and dry-run tested; training kickoff is pending source-dataset ingestion or partner target selection.
| Model | Dataset | Status |
|---|---|---|
| hlm-micro-keyword-v0 | Google Speech Commands v2 | Training recipe ready |
| hlm-micro-mimii-v0 | MIMII industrial-machinery acoustic | Training recipe ready |
| hlm-micro-radioml-v0 | DeepSig RadioML 2018.01a | Training recipe ready |
| hlm-nano-keyword variant | Google Speech Commands v2, 12-class | Scaffold ready |
## Audit certificate on every checkpoint
Every zoo model supports the hash-chain audit trail out of the box — one call to generate a certificate, one call to replay-verify. The certificate binds (input, weights hash, per-layer basin trajectory, output) via SHA-256, is ~650 bytes as JSON or ~20 bytes as a bare digest, and adds sub-millisecond overhead per inference.
Under Early Access or commercial engagement, this is delivered as a dependency of the shipped checkpoint — no retraining, no separate model, no post-hoc approximation. See Why HLM gives provable interpretability for the mechanism and EU AI Act compliance for the legal framing.
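A minimal sketch of how such a hash chain can be computed with Python's hashlib, assuming the per-layer basin trajectory is exposed after inference. The function names, binding order, and certificate fields here are assumptions, not the shipped API.

```python
import hashlib

def make_certificate(inp: bytes, weights_hash: bytes,
                     basin_trajectory: list, out: bytes) -> dict:
    """Chain SHA-256 over (input, weights hash, per-layer basins, output)."""
    h = hashlib.sha256(inp + weights_hash).digest()
    chain = []
    for basin in basin_trajectory:              # one link per layer
        h = hashlib.sha256(h + basin).digest()
        chain.append(h.hex())
    digest = hashlib.sha256(h + out).hexdigest()
    return {"weights": weights_hash.hex(), "chain": chain, "digest": digest}

def replay_verify(cert: dict, inp, weights_hash, basin_trajectory, out) -> bool:
    """Recompute the chain from the same evidence and compare final digests."""
    replay = make_certificate(inp, weights_hash, basin_trajectory, out)
    return replay["digest"] == cert["digest"]

# Toy end-to-end check with placeholder evidence.
w = hashlib.sha256(b"weights").digest()
cert = make_certificate(b"window", w, [b"basin0", b"basin1"], b"logits")
assert replay_verify(cert, b"window", w, [b"basin0", b"basin1"], b"logits")
```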
## Runtime compute dial — measured on hlm-micro-anomaly-v0
Same weights; the mode is user-selectable at load time:
| Mode | T | Accuracy | Latency (CPU, 1 inference) |
|---|---|---|---|
| Emergency | 1 | 42.67% | 0.30 ms |
| Deploy (default) | 3 | 63.33% | 0.80 ms |
| Quality | 5 | 81.17% | 1.28 ms |
The T dial is an architectural property, not a feature that needs per-task tuning. Every zoo model has it. For a battery-powered sensor node, T=1 during return-to-base and T=5 during anomaly response is a natural deployment pattern.
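As a usage sketch, the battery-node pattern above maps to a simple policy. The real loading API is not shown on this page, so the `DialPolicy` class and `pick_T` helper are assumptions for illustration; only the T values and measurements come from the table.

```python
from dataclasses import dataclass

@dataclass
class DialPolicy:
    emergency: int = 1   # T=1: 42.67%, 0.30 ms
    deploy: int = 3      # T=3: 63.33%, 0.80 ms (default)
    quality: int = 5     # T=5: 81.17%, 1.28 ms

def pick_T(p: DialPolicy, returning_to_base: bool, anomaly_suspected: bool) -> int:
    """Battery-node pattern from the text: T=1 en route home, T=5 on anomaly."""
    if returning_to_base:
        return p.emergency
    return p.quality if anomaly_suspected else p.deploy
```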
## Comparison to TinyML
The HLM-Micro zoo is not meant to win on raw single-task accuracy — hand-tuned TinyML models specialised to one task will beat a general-purpose HLM-Micro on that task. The pitch is the combination of properties no TinyML model offers:
- Per-inference cryptographic audit trail
- Runtime compute dial on the same weights
- Single-checkpoint multimodal (text + audio + sensor via modality tag; sketched below)
- Post-deployment concept editing via small OTA basin patches
- EU-sovereign IP stack
On UCI HAR (the only head-to-head comparable entry today), HLM-Micro at 89.75% is within the TinyML baseline range — losing by ~5% on raw accuracy, gaining those five properties.
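The modality-tag mechanism from the list above can be pictured as a small input-framing step. This is purely illustrative; the actual tag encoding is not documented here, and the enum values and function are assumptions.

```python
from enum import IntEnum
import numpy as np

class Modality(IntEnum):   # hypothetical tag values
    TEXT = 0
    AUDIO = 1
    SENSOR = 2

def tagged_input(x: np.ndarray, modality: Modality) -> np.ndarray:
    """Prepend a one-hot modality tag so one set of weights can branch on it."""
    tag = np.zeros(len(Modality), dtype=x.dtype)
    tag[int(modality)] = 1.0
    return np.concatenate([tag, x.ravel()])
```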
## How models are named
hlm-micro-<task>-<size> / hlm-nano-<preset>-<size>:
| \<size\> | Meaning |
|---|---|
| v0 | Current release — CPU-trainable reference checkpoint |
| full | ESP32-S3 target (Micro) per HLM-Micro spec |
| xl | Stretch hardware (Alif Ensemble / K230 / BL808) |
The v1 progression will run v0 → full once the INT4 QAT pipeline matures for each task.
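A convenience parser for this scheme — a sketch, not shipped tooling; the regex assumes exactly the three size suffixes listed above, and for the nano tier the middle field is the preset rather than a task name.

```python
import re

NAME_RE = re.compile(r"^hlm-(?P<tier>micro|nano)-(?P<task>[a-z0-9]+)-(?P<size>v0|full|xl)$")

def parse_name(name: str) -> dict:
    """Split a zoo checkpoint name into tier, task/preset, and size fields."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a zoo checkpoint name: {name}")
    return m.groupdict()

assert parse_name("hlm-micro-har-v0") == {"tier": "micro", "task": "har", "size": "v0"}
```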
## License
Zoo weights and reference code, once access is granted: BSL 1.1 (Apache 2.0 in 2030), consistent with the rest of the qriton-hlm platform. Training datasets carry their own licenses — UCI HAR requires attribution to Anguita et al. 2013; MIMII is CC BY-SA; RadioML is distributed under DeepSig's research license; synthetic-data models are free-use.