HLM-Micro & HLM-Nano Model Zoo

A collection of polynomial-Hopfield checkpoints across two edge tiers, each trained on a different task, demonstrating that one architecture scales across modalities.

Access

Trained weights, training recipes, inference code, INT8 deployment sidecars, and the C inference kernel are available through the Early Access program or a commercial engagement. This page lists what's in the zoo and where each checkpoint deploys.

Released — HLM-Micro tier

Edge / MCU-class (ESP32-S3 class hardware, sub-MB flash, hundreds of mW). All four share the same architecture — only the input stem, feature layout, and classifier head differ per task.

| Model | Task | Training data | Classes | Accuracy | Model card |
|---|---|---|---|---|---|
| hlm-micro-anomaly-v0 | Multimodal industrial anomaly | synthetic | 3 | 81.2% @ T=5 | card |
| hlm-micro-gesture-v0 | 6-class IMU gesture | synthetic | 6 | 100% (synth ceiling) | card |
| hlm-micro-ecg-v0 | 4-class ECG anomaly (not a medical device) | synthetic | 4 | 83.6% | card |
| hlm-micro-har-v0 | 6-class human activity | UCI HAR (real, 30 subjects) | 6 | 89.75% test | card |

The HAR entry is the one to look at first: it's trained on real human data (not synthetic), and its 89.75% falls within the published TinyML baseline range of 85–95% on the same benchmark.

Released — HLM-Nano tier

Ultra-edge (Arduino Uno, nRF52, RP2040, STM32 low-power Cortex-M). Three preset sizes at KB-scale INT8 footprints — sizes where HLM-Micro won't fit.

| Model | Target hardware | Footprint | Accuracy |
|---|---|---|---|
| hlm-nano-tiny-v0 | Arduino Uno (2 KB SRAM), ATtiny | smallest | 100% (synth) |
| hlm-nano-small-v0 | nRF52832 (64 KB SRAM) | small | 100% (synth) |
| hlm-nano-micro-v0 | RP2040, nRF52840, STM32L4 | micro | 100% (synth) |

The three presets all train to 100% on the same synthetic 3-class sensor-event task (still / periodic / impact). The task is deliberately simple — at this scale the claim is "we have polynomial-Hopfield models that fit in KB, not MB", not single-task benchmark dominance. Real-world deployment would retrain per target application.

Architecture detail on the HLM-Nano page.

In development

Training scripts and data loaders are written and tested in dry-run; kickoff pending source-dataset ingestion or partner target selection.

| Model | Dataset | Status |
|---|---|---|
| hlm-micro-keyword-v0 | Google Speech Commands v2 | Training recipe ready |
| hlm-micro-mimii-v0 | MIMII industrial-machinery acoustic | Training recipe ready |
| hlm-micro-radioml-v0 | DeepSig RadioML 2018.01a | Training recipe ready |
| hlm-nano-keyword variant | Google Speech Commands v2, 12-class | Scaffold ready |

Audit certificate on every checkpoint

Every zoo model supports the hash-chain audit trail out of the box — one call to generate a certificate, one call to replay-verify. The certificate binds (input, weights hash, per-layer basin trajectory, output) via SHA-256, is ~650 bytes as JSON or ~20 bytes as a bare digest, and costs sub-millisecond per inference.
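The hash-chain mechanics can be sketched in a few lines. This is an illustrative sketch only: the function names (`generate_certificate`, `replay_verify`), the field layout, and the chaining order are assumptions for exposition, not the shipped API.

```python
import hashlib

def generate_certificate(input_bytes, weights_hash, basin_trajectory, output_bytes):
    """Hypothetical hash-chain certificate: each element is folded into a
    running SHA-256 digest, so tampering with the input, the weights hash,
    any per-layer basin ID, or the output breaks replay verification."""
    digest = hashlib.sha256(input_bytes).digest()
    for basin_id in basin_trajectory:  # per-layer basin trajectory
        digest = hashlib.sha256(digest + weights_hash + str(basin_id).encode()).digest()
    digest = hashlib.sha256(digest + output_bytes).digest()
    return {
        "weights": weights_hash.hex(),
        "trajectory": list(basin_trajectory),
        "digest": digest.hex(),  # full 32-byte SHA-256; a bare-digest form would ship just this
    }

def replay_verify(cert, input_bytes, weights_hash, basin_trajectory, output_bytes):
    """Recompute the chain from the claimed inputs and compare digests."""
    fresh = generate_certificate(input_bytes, weights_hash, basin_trajectory, output_bytes)
    return fresh["digest"] == cert["digest"]
```

Because the digest is a pure function of (input, weights hash, trajectory, output), verification is a replay of the same cheap hashing, which is why per-inference cost stays sub-millisecond.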

Under Early Access or commercial engagement, this is delivered as a dependency of the shipped checkpoint — no retraining, no separate model, no post-hoc approximation. See Why HLM gives provable interpretability for the mechanism and EU AI Act compliance for the legal framing.

Runtime compute dial — measured on hlm-micro-anomaly-v0

Same weights, user-selectable at load time:

| Mode | T | Accuracy | Latency (CPU, 1 inference) |
|---|---|---|---|
| Emergency | 1 | 42.67% | 0.30 ms |
| Deploy (default) | 3 | 63.33% | 0.80 ms |
| Quality | 5 | 81.17% | 1.28 ms |

The T dial is an architectural property, not a feature that needs per-task tuning. Every zoo model has it. For a battery-powered sensor node, T=1 during return-to-base and T=5 during anomaly response is a natural deployment pattern.
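The battery-node pattern above can be sketched as a mode-selection policy. The `MODES` mapping and `select_T` helper are illustrative names, not the shipped loader API; the T values mirror the hlm-micro-anomaly-v0 table.

```python
# Hypothetical policy for the runtime compute dial (same weights, T chosen at load).
MODES = {
    "emergency": 1,  # T=1: 42.67% acc, 0.30 ms -- minimal energy
    "deploy":    3,  # T=3: 63.33% acc, 0.80 ms -- default
    "quality":   5,  # T=5: 81.17% acc, 1.28 ms -- anomaly response
}

def select_T(battery_low: bool, anomaly_active: bool) -> int:
    """Deployment pattern from the text: T=5 while responding to an anomaly,
    T=1 during return-to-base on low battery, default T=3 otherwise."""
    if anomaly_active:
        return MODES["quality"]
    if battery_low:
        return MODES["emergency"]
    return MODES["deploy"]
```

The point of the sketch is that the policy lives entirely outside the checkpoint: switching T changes latency and accuracy without touching the weights.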

Comparison to TinyML

The HLM-Micro zoo is not meant to win on raw single-task accuracy — hand-tuned TinyML models specialised to one task will beat a general-purpose HLM-Micro on that task. The pitch is the combination of properties no TinyML model offers:

  • Per-inference cryptographic audit trail
  • Runtime compute dial on the same weights
  • Single-checkpoint multimodal (text + audio + sensor via modality tag)
  • Post-deployment concept editing via small OTA basin patches
  • EU-sovereign IP stack

On UCI HAR (the only head-to-head comparable entry today), HLM-Micro at 89.75% is within the TinyML baseline range — losing by ~5% on raw accuracy, gaining those five properties.

How models are named

hlm-micro-<task>-<size> / hlm-nano-<preset>-<size>:

| `<size>` | Meaning |
|---|---|
| v0 | Current release: CPU-trainable reference checkpoint |
| full | ESP32-S3 target (Micro) per HLM-Micro spec |
| xl | Stretch hardware (Alif Ensemble / K230 / BL808) |

v1 progression will run v0 → full once the INT4 QAT pipeline matures for each task.
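The naming convention can be captured in a single regex. This is a sketch of the scheme as documented above, not an official validator (in particular, it assumes single-token task/preset names and the three listed sizes).

```python
import re

# hlm-<tier>-<task-or-preset>-<size>, per the naming table above.
NAME_RE = re.compile(r"^hlm-(micro|nano)-([a-z0-9]+)-(v0|full|xl)$")

def parse_name(name: str) -> dict:
    """Split a zoo checkpoint name into tier, task/preset, and size."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a zoo checkpoint name: {name}")
    tier, middle, size = m.groups()
    return {"tier": tier, "task_or_preset": middle, "size": size}
```

For example, `parse_name("hlm-micro-anomaly-v0")` yields tier `micro`, task `anomaly`, size `v0`.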

License

Zoo weights and reference code, once access is granted: BSL 1.1 (Apache 2.0 in 2030), consistent with the rest of the qriton-hlm platform. Training datasets carry their own licenses — UCI HAR requires attribution to Anguita et al. 2013; MIMII is CC-BY; RadioML is distributed under DeepSig's research license; synthetic-data models are free-use.