HLM-Micro — Edge & MCU Variant

HLM-Micro is Qriton's polynomial-Hopfield variant designed from the ground up for microcontroller-class hardware: ESP32-S3, Alif Ensemble, Kendryte K230, BL808. It is not a quantisation of HLM3 but a separate architecture that inherits the same multi-basin energy landscape and is purpose-built for sub-MB flash, sub-watt deployments.

Access

Trained weights and deployment artefacts are available through the Early Access program or a commercial engagement. This page describes what the architecture is and where it deploys.

Why a separate variant

HLM3 doesn't fit on an MCU, and simply quantising it down doesn't solve the problem. HLM-Micro inherits the polynomial d=3 Hopfield dynamics, drops the architectural pieces that don't earn their parameter cost at this scale, and adds features specific to the edge envelope:

  • Runtime compute dial — the same weights serve a low-power emergency mode, a default deploy mode, and a quality mode; the user selects a mode at inference time, with no retraining.
  • Basin paging — hot-set basins resident in fast memory, warm in PSRAM, cold paged from QSPI flash. LRU eviction with async prefetch.
  • Multimodal single checkpoint — a short modality prefix lets one model handle text, audio, sensor, and vision-crop inputs through a shared 1D-conv stem.
  • Hash-chain audit trail — every inference emits a SHA-256 digest binding input, weights, and basin-ID trajectory. Tamper-evident, replay-verifiable.
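The hash-chain audit trail in the list above can be sketched as follows. This is a minimal illustration, not the shipped certificate format: the record field names, the genesis value, and the `basin_trajectory` encoding are assumptions; only the chaining principle (each digest covers the previous digest, so any tampering breaks all later verifications) comes from the description above.

```python
import hashlib
import json

def audit_digest(prev_digest: str, input_frame: bytes,
                 weights_hash: str, basin_trajectory: list[int]) -> str:
    """Bind one inference into the chain. The digest covers the previous
    digest, the input, the weights identity, and the basin-ID trajectory,
    making the log tamper-evident and replay-verifiable."""
    record = json.dumps({
        "prev": prev_digest,
        "input": hashlib.sha256(input_frame).hexdigest(),
        "weights": weights_hash,
        "basins": basin_trajectory,
    }, sort_keys=True).encode()
    return hashlib.sha256(record).hexdigest()

# Chain two inferences; replaying the same inputs reproduces the chain.
GENESIS = "0" * 64
d1 = audit_digest(GENESIS, b"frame-0", "wh-abc", [3, 17, 17])
d2 = audit_digest(d1, b"frame-1", "wh-abc", [5])
```

A verifier with the same inputs recomputes the chain and checks that the final digest matches the one the device emitted.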

Architecture at a glance

  • Polynomial degree: 3 (shared with HLM3)
  • Parameter range: 1–5M
  • FFN: none (pure Hopfield blocks)
  • Precision target: INT4 / INT8 stratified quantisation
  • Flash footprint target: sub-MB
  • Active-power target: hundreds of mW
  • T dial: three operating modes
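The three-mode T dial amounts to a compute budget selected per inference on unchanged weights. A minimal sketch, with placeholder iteration counts (the real per-mode budgets are not public):

```python
from enum import Enum

class ComputeMode(Enum):
    """One set of weights, three operating points.
    The step counts here are illustrative, not the real budgets."""
    EMERGENCY = 2   # minimal settling, lowest power
    DEPLOY = 8      # default deployment mode
    QUALITY = 32    # extra settling iterations

def settle(state, update_fn, mode: ComputeMode = ComputeMode.DEPLOY):
    """Run the Hopfield update rule for the mode's iteration budget."""
    for _ in range(mode.value):
        state = update_fn(state)
    return state

# Same update_fn (same weights), different compute at inference time.
result = settle(1, lambda s: s + 1, ComputeMode.EMERGENCY)  # → 3
```

The point of the dial is that switching modes changes only the loop bound, so no retraining or second checkpoint is needed.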

Exact architectural dimensions, precision stratification, and basin-paging details are shared with Early Access partners under the standard evaluation terms.
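As a rough illustration of the basin-paging scheme, a hot-set cache with LRU eviction might look like the sketch below. The real tiering (fast memory / PSRAM / QSPI flash), async prefetch, and eviction policy details are exactly the parts not published here; this shows only the resident-set mechanics.

```python
from collections import OrderedDict

class BasinCache:
    """Hot-set basins resident in fast memory; on a miss the basin is
    paged in from backing storage (standing in for PSRAM/flash) and the
    least-recently-used resident basin is evicted. Async prefetch is
    omitted from this sketch."""

    def __init__(self, backing: dict, capacity: int = 4):
        self.backing = backing          # basin_id -> centroid data
        self.capacity = capacity
        self.resident = OrderedDict()   # insertion order tracks recency

    def get(self, basin_id):
        if basin_id in self.resident:
            self.resident.move_to_end(basin_id)    # mark most recent
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # evict LRU basin
            self.resident[basin_id] = self.backing[basin_id]
        return self.resident[basin_id]
```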

Status

The v0 MVP is validated, with multiple trained checkpoints across distinct tasks: industrial anomaly detection, gesture recognition, human activity recognition, and ECG anomaly detection. One architecture, different heads per task. See the Model Zoo for the published checkpoint set.

Headline result

On UCI HAR (30 subjects, subject-disjoint test split, real human IMU data), HLM-Micro reaches 89.75% test accuracy, within the published TinyML baseline range of 85–95% on the same benchmark. Full model card: hlm-micro-har-v0.

Deployment shape

Two supported patterns:

  1. Host-inference — MCU reads sensors and streams feature frames to a laptop / gateway that runs HLM-Micro and emits audit certificates. Simpler integration, full certificate support, good for pilots.
  2. On-MCU inference — model runs natively on ESP32-S3-class hardware. Custom C runtime with INT8 deployment. Partnership-level engagement.
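In the host-inference pattern, the MCU's job reduces to packing sensor feature frames and streaming them out. The sketch below shows one plausible frame layout; the header fields (sequence number, modality prefix, payload length) and the little-endian format string are assumptions, not the actual wire protocol, and transport (UART/TCP) is omitted.

```python
import struct

# Assumed header: u16 sequence, u8 modality prefix, u8 payload length.
FRAME_FMT = "<HBB"
HDR_SIZE = struct.calcsize(FRAME_FMT)

def pack_frame(seq: int, modality: int, features: bytes) -> bytes:
    """MCU side: pack one feature frame for streaming to the host."""
    return struct.pack(FRAME_FMT, seq, modality, len(features)) + features

def unpack_frame(buf: bytes) -> tuple[int, int, bytes]:
    """Host side: recover sequence, modality, and the feature payload."""
    seq, modality, n = struct.unpack_from(FRAME_FMT, buf)
    return seq, modality, buf[HDR_SIZE:HDR_SIZE + n]
```

The modality byte corresponds to the modality prefix mentioned earlier, letting the host route text, audio, sensor, and vision-crop frames to the same shared-stem model.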

Energy Language integration

HLM-Micro models use the same BasinSurgeon API as HLM3 / HLM-Spatial / HLM-Audio. The basin centroids are addressable; the Energy Language operations apply identically, just at smaller scale.

Intended workflow:

  1. Capture / edit concepts on an HLM-Micro model using Energy Language on a laptop
  2. Export the modified basin as a small OTA patch
  3. Push the patch to deployed MCUs over standard update channels
  4. Devices reload the affected basin without a full firmware reflash — concept edited in the field
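The export-and-push steps above can be sketched as a tiny patch format: one basin ID plus its edited centroid, prefixed by an integrity digest. The binary layout and function names here are illustrative assumptions; only the workflow (small patch, verify, hot-swap one basin, no reflash) comes from the list above.

```python
import hashlib
import struct

def export_basin_patch(basin_id: int, centroid: list[float]) -> bytes:
    """Laptop side: serialize one edited basin as a small OTA patch.
    Layout (assumed): sha256 digest | u32 basin_id | float32 centroid."""
    payload = struct.pack(f"<I{len(centroid)}f", basin_id, *centroid)
    return hashlib.sha256(payload).digest() + payload

def apply_basin_patch(basins: dict, patch: bytes) -> None:
    """Device side: verify integrity, then hot-swap the affected basin
    in place; the rest of the model and firmware are untouched."""
    digest, payload = patch[:32], patch[32:]
    if hashlib.sha256(payload).digest() != digest:
        raise ValueError("corrupt basin patch")
    basin_id = struct.unpack_from("<I", payload)[0]
    n = (len(payload) - 4) // 4
    basins[basin_id] = list(struct.unpack_from(f"<{n}f", payload, 4))
```

Because only one basin's data travels over the air, the patch stays far smaller than a firmware image, which is what makes field-editing a single concept practical.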

Commercial positioning

HLM-Micro fits best where standard TinyML doesn't:

  • Regulated industrial sensors where audit-trail requirements rule out standard TinyML
  • Defense / critical-infrastructure edge nodes requiring EU-sovereign stack
  • Multimodal smart-building devices where one MCU handles audio + vibration + commands on one model
  • Medical wearables where regulator-facing explainability is part of the product

Not positioned for: consumer smart speakers, hobbyist IoT, or any environment where a general-purpose SBC is acceptable.

Contact us for partnership or pilot-project discussion.