Use Case: Industrial Anomaly Detection on the Factory Floor
End-to-end edge-AI pattern that reads vibration from a machine, classifies its state (normal / warning / fault) on a microcontroller, and emits a cryptographically replayable audit certificate per classification.
Access
Trained checkpoints, reference firmware, and host-integration code are available through the Early Access program or a commercial engagement. This page describes the deployment pattern — the shipping artefacts are private.
Who this is for
- Manufacturing data engineers who own a production line and want predictive maintenance without a cloud dependency
- Maintenance ops leads who need an audit trail for every "replace this bearing" decision
- Systems integrators building monitoring into new equipment who need an alternative to per-machine TinyML models
The problem you're solving
Standard predictive maintenance today forces a hard choice:
| Option | What breaks |
|---|---|
| Cloud inference on streaming data | Bandwidth cost, latency, vendor lock-in, no audit trail |
| TinyML per-sensor-per-task | N models × N devices × N OTA update pipelines; no modality fusion; no audit |
| Big edge box (Jetson / SBC) | Power + cost per sensor; serviceability; overkill for vibration classification |
What's missing is a €5-class MCU that can fuse multimodal sensor input (vibration + acoustic + text command), expose a runtime compute dial so it can run on battery during power outages, and produce an audit trail that a compliance team or insurer can verify.
What a deployment looks like
A working pilot typically covers:
- An HLM-Micro model classifying sensor windows into normal / warning / fault
- Either a host-inference pattern (MCU streams feature frames to a gateway running HLM-Micro) or a native-MCU pattern (ESP32-S3-class runtime, partnership engagement)
- An audit certificate saved for every classification, replay-verifiable by a third party
- A runtime T-dial demonstrating the emergency / deploy / quality latency-accuracy tradeoff on the same weights
The stack
| Piece | Status |
|---|---|
| HLM-Micro architecture | HLM-Micro model page |
| Pre-trained checkpoint (anomaly-v0) | Model Zoo — Early Access |
| Audit certificate library | Bundled with commercial release |
| Reference ESP32 firmware + host adapter | Partnership-level engagement |
| Your hardware | Any ESP32-S3-class MCU + IMU / accelerometer / acoustic front-end |
Pattern overview
1. Classify
HLM-Micro consumes fixed-length sensor windows (IMU / acoustic / raw ADC) and returns a class label + per-basin confidence + convergence metadata. The same weights serve three operating modes via the runtime T-dial — no retraining needed; the mode is selected at inference time (a minimal host-side sketch follows the list below):
- Emergency (T=1) — battery-outage / fail-safe mode, ~4× faster, coarser accuracy
- Deploy (T=3) — default production mode
- Quality (T=5) — on-demand "explain this event", tightest convergence, full trajectory
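A minimal sketch of what the host-side classify call could look like, assuming a Python binding. The `classify()` call, its `T` keyword, the result fields, and the commented `load_checkpoint` / `read_imu_window` helpers are illustrative assumptions standing in for the private Early Access API, not the shipped interface.

```python
import numpy as np

WINDOW_LEN = 1024                      # fixed-length sensor window (samples), assumed
T_MODES = {"emergency": 1, "deploy": 3, "quality": 5}

def classify_window(model, window: np.ndarray, mode: str = "deploy"):
    """One inference at the chosen T setting; the same weights serve all modes."""
    result = model.classify(window, T=T_MODES[mode])
    # Assumed result fields mirroring the prose: class label, per-basin
    # confidence, and convergence metadata (per-layer basin trajectory).
    return result.label, result.basin_confidence, result.convergence

# model = load_checkpoint("anomaly-v0")        # Early Access checkpoint
# window = read_imu_window(WINDOW_LEN)         # your sensor front-end
# label, conf, meta = classify_window(model, window, mode="quality")
```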
2. Certify
Every inference can emit a hash-chain audit certificate binding:
- the input sensor window,
- the model weights hash,
- the per-layer basin trajectory,
- the output class.
The certificate is ~650 bytes of JSON (or ~20 bytes as a bare digest). Generation cost: sub-millisecond.
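To make the binding concrete, here is a minimal hash-chain sketch using only the Python standard library. The field names, the chaining order, and the use of SHA-256 are assumptions; the bundled audit library defines the canonical certificate format.

```python
import hashlib
import json

def make_certificate(window_bytes: bytes, weights_hash: str,
                     basin_trajectory: list, output_class: str) -> dict:
    """Chain digests over the four artefacts the certificate binds."""
    chain = hashlib.sha256(window_bytes).hexdigest()                      # input sensor window
    chain = hashlib.sha256((chain + weights_hash).encode()).hexdigest()   # model weights hash
    for step in basin_trajectory:                                         # per-layer basin trajectory
        step_json = json.dumps(step, sort_keys=True)
        chain = hashlib.sha256((chain + step_json).encode()).hexdigest()
    chain = hashlib.sha256((chain + output_class).encode()).hexdigest()   # output class
    return {
        "weights_hash": weights_hash,
        "trajectory": basin_trajectory,
        "output": output_class,
        "digest": chain,   # the bare-digest form would carry only this value (possibly truncated)
    }
```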
3. Deploy
Two supported deployment topologies:
- Host-inference — MCU reads sensors and streams features to a laptop / gateway / RPi-class box that runs HLM-Micro (sketched after this list). Full certificate support. Good for pilots and audit-heavy regulated settings. Simplest integration.
- On-MCU native — model runs in a custom C runtime on ESP32-S3-class hardware. INT8 deployment. Lower BOM, higher integration effort. Partnership-level engagement.
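As a rough illustration of the host-inference topology, the gateway side could be as simple as the loop below. The TCP transport, the length-prefixed float32 framing, the frame size, and the `model.classify()` call are all assumptions; a real pilot might stream over serial or MQTT with its own framing.

```python
import socket

import numpy as np

FRAME_FLOATS = 64                  # features per frame, an assumed framing
FRAME_BYTES = FRAME_FLOATS * 4     # float32

def serve(model, host: str = "0.0.0.0", port: int = 9000) -> None:
    """Accept one MCU connection and classify each fixed-size feature frame."""
    with socket.create_server((host, port)) as srv:
        conn, _addr = srv.accept()
        with conn:
            buf = b""
            while chunk := conn.recv(4096):
                buf += chunk
                while len(buf) >= FRAME_BYTES:
                    frame, buf = buf[:FRAME_BYTES], buf[FRAME_BYTES:]
                    features = np.frombuffer(frame, dtype=np.float32)
                    result = model.classify(features, T=3)   # deploy mode; assumed API
                    print(result.label, result.basin_confidence)
```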
4. Replay-verify (the audit story)
Imagine you flagged a bearing for replacement. Your maintenance lead asks: "prove the AI actually produced this classification on this specific reading." You hand over:
- the sensor window,
- the certificate JSON,
- the model weights hash.
Any third party with access to the model weights can replay inference and cryptographically verify the certificate. If the sensor data was altered, the weights were swapped, or the classification claim was fabricated — verification FAILS. That's the chain-of-custody primitive no TinyML model today offers.
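A verification sketch under the same assumptions as the certificate sketch in step 2 (it reuses the hypothetical `make_certificate()` helper, and `weights_bytes()` / `classify_bytes()` are assumed accessors): the verifier hashes the weights it was given, replays inference on the supplied sensor window, and rebuilds the hash chain.

```python
import hashlib

def verify_certificate(model, window_bytes: bytes, certificate: dict) -> bool:
    """Return True only if weights, input, and classification all replay consistently."""
    # 1. The weights in hand must hash to the value the certificate binds.
    if hashlib.sha256(model.weights_bytes()).hexdigest() != certificate["weights_hash"]:
        return False
    # 2. Replay inference on the claimed input with those weights.
    result = model.classify_bytes(window_bytes, T=3)
    # 3. Rebuild the hash chain and compare it to the digest in the certificate.
    replayed = make_certificate(window_bytes, certificate["weights_hash"],
                                result.convergence.trajectory, result.label)
    return replayed["digest"] == certificate["digest"]
```

If the sensor window, the weights, or the claimed class differs from what was certified, the rebuilt digest will not match.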
Caveats
- Public checkpoints are synthetic-data proofs of the pipeline. Real-plant deployment needs dataset-specific retraining; a commercial engagement covers this.
- Native on-MCU inference is a partnership-level delivery. Host-inference is the default pilot shape.
- Audit certificate scope. The certificate proves the trajectory happened on the claimed weights for the claimed input. It does not prove the classification was clinically or legally correct. It's a notarisation primitive, not a quality guarantee.
Related
- HLM-Micro model page — architecture details, hardware targets
- Model Zoo — pre-trained checkpoints + availability
- Sovereign edge sensor networks — scaling this pattern across dozens of nodes