Use Case: Sovereign Edge Sensor Network
Deploy dozens or hundreds of HLM-Micro sensor nodes across a facility or critical-infrastructure perimeter. Every node runs on a €5-class microcontroller, every detection is audit-chained, every concept update ships as a small OTA basin patch, and the whole stack runs on EU-sovereign IP without hyperscaler dependency.
Access
Reference firmware, gateway aggregation service, OTA basin-patch pipeline, and partner-specific integration work are delivered through a commercial / partnership engagement. This page describes the deployment pattern and what it buys you operationally.
Who this is for
- Defense / critical-infrastructure program managers needing edge sensing without US cloud dependency
- Systems integrators building perimeter-monitoring / ISR / acoustic-surveillance products
- Public-sector and critical-infrastructure buyers in the EU and allied jurisdictions where supply-chain independence is a procurement requirement
The problem you're solving
Current edge-sensor networks force one of three unacceptable choices:
| Choice | What breaks |
|---|---|
| Cloud-dependent (AWS / Azure IoT + ML backend) | Bandwidth, latency, sovereignty — hostile actor at the cloud layer breaks everything. Not usable in contested RF. |
| Traditional TinyML per-node | No audit trail. No multimodal fusion. OTA update = full reflash. Each node is a single-task siloed classifier. |
| Expensive SBCs per node | Power + cost per node. Serviceability nightmares. Overkill for what's often a single acoustic classifier. |
Contested-environment operation, bandwidth-denied settings, and audit-traceable edge AI are recurring operational requirements for critical-infrastructure and public-sector buyers, and each of the three forced choices above fails at least one of them.
What a deployment looks like
A multi-node sensor network where:
- Each node is an ESP32-S3-class MCU running HLM-Micro (€5-class hardware, sub-watt power envelope, sub-MB flash)
- Each node classifies locally — no cloud, no backhaul required for core operation
- Every classification emits a hash-chain audit trail (~20 bytes/inference) locally
- A gateway aggregates classifications with a regional chain of evidence — Merkle-style hash-linked log that a downstream auditor can verify cryptographically
- Concept updates (e.g. "start detecting this new drone signature") ship as small OTA basin patches — no full firmware reflash
This is the exact pattern we design around for critical-infrastructure and public-sector pilots.
The stack
| Piece | Delivery |
|---|---|
| HLM-Micro (pre-trained bases) | Model Zoo — Early Access |
| INT8 quantisation + basin paging | Partnership-level engagement for native ESP32-S3 runtime |
| Per-node audit digest | Bundled with commercial release |
| Gateway aggregation + regional audit log | Reference service delivered per-deployment |
| OTA basin-patch pipeline | Roadmap v1 — architecturally supported today |
How it works
1. Pick a vertical-specific base model
Pre-trained HLM-Micro checkpoints (Model Zoo) map to common defense-edge verticals:
| Vertical | Base model | Why |
|---|---|---|
| Acoustic perimeter / counter-UAS | Audio keyword / acoustic-anomaly base | Audio classification |
| Industrial monitoring | hlm-micro-anomaly-v0 | Multimodal sensor fusion |
| Activity / occupancy | hlm-micro-har-v0 (real UCI-HAR data) | IMU-based classification |
| RF signature / SIGINT preprocessing | Planned (I/Q sample base) | I/Q sample classification |
For the acoustic-perimeter scenario below, the base is an audio-classification HLM-Micro.
2. Fine-tune on your signatures of interest
If the canonical base doesn't cover your specific signatures (drone models, machinery types, etc.), retrain on your labelled data. Delivery-side tooling takes labelled clips, produces a deployable checkpoint, and signs the weights file. Retraining times are minutes on CPU for small class sets.
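As a rough illustration of the signing step, the sketch below binds a weights blob to a signing key with HMAC-SHA256. The function names and the HMAC scheme are assumptions for illustration only; the actual delivery tooling more likely uses an asymmetric signature so nodes can verify without holding the signing secret.

```python
import hashlib
import hmac

def sign_checkpoint(weights: bytes, key: bytes) -> str:
    """Produce a hex signature binding a weights file to a signing key.

    HMAC-SHA256 stands in for whatever scheme the delivery tooling
    actually uses; this is a sketch, not the shipped mechanism.
    """
    return hmac.new(key, weights, hashlib.sha256).hexdigest()

def verify_checkpoint(weights: bytes, key: bytes, sig: str) -> bool:
    """Reject any weights blob whose signature does not match."""
    return hmac.compare_digest(sign_checkpoint(weights, key), sig)
```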
3. Deploy to nodes with audit-enabled inference
Each node runs the HLM-Micro firmware with your fine-tuned weights. The operational loop on a node is:
- Read a sensor window (audio / IMU / ADC).
- Run local classification at deploy-mode T.
- Compute a ~20-byte audit digest for that inference.
- If confidence exceeds the alert threshold, emit (class, confidence, digest) to the gateway.
No cloud involved in the local decision path. The firmware and the C runtime for native on-MCU inference are delivered under the partnership engagement.
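The per-inference audit digest in the loop above can be sketched as a truncated SHA-256 over the sensor window, the deployed weights hash, and the inference output. The exact construction and field widths here are assumptions for illustration; the shipped firmware defines the real format.

```python
import hashlib

def audit_digest(window: bytes, weights_hash: bytes,
                 class_id: int, confidence: float) -> bytes:
    """~20-byte digest binding one inference to its input, model, and output.

    Truncated SHA-256 with 2-byte class and fixed-point confidence
    fields is an illustrative choice, not the deployed encoding.
    """
    h = hashlib.sha256()
    h.update(window)                                    # raw sensor window
    h.update(weights_hash)                              # hash of deployed weights
    h.update(class_id.to_bytes(2, "big"))               # predicted class
    h.update(int(confidence * 65535).to_bytes(2, "big"))  # fixed-point confidence
    return h.digest()[:20]                              # truncate to ~20 bytes
```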
4. Gateway aggregation with a regional audit log
A gateway device (laptop / RPi / industrial mini-PC) collects events from all nodes and maintains an append-only regional audit log. The log is a Merkle-style hash chain: each new event's regional hash includes the previous regional hash. Any single tampered event breaks the chain forward of that point and is cryptographically detectable on replay.
Operationally this means:
- A downstream regulator / commander can verify integrity of the entire regional log with a single hash check.
- An internal review can identify where in the timeline any tamper occurred.
- Claims of "this detection was made at this node at this timestamp" become binding evidence rather than hand-waving.
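The regional log described above can be sketched as follows, assuming each event's regional hash is SHA-256 over the previous regional hash concatenated with the event bytes (the concrete encoding in the gateway service may differ):

```python
import hashlib

GENESIS = b"\x00" * 32  # assumed fixed genesis value for an empty log

def chain_append(prev_hash: bytes, event: bytes) -> bytes:
    """Regional hash for a new event: H(prev_hash || event)."""
    return hashlib.sha256(prev_hash + event).digest()

def verify_log(entries):
    """Replay a list of (event_bytes, recorded_hash) pairs.

    Returns the index of the first entry whose recorded hash does not
    match the replayed chain, or None if the whole log is intact.
    """
    h = GENESIS
    for i, (event, recorded) in enumerate(entries):
        h = chain_append(h, event)
        if h != recorded:
            return i
    return None
```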
5. Replay-verify an incident
A claimed detection is challenged ("did node 12 at 03:14Z actually detect a drone?"). The review proceeds:
- Retrieve the original sensor window (if preserved — storage policy decision).
- Retrieve the node-level audit digest from the regional log.
- Retrieve the deployed model weights hash.
- Re-run inference on any device with the same weights.
- Verify the digest matches → detection is cryptographically confirmed (or not).
This is the same verification primitive as industrial edge and medical imaging, applied at sensor-network scale.
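The replay check can be sketched as below, assuming the same illustrative truncated-SHA-256 digest construction on both node and verifier (the deployed firmware defines the real one). `classify` stands in for re-running inference with the same weights.

```python
import hashlib

def inference_digest(window: bytes, weights_hash: bytes,
                     class_id: int, confidence: float) -> bytes:
    """Illustrative truncated SHA-256 over window, weights hash, and output."""
    h = hashlib.sha256()
    h.update(window)
    h.update(weights_hash)
    h.update(class_id.to_bytes(2, "big"))
    h.update(int(confidence * 65535).to_bytes(2, "big"))
    return h.digest()[:20]

def replay_verify(window: bytes, weights_hash: bytes,
                  classify, recorded_digest: bytes) -> bool:
    """Re-run inference on the preserved window, recompute the digest,
    and compare against the entry pulled from the regional log."""
    class_id, confidence = classify(window)
    return inference_digest(window, weights_hash,
                            class_id, confidence) == recorded_digest
```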
6. OTA concept updates as basin patches
A new drone model is observed in-theatre. Rather than full firmware reflash to every node:
- Add the new class to your labelled dataset.
- Retrain HLM-Micro with the new class.
- Diff the resulting weights against the deployed version — identify the specific basins that changed.
- Export those basins as a small binary patch.
- Push the patch to nodes via standard OTA channels.
- Nodes reload the affected basins live — no reboot, no downtime.
The patch size is small enough for LoRa-class long-range radio. This is operationally different from any commodity edge-AI stack today — and it's the main reason the pattern fits contested-environment deployment profiles.
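The diff-and-patch steps above can be sketched by treating a checkpoint as a mapping from basin name to weight bytes (an assumed representation for illustration; the productised tooling would also sign the patch and handle basin removals):

```python
def diff_basins(old: dict, new: dict) -> dict:
    """Keep only the basins whose weight bytes changed or are new."""
    return {name: blob for name, blob in new.items() if old.get(name) != blob}

def apply_patch(deployed: dict, patch: dict) -> dict:
    """Node-side: overwrite only the affected basins, leave the rest intact."""
    merged = dict(deployed)
    merged.update(patch)
    return merged
```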
Caveats & what to read next
- On-MCU native inference is partnership-level today. The default pilot shape is host-inference (laptop / gateway runs the model; nodes stream features). Native ESP32-S3 runtime ships under partnership engagement.
- Certificate mechanism works today in host-inference. Moving it to on-MCU native is an engineering port — straightforward, scoped under the partnership.
- OTA basin-patch pipeline is architecturally supported but not shipped. The architecture supports it (basins are addressable); the productised patch-build + signed-OTA tooling is on the roadmap.
- Sovereignty disclosure: Qriton is an EU-incorporated company (Romania). All IP is EU-controlled. Training uses EuroHPC Leonardo BOOSTER (Italian national compute). No US hyperscaler dependency at any layer of the stack.
Related
- HLM-Micro model page
- HLM-Nano model page — smaller nodes aggregating through a Micro gateway
- Model Zoo — pre-trained base models
- Industrial edge deployment — single-node pattern; this page generalises it to many nodes
- EU AI Act compliance — the civilian-sector analogue of this audit pattern