Use Case: Wearable Health Monitoring

Run HLM-Micro on a wrist-worn or body-mounted device to classify human activities and detect basic cardiac anomalies, with a per-inference audit trail that supports regulator-facing explainability for future clinical deployment.

Access

Pre-trained HLM-Micro checkpoints, wearable-integration reference code, and clinical-pathway tooling are available through Early Access or a commercial engagement.

Who this is for

  • Consumer wearable PMs adding multi-activity / health-monitoring features to watches or fitness bands
  • Health-tech engineers integrating lightweight ML into medical-grade hardware (pulse oximeters, ECG patches, fall-detection pendants)
  • Research groups doing longitudinal wearable studies who want per-subject behaviour auditing

The problem you're solving

Consumer wearables today typically host several small single-task ML models — one for step-counting, another for heart rate, another for sleep. Adding audit or explainability means one of three compromises:

  1. Punt to cloud (wearable sends data, cloud answers) — kills battery life and privacy.
  2. Ship larger models (TinyLlama / Phi-mini on more expensive SoCs) — doesn't fit a typical wearable BOM.
  3. Accept no audit trail — fine for consumer fitness, blocks any medical-adjacent claims.

HLM-Micro removes compromise (3) without incurring (1) or (2): a single sub-1 MB model handles multiple tasks (activity, basic cardiac rhythm, fall-like events) and emits a per-inference audit chain on-device.

What a deployment looks like

A wearable or phone-paired device that:

  1. Classifies 6 activity states via 6-axis IMU (walking / stairs up / stairs down / sitting / standing / laying) — real-data HAR base, test accuracy in the published TinyML baseline range
  2. Classifies 4 cardiac states via 1-lead ECG-like signal (normal / brady / tachy / arrhythmia) — synthetic-only today, real-data v1 planned
  3. Emits an audit certificate for every classification (~20 B digest; suitable for constrained radio links)
  4. Runs the same checkpoint at T=1 for 24/7 low-power streaming, T=5 when user taps "explain this event" — no retraining, no second model
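As a concrete sketch of item 3, a ~20-byte per-inference certificate can be produced by chaining a short hash over each inference's inputs and output. Everything here is illustrative: the field layout, the struct packing, and the choice of BLAKE2s with a 20-byte digest are assumptions, not the shipped certificate format.

```python
import hashlib
import struct

def certificate_digest(prev_digest: bytes, weights_hash: bytes,
                       window: bytes, label: int, confidence: float,
                       temperature: int) -> bytes:
    """Chain one inference into the per-device audit chain.

    Hypothetical layout: each certificate commits to the previous
    certificate's digest, the model weights hash, the raw sensor
    window, and the inference output. A 20-byte BLAKE2s digest
    matches the ~20 B certificate size mentioned above.
    """
    h = hashlib.blake2s(digest_size=20)
    h.update(prev_digest)                       # links the chain
    h.update(weights_hash)                      # which weights ran
    h.update(window)                            # what the model saw
    h.update(struct.pack("<BfB", label, confidence, temperature))
    return h.digest()
```

Because each digest commits to the previous one, truncating or editing any past entry invalidates every later certificate in the chain.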

The stack

Each piece and how it's delivered:

  • hlm-micro-har-v0 (real UCI-HAR data): Model Zoo — Early Access
  • hlm-micro-ecg-v0: Model Zoo — synthetic only today
  • Per-inference hash chain: bundled with commercial release
  • Wearable platform: your existing — phone, watch, dedicated MCU, etc.
  • IMU sensor: any 6-axis (MPU6050 / BMI270 / etc.) — reference firmware under partnership
  • ECG front-end: hardware-specific (AD8232-class analog or specialised ECG AFE)

How it works

1. Classify activity + cardiac state

A single HLM-Micro checkpoint per modality runs on the wearable or its paired host. The model consumes fixed-length sensor windows and returns a class label, a confidence score, and convergence metadata. Inference cost stays well under 10 ms per window on modest MCU-class hardware.
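The fixed-length window contract can be sketched as a stream-to-window adapter. The 128-sample window with 50% overlap matches the UCI-HAR convention; the window geometry of the shipped checkpoints may differ, so treat these constants as assumptions.

```python
from collections import deque

WINDOW = 128   # samples per inference window (UCI-HAR convention)
HOP = 64       # 50% overlap between consecutive windows

def windows(samples, window=WINDOW, hop=HOP):
    """Yield fixed-length, overlapping windows from a sample stream.

    Each yielded list is ready to hand to the model as one
    inference window.
    """
    buf = deque(maxlen=window)
    since_last = 0
    for s in samples:
        buf.append(s)
        since_last += 1
        if len(buf) == window and since_last >= hop:
            yield list(buf)
            since_last = 0
```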

2. The T-dial battery story

Two deployment modes on the same weights — no retraining, selected at inference time:

  • Steady-state (T=1) — 24/7 low-power continuous monitoring. Coarser accuracy, but good enough for "am I still walking?" Battery lasts days.
  • Anomaly escalation (T=3 → T=5) — when T=1 confidence drops or an anomaly scores high, the same weights re-run at higher T for 30 s to tighten the classification. Full trajectory recorded.
  • User-requested (T=5) — user taps "show me" → full-quality inference + audit certificate rendered in the app.

This operational shape is out of reach for a fixed-compute TinyML model on a typical wearable BOM.
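The three modes amount to a small temperature-selection policy running on the same weights. A minimal sketch, assuming illustrative thresholds (the confidence floor and anomaly gate are not shipped defaults):

```python
T_STEADY, T_ESCALATE, T_FULL = 1, 3, 5
CONF_FLOOR = 0.80      # hypothetical: below this, T=1 output is too coarse
ANOMALY_GATE = 0.5     # hypothetical: above this, escalate

def next_temperature(current_t: int, confidence: float,
                     anomaly_score: float, user_tap: bool) -> int:
    """Pick the T setting for the next inference on the same weights."""
    if user_tap:                                  # user tapped "show me"
        return T_FULL
    if current_t == T_STEADY and (confidence < CONF_FLOOR
                                  or anomaly_score > ANOMALY_GATE):
        return T_ESCALATE                         # tighten the classification
    if current_t == T_ESCALATE and anomaly_score > ANOMALY_GATE:
        return T_FULL                             # anomaly persisted
    return T_STEADY                               # back to low-power streaming
```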

3. Pairing activity recognition with ECG anomaly

On a wearable with both IMU and ECG sensors, two HLM-Micro models run concurrently — same architecture, different weights, different modalities. Combined inference cost remains sub-10 ms.

Cross-signal event rules can fire on the combined output — e.g. tachycardia while laying can trigger a quality-mode re-run and escalate for user attention. The audit certificate binds the sensor window, weights, and output of each inference, so the escalation carries verifiable provenance.
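A cross-signal rule of this kind is a few lines of host-side logic. The labels, confidence gate, and action names below are illustrative, not a shipped rule set:

```python
def cross_signal_event(activity: str, cardiac: str, cardiac_conf: float):
    """Fire an escalation when the cardiac state contradicts the
    activity context, e.g. tachycardia while laying.

    Returns an action dict when a rule fires, else None.
    """
    # Hypothetical rule: high heart rate at rest is worth a closer look.
    if cardiac == "tachy" and activity == "laying" and cardiac_conf > 0.7:
        return {"action": "rerun_quality_mode", "notify_user": True}
    return None
```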

4. Regulator / clinician audit trail

When a cardiac event is flagged and a physician reviews the wearable log:

  1. Wearable stores the sensor window + certificate digest at the time of the alert.
  2. Physician app imports the alert + cert + weights hash.
  3. Physician (or their tooling) runs the replay verification — if it returns PASS, the alert is cryptographically tied to the model's actual inference, not to a silently corrupted or tampered-with reading.

This becomes the foundation for a future MDR/FDA submission pathway — not because the certificate makes the model medically valid, but because it makes the AI component's behaviour replayably verifiable in post-incident review.
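A minimal sketch of the replay check, assuming the certificate is a 20-byte BLAKE2s digest over the previous digest, the weights hash, the sensor window, and the serialised output. The actual certificate format ships with the commercial release, so this layout is hypothetical:

```python
import hashlib

def verify_alert(stored_digest: bytes, prev_digest: bytes,
                 weights_hash: bytes, window: bytes,
                 output_bytes: bytes) -> bool:
    """Recompute the certificate digest from the imported artefacts.

    True (PASS) means the alert is tied to exactly this window, these
    weights, and this output; changing any byte changes the digest.
    """
    h = hashlib.blake2s(digest_size=20)
    for part in (prev_digest, weights_hash, window, output_bytes):
        h.update(part)
    return h.digest() == stored_digest
```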

  • Not a medical device today. hlm-micro-ecg-v0 is synthetic-data only; clinical claims are out of scope. Consumer wellness messaging (steps, activity, resting HR) is reasonable; cardiac diagnosis is not.
  • Real ECG data is the next milestone. MIT-BIH Arrhythmia + PhysioNet datasets are the obvious upgrade — real-data retraining is v1 roadmap.
  • Per-user calibration matters. Production wearables typically calibrate per user on first wear; this isn't built into the current zoo models.
  • Hardware integration is your work. We ship the model + audit chain; you supply the wearable, the sensor front-end, and the app.