
Use Case: Medical Imaging with Provenance

How to run HLM-Spatial Medical3D for organ segmentation or anomaly detection with a cryptographically replayable audit trail for every study, meeting clinical-AI regulators' expectations and supporting hospital liability workflows.

Access

Medical3D checkpoints, integration tooling, and audit-library delivery are covered under a commercial engagement. This page describes the mechanism.

Who this is for

  • Clinical-AI product leads building radiology or pathology tools for EU/UK hospitals
  • Regulatory affairs teams navigating MDR / IVDR / FDA 510(k) with AI components
  • Radiologists who want to verify "the AI actually ran on this study" before accepting its flags in reporting

The problem you're solving

Regulator-facing deployment of medical AI has three recurring failures:

  1. Surrogate explanations — SHAP / Grad-CAM / saliency maps are computed on an approximation of the production model. When a regulator asks "is this saliency map faithful to the actual inference?", the honest answer is "probably, but not provably."
  2. Audit fragility — hospitals often cannot re-run the exact model state they used six months ago. Model drift + silent deployment changes break reproducibility.
  3. Liability overhang — when a missed finding lands in court, showing "AI was used" is easy; showing "AI was used correctly on this specific study with this specific model version" is currently hard.

What the integration delivers

A workflow where every AI finding (organ segmentation mask, anomaly flag, etc.) ships with a hash-chain audit certificate that a third party can replay-verify.

  1. HLM-Spatial Medical3D produces the segmentation mask + finding
  2. An audit certificate is generated binding (patient-study input hash, model weights hash, per-layer trajectory, output mask); a sketch of this binding follows the list
  3. The certificate is stored alongside the DICOM report — DICOM private tag or PACS sidecar file
  4. On audit or appeal, an auditor with the same weights file can re-run inference and verify the certificate cryptographically
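
A minimal sketch of the binding in step 2, assuming a SHA-256 hash chain. The field names (input_hash, weights_hash, basin_trajectory, final_logits) mirror the verification checks listed later on this page; everything else (the function name, the layer-state iterable) is hypothetical, not the shipped audit library:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_certificate(volume_bytes: bytes,
                      weights_path: str,
                      layer_states: list[bytes],
                      output_mask_bytes: bytes) -> dict:
    """Hypothetical sketch of the hash-chain binding described above."""
    with open(weights_path, "rb") as f:
        weights_hash = sha256_hex(f.read())

    # Chain the per-layer states so no intermediate step can be swapped
    # out without changing the final trajectory digest.
    chain = b""
    for state in layer_states:
        chain = hashlib.sha256(chain + state).digest()

    return {
        "input_hash": sha256_hex(volume_bytes),         # patient-study input
        "weights_hash": weights_hash,                   # model weights file
        "basin_trajectory": chain.hex(),                # per-layer trajectory
        "final_logits": sha256_hex(output_mask_bytes),  # output mask
    }
```

Serialized compactly (e.g. json.dumps with no whitespace), a record like this comes to a few hundred bytes.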

The stack

Piece | Delivery
--- | ---
HLM-Spatial (Medical3D checkpoints) | HLM-Spatial — commercial engagement
Audit certificate library | Bundled with commercial release (same library as language-model use cases)
Your PACS / RIS / EHR integration | Unchanged — certificates attach as DICOM private-tag or sidecar file
Replay-verification client | Same library; runs anywhere the weights hash is accepted

How it works

1. Inference produces a mask + a certificate

The Medical3D model consumes a normalised volumetric tensor (CT / MR) and returns:

  • segmentation mask / anomaly map,
  • finding summary,
  • a certificate binding input, weights, trajectory, and output.

The certificate is ~650 bytes of JSON.
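
For concreteness, here is what a serialized certificate could look like; the schema tag, the study_uid field, and every value below are illustrative dummies, not the production format:

```python
import json

# Illustrative payload only; every value is a dummy.
cert = {
    "schema": "qriton-cert/1",                   # hypothetical schema tag
    "study_uid": "1.2.840.113619.2.55.3.1234",   # dummy StudyInstanceUID
    "input_hash": "a3f1" + "0" * 60,
    "weights_hash": "9c2e" + "0" * 60,
    "basin_trajectory": "77d0" + "0" * 60,
    "final_logits": "5b8a" + "0" * 60,
}

compact = json.dumps(cert, separators=(",", ":"))
# A few hundred bytes: the same order of magnitude as the ~650 B quoted above.
print(len(compact.encode("utf-8")))
```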

2. Attach to the radiology report

Two canonical storage patterns:

  • DICOM private tag — certificate embedded in a vendor-defined DICOM tag. Invisible to non-HLM-aware viewers; accessible to compliance tooling.
  • PACS sidecar file — human-readable JSON next to the study ({study_uid}.qriton-cert.json). Easy for compliance ingestion, straightforward backup.

Both patterns preserve the DICOM primary record unchanged.
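
A sketch of both patterns using pydicom. The private group 0x0099, the creator string "QRITON", and the element offset 0x01 are placeholder choices a real deployment would standardise, not a published spec:

```python
from pathlib import Path

from pydicom import dcmread

def attach_certificate(dicom_path: str, cert_json: str) -> None:
    ds = dcmread(dicom_path)

    # Pattern 1: vendor-defined private tag. Private groups are odd-numbered;
    # 0x0099 / "QRITON" / offset 0x01 are placeholders.
    block = ds.private_block(0x0099, "QRITON", create=True)
    block.add_new(0x01, "UT", cert_json)
    ds.save_as(dicom_path)

    # Pattern 2: sidecar JSON next to the study, named by StudyInstanceUID.
    sidecar = Path(dicom_path).with_name(f"{ds.StudyInstanceUID}.qriton-cert.json")
    sidecar.write_text(cert_json)
```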

3. Replay-verify (auditor / radiologist / court of law)

A malpractice review asks "did the model actually produce the finding the system reported?" The auditor needs three things:

  • the study volume (re-derived from PACS),
  • the certificate (from DICOM tag or sidecar),
  • the signed model-weights file (distributed separately with a verified hash).

They do not need access to the hospital's production environment. They run the replay-verification routine against those three inputs. Four outcomes are possible:

Outcome | Meaning
--- | ---
All checks PASS | The report is cryptographically tied to the computation.
input_hash FAILS | The volume was modified between inference and archival.
weights_hash FAILS | The hospital claimed model vX but the actual inference was run on a different weights file.
basin_trajectory / final_logits FAILS | The trajectory or output recorded in the certificate does not match what the model actually produces — fabrication or silent model change.

Each of these is a distinct factual claim that an auditor can investigate separately. This is stronger than "the AI said so" hand-waving.
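
A sketch of the replay routine under the same assumptions as the earlier certificate sketch; run_inference stands in for the commercial replay client, which re-runs the model and returns the per-layer states and the output. The return strings match the outcome table above:

```python
import hashlib
import json
from typing import Callable

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def replay_verify(volume_bytes: bytes,
                  cert_json: str,
                  weights_path: str,
                  run_inference: Callable[[bytes, str], tuple[list[bytes], bytes]]) -> str:
    """Check each binding in order and report the first failure."""
    cert = json.loads(cert_json)

    # 1. Was the archived volume the one the model saw?
    if sha256_hex(volume_bytes) != cert["input_hash"]:
        return "input_hash FAILS"

    # 2. Is the signed weights file the one the hospital claims?
    with open(weights_path, "rb") as f:
        if sha256_hex(f.read()) != cert["weights_hash"]:
            return "weights_hash FAILS"

    # 3. Re-run inference; recompute the trajectory chain and output hash.
    layer_states, output = run_inference(volume_bytes, weights_path)
    chain = b""
    for state in layer_states:
        chain = hashlib.sha256(chain + state).digest()
    if chain.hex() != cert["basin_trajectory"]:
        return "basin_trajectory FAILS"
    if sha256_hex(output) != cert["final_logits"]:
        return "final_logits FAILS"

    return "All checks PASS"
```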

4. Integrate with hospital liability workflows

When an incident is reviewed — missed finding, disputed interpretation, etc. — the legal sequence becomes:

  • Hospital counsel produces the DICOM + certificate sidecar
  • Plaintiff's expert runs the verification routine independently with the same signed weights file
  • Either:
    • Certificate passes → factual record of what the AI produced is settled; argument moves to whether the finding was clinically acted on correctly
    • Certificate fails → something broke the chain; discovery focuses on which step

Without the certificate, the hospital's only defense is "here's our system log saying AI ran." That's evidence, but it's not proof.

Caveats

  • The certificate does not make a clinical diagnosis correct. A model can faithfully produce a wrong answer; the certificate just proves the wrong answer was the answer the model actually produced. Clinical validity is a separate question.
  • Model-version distribution discipline is essential. The certificate binds a weights hash — if your release process doesn't produce a reliable hash-to-weights-file mapping, certificates are unverifiable.
  • HLM-Spatial Medical3D accuracy is per-dataset. Published Qriton results on synthetic benchmarks (segmentation / anomaly) are in the high 90s (percent). Real-hospital deployment needs dataset-specific fine-tuning under a commercial engagement.
  • Not a medical device on its own. The certificate is a mechanism; regulatory clearance of the model is a separate workstream under MDR / FDA, and depends on clinical validation on real data.