
Qriton HLM

Program neural networks by shaping energy landscapes

"Energy minima as a programming language — in a completely new fashion." — John J. Hopfield, March 2026

Energy landscape with attractor basins

One Architecture. Every Modality.

Transformers need a different architecture for every problem — vision transformers, audio transformers, multimodal bridges. HLM uses a single architecture for everything: polynomial Hopfield layers.

The same layer that processes language also processes 3D point clouds, medical volumes, and audio waveforms. No modality-specific encoders. No projection bridges. One energy landscape, shared across all inputs.
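For intuition about what "one energy landscape" means, here is a minimal classical Hopfield network in NumPy. This is the textbook model that polynomial Hopfield layers generalize, not Qriton's implementation: patterns are stored as energy minima, and a corrupted input descends the surface back into the nearest basin.

```python
import numpy as np

# Minimal classical Hopfield network (illustrative only; HLM's
# polynomial layers generalize this textbook model).

def store(patterns):
    """Hebbian rule: stored patterns become minima of the energy."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)        # no self-connections
    return W

def energy(W, x):
    """Hopfield energy E(x) = -1/2 x^T W x."""
    return -0.5 * x @ W @ x

def recall(W, x, steps=10):
    """Descend the energy surface with synchronous sign updates."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

patterns = np.array([[ 1, 1, -1, -1,  1, -1],
                     [-1, 1,  1, -1, -1,  1]], dtype=float)
W = store(patterns)

noisy = patterns[0].copy()
noisy[0] *= -1                      # corrupt one element
recovered = recall(W, noisy)        # falls back into the original basin
```

One synchronous update is enough here, and the recovered state sits at a strictly lower energy than the corrupted input.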

Transformer

  • Knowledge distributed across billions of parameters
  • No discrete memory — can't point to where a fact lives
  • Modalities require separate encoders + alignment training
  • Edit one parameter, break everything

HLM

  • Knowledge stored as discrete attractor basins
  • Every concept has a location in the energy landscape
  • All modalities converge to the same basins naturally
  • Surgical edits to individual basins, nothing else touched

Learn more about HLM architecture →

Program. Don't Retrain.

Every AI framework has one way to change model behavior: training. Qriton HLM adds a second — surgery.

hlm:model> capture 5 polite Thank you so much for your help
  Captured L5 → concept 'polite' (1 sample)

hlm:model> inject-concept 5 polite 0.1
  Before: 200 basins | After: 201 basins (+1)
  >> Concept successfully injected!

hlm:model> apply 5
hlm:model> generate Tell me about the weather
  I'd be happy to share! The weather today is...

32 operations. Observe the landscape, modify basins, capture semantic concepts, verify the results. All in milliseconds on a laptop.
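The inject-concept step above can be sketched in the same classical Hopfield setting (an analogy, not the HLM mechanism): adding an attractor is a rank-1 Hebbian update to the weight matrix, so a new basin appears with no gradient training at all.

```python
import numpy as np

# Sketch of "attractor injection" in a classical Hopfield network
# (an analogy for inject-concept, not Qriton's actual mechanism).

def hebbian(patterns):
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def is_fixed_point(W, x):
    """A pattern sits at a basin bottom if one update leaves it unchanged."""
    return np.array_equal(np.sign(W @ x), x)

rng = np.random.default_rng(0)
n = 128
existing = rng.choice([-1.0, 1.0], size=(3, n))   # already-stored basins
W = hebbian(existing)

concept = rng.choice([-1.0, 1.0], size=n)         # new "concept" pattern

# Surgical injection: a single rank-1 update, no retraining
W_after = W + np.outer(concept, concept) / n
np.fill_diagonal(W_after, 0.0)
```

In this toy setting the previously stored patterns remain fixed points after the update, echoing the "nothing else touched" claim above.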

Explore the full language →

Basins, Not Predictions

A transformer knows that "the cat sat on the ___" is likely followed by "mat." It learned a statistical correlation. It has no representation of cats or mats that exists independent of the token sequence.

An HLM stores attractors. The basin for "cat" is a stable state that the network converges to from many different inputs — text, 3D scans, audio. The basin is the concept. It exists whether or not a specific input is present.

Show HLM a picture of a cat and the word "cat" — both converge to the same basin. Not because of contrastive training, but because the energy landscape has a natural attractor that both modalities fall into.

This is what building a world model looks like: stable, addressable, composable concepts grounded across modalities.
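A small sketch of that convergence claim, again using classical Hopfield dynamics as a stand-in for HLM's polynomial layers: two differently corrupted views of one stored pattern, playing the role of two modalities, settle into the identical basin.

```python
import numpy as np

# Two corrupted "views" of one stored pattern converge to the same
# basin (a classical-Hopfield stand-in for the multimodal claim).

def hebbian(patterns):
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def settle(W, x, steps=20):
    """Iterate sign updates until the state rests in a basin."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

rng = np.random.default_rng(1)
n = 128
cat = rng.choice([-1.0, 1.0], size=n)            # the "cat" basin
others = rng.choice([-1.0, 1.0], size=(2, n))    # unrelated basins
W = hebbian(np.vstack([cat[None, :], others]))

def corrupt(x, flips, seed):
    """Flip `flips` random elements to make a distinct noisy view."""
    r = np.random.default_rng(seed)
    y = x.copy()
    y[r.choice(len(x), size=flips, replace=False)] *= -1
    return y

view_a = corrupt(cat, flips=10, seed=2)   # stand-in for a text input
view_b = corrupt(cat, flips=10, seed=3)   # stand-in for an image input

basin_a = settle(W, view_a)
basin_b = settle(W, view_b)
```

Both views end at the same stable state: the basin itself, independent of which input fell into it.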

Read about world models →

Training at Scale

HLM3
Language

Text understanding and generation with surgically editable knowledge.

HLM-Spatial
3D Perception

LIDAR, Medical3D, Industrial3D — point clouds and volumetric data.

HLM-Audio
Speech

Speech-to-text and text-to-speech with programmable voice characteristics.

All models share the same polynomial Hopfield architecture. All are programmable with Energy Language. Currently training at scale — join the waitlist for early access.

Request early access →

What You Can Do

Observe

Survey basins. Measure energy. Probe what tokens a basin activates. Map the full landscape.

Modify

Inject new attractors. Remove unwanted ones. Move, strengthen, weaken — reshape the energy surface.

Concepts

Capture what "polite" looks like. Blend it with "technical." Export the concept. Import it into another model.

Verify

Generate text. Benchmark perplexity. Diff the weight matrix. Guard against over-modification. Full audit trail.

Basin surgery operations

Full operations reference →

Start in 30 Seconds

```bash
pip install qriton-hlm
```

```python
from qriton_hlm import BasinSurgeon

surgeon = BasinSurgeon.from_checkpoint("model.pt")
surgeon.survey(layer=0)
surgeon.inject(layer=0, seed=42, strength=0.1)
surgeon.verify(layer=0, seed=42)
```

CLI, Python API, Jupyter notebooks — pick the interface that fits your workflow.

Ready to program your neural networks?

HLM models are training at scale. For early access, custom training on proprietary data, or pilot projects — get in touch.

We're also open to research collaborations with universities and institutions working on energy-based models, associative memory, interpretability, or multimodal architectures. If you're exploring related directions, we'd like to hear from you.