
# Custom Persona

Blend multiple concepts to create a custom model personality.

## Capture Base Concepts

```python
surgeon = BasinSurgeon.from_checkpoint("hlm3-model.pt", device="cuda")

# Polite
for text in [
    "Thank you so much for your help",
    "I really appreciate your patience",
    "That is very kind of you",
]:
    surgeon.capture(layer=5, text=text, concept_name="polite")

# Technical
for text in [
    "The algorithm converges in O(n log n)",
    "The gradient is computed via backpropagation",
    "This uses a polynomial activation function",
]:
    surgeon.capture(layer=5, text=text, concept_name="technical")

# Concise
for text in ["Yes.", "Done.", "Correct."]:
    surgeon.capture(layer=5, text=text, concept_name="concise")
```

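Conceptually, each capture records the hidden state the model produces at the chosen layer for one prompt. A common way to turn several such recordings into a single concept direction is to average them and normalize to unit length. The sketch below illustrates that idea with NumPy; the `concept_vector` helper and the hidden size of 768 are assumptions for illustration, not part of the BasinSurgeon API.

```python
import numpy as np

def concept_vector(hidden_states):
    """Collapse per-prompt hidden states into one unit-length direction.

    Hypothetical sketch: mean over prompts, then normalize. The actual
    reduction BasinSurgeon uses may differ.
    """
    v = np.mean(hidden_states, axis=0)   # average over prompts
    return v / np.linalg.norm(v)         # scale to unit length

# Three prompts' layer-5 states, hidden size 768 (placeholder values)
polite_states = np.random.randn(3, 768)
polite = concept_vector(polite_states)
print(polite.shape)  # (768,)
```

Averaging over a few varied prompts, as the capture loops above do, helps the direction reflect the shared concept rather than any single sentence.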
## Blend

```python
# 70% polite + 30% technical
surgeon.blend("polite", "technical", "polite_technical", ratio=0.7)

# 80% polite_technical + 20% concise
surgeon.blend("polite_technical", "concise", "professional", ratio=0.8)
```

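It is worth tracking how the nested ratios compound. Assuming `blend` forms a convex combination (`ratio` times the first concept plus `1 - ratio` times the second), the final `professional` concept works out to 56% polite, 24% technical, and 20% concise. A minimal sketch of that arithmetic:

```python
# Hypothetical sketch of the nested blend arithmetic, assuming
# new = ratio * first + (1 - ratio) * second.
def blend(a, b, ratio):
    return {k: ratio * a.get(k, 0.0) + (1 - ratio) * b.get(k, 0.0)
            for k in set(a) | set(b)}

polite_technical = blend({"polite": 1.0}, {"technical": 1.0}, ratio=0.7)
professional = blend(polite_technical, {"concise": 1.0}, ratio=0.8)
# Effective mix: 0.8*0.7 = 0.56 polite, 0.8*0.3 = 0.24 technical, 0.2 concise
print(professional)
```

Because each blend rescales everything before it, put the concepts you care most about in the later, higher-ratio blends.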
## Inject and Test

```python
surgeon.inject_concept(layer=5, concept_name="professional", strength=0.1)
surgeon.apply(layer=5)

# Verify
result = surgeon.benchmark()
print(f"Perplexity: {result['perplexity']:.2f}")

# Test generation
surgeon.generate("Explain how neural networks learn")
surgeon.generate("What is the weather like today")
```

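Intuitively, injection at a small `strength` like 0.1 nudges the model rather than overwriting it. A common mechanism for this kind of steering is additive: add the concept direction, scaled by the strength, to every hidden state at the target layer. The sketch below shows that mechanism with NumPy; the `inject` helper and the tiny shapes are illustrative assumptions, not the library's internals.

```python
import numpy as np

def inject(hidden, concept, strength=0.1):
    """Hypothetical additive steering: shift each token's hidden state
    by `strength` times the (unit-norm) concept direction."""
    return hidden + strength * concept

hidden = np.zeros((4, 8))            # 4 tokens, hidden size 8 (toy values)
concept = np.ones(8) / np.sqrt(8)    # unit-norm concept direction
steered = inject(hidden, concept, strength=0.1)
```

The perplexity check after `apply` matters precisely because of this trade-off: larger strengths steer harder but can degrade general language modeling.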
## As an HLM Script

```bash
# blend_persona.hlm
load model.pt

capture 5 polite Thank you so much
capture 5 polite I really appreciate it
capture 5 technical The algorithm converges in O(n log n)
capture 5 technical The gradient descent converges
capture 5 concise Yes.
capture 5 concise Done.

blend polite technical polite_technical 0.7
blend polite_technical concise professional 0.8

inject-concept 5 professional 0.1
apply 5
generate Explain how neural networks learn
restore 5
```

Run it with `qriton-hlm --script blend_persona.hlm`.