# attention

Dynamic resource allocation for symbol streams.
Attention is the mechanism by which a system with finite resources decides what to process. It's not a single function but an emergent property of cascading filters.
This library models attention as:
- Sequences - Bounded buffers of symbols that force prioritization
- Processes - Subscribers that match patterns and propagate associations
- Salience - Value functions that determine what persists
From first principles, consciousness can be modeled as:

> "A very short sequence of symbols as the starting point... and a value assigning function as the main method. Every time an input stream produces a pattern, it is assigned a set of features and, if deemed relevant, it is appended to this sequence."
Attention emerges from:
- Multiple sequences with different capacities
- Processes that hash symbols and look up responses
- Reinforcement of repeated/convergent signals
- Graduation of high-value items to longer-term storage
## Installation

```bash
pip install -r requirements.txt
```

## Quick Start

```python
from attention import Sequence, Symbol, RepetitionProcess
# Create sequences with different capacities
conscious = Sequence("conscious", capacity=7, min_value=0.5)
working = Sequence("working", capacity=20, min_value=0.3)
# Process that detects repeated symbols
repeater = RepetitionProcess(
    "repeat_detector",
    inputs=[working],
    outputs=[conscious],
    min_repetitions=2,
    boost_factor=1.5,
)
repeater.start()
# Publish symbols
working.publish(Symbol(data="hello", value=0.4))
working.publish(Symbol(data="world", value=0.4))
working.publish(Symbol(data="hello", value=0.4)) # Repeat! Boosted to consciousThe atomic unit of attention. Carries data, a value (salience), and metadata.
## Symbol

The atomic unit of attention. Carries data, a value (salience), and metadata.

```python
symbol = Symbol(
    data="the quick brown fox",
    value=0.7,
    metadata={"source": "user_input"},
)
```

## Sequence

A bounded buffer with pub/sub semantics. When full, low-value items are evicted.

```python
from attention import EvictionPolicy  # assumed to be exported alongside Sequence

seq = Sequence(
    name="working_memory",
    capacity=20,
    min_value=0.3,
    eviction=EvictionPolicy.DROP_LOWEST,
)
# Subscribe to new symbols
seq.subscribe("logger", lambda s: print(s.data))
# Publish
seq.publish(Symbol(data="important", value=0.9))
```
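A quick sketch of eviction under `DROP_LOWEST`, assuming the check runs on `publish` when the buffer is full:

```python
tiny = Sequence("scratch", capacity=2, min_value=0.0,
                eviction=EvictionPolicy.DROP_LOWEST)
tiny.publish(Symbol(data="a", value=0.2))
tiny.publish(Symbol(data="b", value=0.8))
tiny.publish(Symbol(data="c", value=0.5))  # full: "a" (lowest value) should be evicted
```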
## Process

Subscribes to sequences, matches patterns, and publishes to other sequences.

```python
from typing import Optional

from attention import Match, Process, Symbol  # Match/Process assumed importable here


class MyProcess(Process):
    def match(self, symbol: Symbol) -> Optional[Match]:
        if "urgent" in str(symbol.data):
            return Match(
                pattern_id="urgent",
                symbol=symbol,
                confidence=1.0,
                response=symbol.boost(0.5),
            )
        return None
```

Built-in processes:
- `LookupProcess` - hash-table matching
- `RepetitionProcess` - detects repeated symbols
- `ConvergenceProcess` - detects multi-source agreement (sketched below)
- `LoopDetector` - breaks repetitive loops
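For illustration, a `ConvergenceProcess` could be wired like the `RepetitionProcess` in the quick start. The keyword parameters below are hypothetical guesses, not confirmed API:

```python
from attention import ConvergenceProcess

# Hypothetical wiring; keyword names are assumptions modeled on RepetitionProcess.
converger = ConvergenceProcess(
    "agreement_detector",
    inputs=[working],
    outputs=[conscious],
    min_sources=2,     # assumed: how many distinct sources must agree
    boost_factor=1.3,  # assumed: value multiplier applied on agreement
)
converger.start()
```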
## Salience

Functions that compute attention-worthiness.

```python
from attention import (
    CompositeSalience,
    recency_salience,
    keyword_salience,
)

salience = CompositeSalience([
    recency_salience(half_life_seconds=30),
    keyword_salience({"urgent": 0.3, "error": 0.4}),
], aggregation="max")

score = salience(symbol)
```
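Because a salience function is just a callable from `Symbol` to a score (as `score = salience(symbol)` shows), custom heuristics drop in directly; a sketch:

```python
def length_salience(symbol):
    # Illustrative heuristic: longer payloads score higher, capped at 1.0.
    return min(len(str(symbol.data)) / 100.0, 1.0)

salience = CompositeSalience([
    length_salience,
    recency_salience(half_life_seconds=30),
], aggregation="max")
```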
## Integrations

### Full-duplex streams

Feed multimodal stream symbols into attention:

```python
from full_duplex import GeminiStream, Symbol as DuplexSymbol
from attention import Sequence, Symbol

input_seq = Sequence("perception", capacity=50)

# stream: an already-connected GeminiStream instance (setup omitted here)
async for item in stream.receive():
    if isinstance(item, DuplexSymbol):
        input_seq.publish(Symbol(
            data=item.data,
            value=0.5,  # or compute salience
            metadata={"modality": item.modality.value},
        ))
```
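Instead of the fixed `value=0.5`, the score can come from a salience function such as the `CompositeSalience` built earlier; a sketch of the publish step:

```python
# Inside the async loop above: score the payload, then publish with that value.
probe = Symbol(data=item.data, value=0.0)  # throwaway symbol used only for scoring
input_seq.publish(Symbol(
    data=item.data,
    value=salience(probe),
    metadata={"modality": item.modality.value},
))
```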
### Novelty

Use novelty scores as salience:

```python
from attention import NoveltyAdapter

# Assuming the novelty system provides this function
def compute_novelty(data) -> float:
    # Returns a 0-1 novelty score
    ...

salience_fn = NoveltyAdapter(compute_novelty, scale=1.0)
```
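The adapter can then be called like any other salience function (this assumes it reads `symbol.data` internally, which is a guess from the `compute_novelty(data)` signature):

```python
# Assumed callable interface, matching `score = salience(symbol)` above.
score = salience_fn(Symbol(data="unexpected input", value=0.0))
```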
### Tendency allocations

Use tendency allocations as salience:

```python
from attention import AllocationAdapter

adapter = AllocationAdapter(
    get_allocations=lambda: world_model.agents.allocations(),
    classify_fn=lambda data: classify_tendency(data),
)
```
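Since both adapters are presumably plain salience callables, they can be blended with `CompositeSalience`; a sketch under that assumption:

```python
# Blend allocation-driven and novelty-driven salience into one function.
salience = CompositeSalience([
    adapter,       # tendency allocations (this section)
    salience_fn,   # novelty (previous section)
], aggregation="max")
```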
## Architecture

```
                  ┌─────────────┐
Input ──────────► │  Sequence   │ ◄──────── Salience
                  │  (bounded)  │           Function
                  └──────┬──────┘
                         │
            ┌────────────┼────────────┐
            ▼            ▼            ▼
        ┌────────┐   ┌────────┐   ┌────────┐
        │Process │   │Process │   │Process │   (pattern matching)
        └───┬────┘   └───┬────┘   └───┬────┘
            │            │            │
            └────────────┼────────────┘
                         ▼
                  ┌─────────────┐
                  │  Sequence   │
                  │  (output)   │
                  └─────────────┘
```
Multiple processes can subscribe to the same sequence. When patterns match, they publish to output sequences. Convergent signals (multiple processes publishing similar symbols) indicate high salience.
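Wiring that fan-out/fan-in takes nothing special: give several processes the same input and output sequences. A sketch reusing earlier pieces, and assuming custom processes accept the same `(name, inputs, outputs)` constructor as the built-ins:

```python
shared_out = Sequence("integration", capacity=10, min_value=0.4)

# Two detectors watch the same input; agreements in shared_out read as convergence.
detectors = [
    RepetitionProcess("rep", inputs=[working], outputs=[shared_out], min_repetitions=2),
    MyProcess("urgent", inputs=[working], outputs=[shared_out]),
]
for proc in detectors:
    proc.start()
```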
## Loop Detection

The `LoopDetector` process watches for repetitive patterns:

```python
loop_detector = LoopDetector(
    "loop_break",
    inputs=[conscious],
    outputs=[interrupt_seq],
    pattern_length=3,
    max_repeats=2,
)
```

When a loop is detected, it emits a break signal that downstream processes can use to redirect attention.
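Downstream redirection can be as simple as subscribing to the interrupt sequence; a sketch (the injected redirect symbol is illustrative, not a library convention):

```python
def on_break(signal):
    # Illustrative reaction: push a high-value redirect into the conscious sequence.
    conscious.publish(Symbol(data="change topic", value=0.9,
                             metadata={"cause": "loop_break"}))

interrupt_seq.subscribe("redirect", on_break)
```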
"At the most basic level, intelligence is the ability to act in such a way that it increases your options for future action."
Attention serves intelligence by:
- Filtering noise (limited capacity forces selection)
- Reinforcing signal (repetition and convergence boost value)
- Breaking loops (detecting stuck patterns)
- Enabling association (processes link related concepts)
The system doesn't define what to attend to - that comes from salience functions you provide. It defines how attention flows through a cascade of prioritizing filters.
## License

MIT