Sync

Nara uses a unified event store and gossip protocol to share information across the network. This document explains how naras discover what’s happening, form opinions, and stay in sync.

Imagine a nara coming online after being offline for a while. It’s like someone returning from vacation:

  1. Says hello publicly (plaza broadcast) - “hey everyone, I’m back!”
  2. Asks friends privately (mesh DMs) - “what did I miss? give me the info dump”
  3. Forms own opinion from gathered data - personality shapes how they interpret events

The nara network is a collective hazy memory. No single nara has the complete picture. Events spread organically through gossip. Each nara’s understanding of the world (clout scores, network topology, who’s reliable) emerges from the events they’ve collected and how their personality interprets them.

┌─────────────────────────────────────────────────────────────────┐
│ SyncLedger (Event Store) │
│ │
│ Events: [observation, checkpoint, ping, social, ...] │
│ │
└─────────────────────────────────────────────────────────────────┘
┌─────────┼─────────┬──────────┐
▼ ▼ ▼ ▼
┌──────────┐ ┌──────┐ ┌─────────┐ ┌─────────┐
│ Clout │ │ RTT │ │ Restart │ │ Uptime │
│Projection│ │Matrix│ │ Count │ │ Total │
└──────────┘ └──────┘ └─────────┘ └─────────┘

The SyncLedger is the unified event store. It holds all syncable events regardless of type. Projections are derived views computed from events - like clout scores (who’s respected), the RTT matrix (network latency map), or restart counts (checkpoint + unique StartTimes).
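
As a rough sketch of how a projection can be derived from the ledger, the snippet below folds ping events into an RTT matrix. The types and field names are simplified stand-ins for illustration, not nara's actual structs:

package projections

// SyncEvent is a simplified stand-in for nara's event type; the field names
// here are assumptions for illustration only.
type SyncEvent struct {
    Service  string  // "ping", "observation", "social", ...
    Observer string
    Target   string
    RTT      float64 // milliseconds; only meaningful for ping events
}

// DeriveRTTMatrix folds ping events into a latency map keyed by observer and
// target. Later events overwrite earlier ones, so the projection reflects the
// freshest measurement each nara has gossiped.
func DeriveRTTMatrix(events []SyncEvent) map[string]map[string]float64 {
    matrix := make(map[string]map[string]float64)
    for _, e := range events {
        if e.Service != "ping" {
            continue // each projection consumes only the event types it cares about
        }
        if matrix[e.Observer] == nil {
            matrix[e.Observer] = make(map[string]float64)
        }
        matrix[e.Observer][e.Target] = e.RTT
    }
    return matrix
}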

Observation Events (service: "observation")

Network state consensus events - replace newspaper broadcasts for tracking restarts and online status:

  • restart: Detected a nara restarted (StartTime, restart count)
  • first-seen: First time observing a nara (seeds StartTime)
  • status-change: Online/Offline/Missing transition

These events use importance levels (1-3) and have anti-abuse protection (per-pair compaction, rate limiting, deduplication). Critical for distributed consensus on network state.

Social events (service: "social") - interactions between naras:

  • tease: One nara teasing another (for high restarts, comebacks, etc.)
  • observed: One nara reporting a tease they saw elsewhere
  • observation: Legacy system observations (online/offline, journey events)
  • service: Helpful actions (like stash-stored) that award clout to the actor
  • gossip: Hearsay about what happened

Ping events (service: "ping") - network latency measurements:

  • observer: Who took the measurement
  • target: Who was measured
  • rtt: Round-trip time in milliseconds

Ping observations are community-driven. When nara A pings nara B, that measurement spreads through the network. Other naras can use this data to build their own picture of network topology.

Checkpoint events - multi-party attested historical snapshots:

  • subject: Who the checkpoint is about
  • subject_id: Nara ID (for indexing)
  • as_of_time: When the snapshot was taken
  • observation: Embedded NaraObservation containing:
    • restarts: Historical restart count at checkpoint time
    • total_uptime: Verified online seconds at checkpoint time
    • start_time: When network first saw this nara (FirstSeen)
  • round: Consensus round (1 or 2)
  • voter_ids: Nara IDs who voted for these values
  • signatures: Ed25519 signatures from voters

Checkpoints anchor historical data that predates event-based tracking. They require multiple voters for trust. Restart count is derived as: checkpoint.Observation.Restarts + count(unique StartTimes after checkpoint). Checkpoints are pruned when their subject is a ghost nara (offline 7+ days for established naras, 24h for newcomers), but checkpoints where a ghost nara was only a voter/emitter are kept.
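
A minimal sketch of that restart derivation, assuming simplified event and checkpoint shapes (the real nara types differ):

package checkpoints

import "time"

// ObservationEvent is a simplified stand-in; field names are assumptions.
type ObservationEvent struct {
    Kind      string    // "restart", "first-seen", "status-change"
    StartTime time.Time // the StartTime reported by the observation
}

// DeriveRestartCount applies the rule above: the checkpointed restart count
// plus the number of unique StartTimes observed after the checkpoint.
func DeriveRestartCount(checkpointRestarts int64, checkpointTime time.Time, events []ObservationEvent) int64 {
    unique := make(map[int64]struct{})
    for _, e := range events {
        if e.Kind != "restart" || !e.StartTime.After(checkpointTime) {
            continue
        }
        unique[e.StartTime.Unix()] = struct{}{}
    }
    return checkpointRestarts + int64(len(unique))
}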

Events flow through different channels depending on their nature:

Plaza (MQTT broadcast) - the public square; everyone sees these messages.

  • nara/plaza/hey_there - announcing presence
  • nara/plaza/chau - graceful shutdown
  • nara/plaza/journey_complete - journey completions

Mesh sync (HTTP) - private point-to-point communication for catching up.

  • POST /sync - request events from a neighbor
  • Used for boot recovery (catching up on missed events)
  • More efficient than broadcast for bulk data

Newspapers (MQTT) - status broadcasts of current state, not history.

  • nara/newspaper/{name} - periodic status updates
  • Contains current flair, buzz, coordinates, etc.

Zines (mesh gossip) - hand-to-hand event distribution, the underground press.

A zine is a small batch of recent events (~5 minutes worth) passed directly between naras via mesh HTTP. Like underground zines passed hand-to-hand at punk shows, these spread organically through the network without central coordination.

Every 30-300 seconds (personality-based):
1. Create zine from recent events
2. Pick 3-5 random mesh neighbors
3. POST /gossip/zine with your zine
4. Receive their zine in response (bidirectional!)
5. Merge new events into SyncLedger
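
A sketch of one gossip round, with the ledger and transport abstracted behind hypothetical interfaces (EventsSince, Merge, and the send callback are illustrative, not nara's actual API):

package gossip

import (
    "math/rand"
    "time"
)

// Event and Zine are simplified stand-ins for nara's real types.
type Event struct{ ID string }

type Zine struct {
    From   string
    Events []Event
}

// Ledger is the minimal surface this sketch needs from the SyncLedger.
type Ledger interface {
    EventsSince(t time.Time) []Event
    Merge(events []Event)
}

// gossipRound runs one iteration of the loop above: build a zine from the last
// ~5 minutes of events, swap zines with 3-5 random neighbors, and merge
// whatever comes back. send stands in for the POST /gossip/zine mesh call.
func gossipRound(self string, ledger Ledger, neighbors []string,
    send func(peer string, z Zine) (Zine, error)) {

    zine := Zine{From: self, Events: ledger.EventsSince(time.Now().Add(-5 * time.Minute))}

    rand.Shuffle(len(neighbors), func(i, j int) { neighbors[i], neighbors[j] = neighbors[j], neighbors[i] })
    fanout := 3 + rand.Intn(3) // 3-5 peers per round
    if fanout > len(neighbors) {
        fanout = len(neighbors)
    }

    for _, peer := range neighbors[:fanout] {
        reply, err := send(peer, zine)
        if err != nil {
            continue // unreachable peers are skipped; redundant paths cover the gap
        }
        ledger.Merge(reply.Events) // the exchange is bidirectional
    }
}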

Why Zines?

  • O(log N) bandwidth: Epidemic spread instead of O(N²) broadcast
  • Decentralized: No MQTT broker bottleneck
  • Redundant paths: Multiple naras carrying same news
  • Organic propagation: Events spread like rumors, not announcements

Stash - a separate distribution system: encrypted state backup with a commitment model.

Stash is encrypted arbitrary JSON data that naras store for each other based on mutual promises. Stash distribution happens via HTTP mesh with its own trigger system:

Distribution triggers:

1. Immediate: When stash data is updated (via /api/stash/update)
2. Periodic: Every 5 minutes (maintenance + health checks)
3. Reactive: When a confidant goes offline (immediate replacement)

When distributing (finding confidants - see the selection sketch below):

1. Pick best nara first (high memory + uptime)
2. Pick remaining confidants randomly (avoid hotspots)
3. POST /stash/store with encrypted stash to each
4. Peer verifies signature & timestamp
5. Peer checks capacity:
- If space: Accepts (creates commitment) ✓
- If full: Rejects (at_capacity) ✗
6. If accepted: Peer tracks commitment, owner adds to confirmed confidants
7. Keep trying until target count (3) reached or max attempts

On boot (recovery):

1. Owner broadcasts hey-there event (MQTT)
2. Confidants detect and wait 2s
3. POST /stash/push owner's stash back via HTTP
4. Owner receives, decrypts, uses newest
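
A sketch of the hybrid confidant selection (one best peer plus random fills, target of 3), using illustrative types; the real scoring of "best" may differ:

package stash

import "math/rand"

// Peer is a simplified view of a candidate confidant; the fields are illustrative.
type Peer struct {
    Name       string
    Online     bool
    UptimeSecs int64
    MemoryMode int // storage slots, e.g. 5/20/50
}

// pickConfidants sketches the hybrid selection: one "best" peer (memory, then
// uptime) for reliability, then random picks to avoid hotspots, up to the
// target count of confidants (3 in the flow above).
func pickConfidants(candidates []Peer, target int) []Peer {
    var online []Peer
    for _, p := range candidates {
        if p.Online {
            online = append(online, p)
        }
    }
    if len(online) == 0 || target <= 0 {
        return nil
    }

    // 1. Best peer first.
    best := 0
    for i, p := range online {
        if p.MemoryMode > online[best].MemoryMode ||
            (p.MemoryMode == online[best].MemoryMode && p.UptimeSecs > online[best].UptimeSecs) {
            best = i
        }
    }
    picked := []Peer{online[best]}

    // 2. Fill the remaining slots randomly.
    var rest []Peer
    for i, p := range online {
        if i != best {
            rest = append(rest, p)
        }
    }
    rand.Shuffle(len(rest), func(i, j int) { rest[i], rest[j] = rest[j], rest[i] })
    for _, p := range rest {
        if len(picked) >= target {
            break
        }
        picked = append(picked, p)
    }
    return picked
}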

Why Separate Stash System?

  • Clear semantics: Store, retrieve, push, and delete are distinct operations
  • Timestamp security: All requests signed with timestamp (replay protection)
  • No disk writes: Pure memory storage (ephemeral by design)
  • Commitment-based: Accept/reject model (not LRU cache)
  • Mutual promises: Both sides know who’s storing what
  • Health monitoring: Owners detect offline confidants, find replacements
  • Ghost pruning: Evict stashes for naras offline 7+ days
  • Memory-aware: Storage limits based on memory mode (5/20/50)
  • Hybrid selection: 1 best confidant (reliability) + 2 random (distribution)
  • Immediate triggers: Reacts to stash updates and offline confidants within seconds
  • Boot recovery: Hey-there events trigger HTTP stash push back

Integration with mesh:

  • Stash uses same HTTP mesh infrastructure as zines
  • Same Ed25519 authentication mechanism
  • Independent distribution system (not tied to zine gossip)
  • Complements event sync with state backup
  • Unlike events (spread to all), stash is targeted (3 confidants)

Transport Modes:

Naras can operate in different modes, like preferring different social networks:

  • MQTT Mode (Traditional): Newspapers broadcast to all via the MQTT plaza
    • Higher bandwidth cost, but guaranteed delivery
    • Good for small networks (<100 naras)
  • Gossip Mode (P2P-only): Zines spread hand-to-hand via mesh
    • Logarithmic bandwidth scaling
    • Requires mesh connectivity
    • Best for large networks (>1000 naras)
  • Hybrid Mode (Default): Both MQTT and Gossip simultaneously
    • MQTT for discovery + time-critical announcements
    • Gossip for bulk event distribution
    • Most resilient option

Applications stay transport-agnostic:

// Publishing - same code regardless of transport
network.local.SyncLedger.AddEvent(event)
// Subscribing - events arrive via MQTT or gossip, app doesn't care
events := network.local.SyncLedger.GetEvents()

The transport layer automatically picks up events from SyncLedger and spreads them. Apps never call transport-specific functions like publishToMQTT() or gossipEvent().

Mixed networks work seamlessly:

  • MQTT-only naras can coexist with gossip-only naras
  • Hybrid naras bridge the two worlds
  • Events deduplicated automatically (same event via multiple paths)

It’s like some people use Twitter (MQTT - broadcast to all), some use Mastodon (gossip - federated P2P), and some use both - but they all see the same posts (SyncEvents).
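
The automatic deduplication could be as simple as an ID index in front of the ledger. This is a sketch with an assumed stable event ID; nara may instead derive identity from a content hash:

package ledger

// Event is a simplified stand-in; nara may derive an event's identity from a
// hash of its contents rather than an explicit ID field.
type Event struct {
    ID      string
    Service string
}

// SyncLedger here keeps an index of seen event IDs so the same event arriving
// via MQTT and via a gossip zine is stored only once.
type SyncLedger struct {
    seen   map[string]struct{}
    events []Event
}

func NewSyncLedger() *SyncLedger {
    return &SyncLedger{seen: make(map[string]struct{})}
}

// AddEvent reports whether the event was new; duplicates that arrived over
// another transport path are dropped.
func (l *SyncLedger) AddEvent(e Event) bool {
    if _, dup := l.seen[e.ID]; dup {
        return false
    }
    l.seen[e.ID] = struct{}{}
    l.events = append(l.events, e)
    return true
}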

Not all data flows through all channels. Understanding what goes where is critical for network resilience.

These fields are broadcast via nara/newspaper/{name} and are NOT in the event store or zines. If MQTT stopped, this data would be lost:

| Field | Type | Description |
| --- | --- | --- |
| Trend | string | Current trend name (e.g., “robin-style”) |
| TrendEmoji | string | Trend emoji (e.g., “🔥”) |
| HostStats.Uptime | uint64 | System uptime in seconds |
| HostStats.LoadAvg | float64 | System load average |
| HostStats.MemAllocMB | uint64 | Current heap allocation in MB |
| HostStats.MemSysMB | uint64 | Total memory from OS in MB |
| HostStats.MemHeapMB | uint64 | Heap memory (in use + free) in MB |
| HostStats.MemStackMB | uint64 | Stack memory in MB |
| HostStats.NumGoroutines | int | Active goroutines |
| HostStats.ProcCPUPercent | float64 | CPU usage of this process (%) |
| Flair | string | Derived status indicator |
| LicensePlate | string | Visual identifier |
| Chattiness | int64 | Posting frequency preference |
| Buzz | int | Current activity level |
| Personality | struct | Agreeableness, Sociability, Chill (0-100) |
| Version | string | Software version |
| PublicUrl | string | Public HTTP endpoint |
| Coordinates | struct | Vivaldi network coordinates |
| TransportMode | string | “mqtt”, “gossip”, or “hybrid” |
| EventStoreTotal | int | Total events in the local event store |
| EventStoreByService | map | Event counts per service (social, ping, observation, etc.) |
| EventStoreCritical | int | Count of critical events |
| StashStored | int | Number of stashes stored for others |
| StashBytes | int64 | Total bytes of stash data stored |
| StashConfidants | int | Number of confidants storing my stash |

Newspapers are current state snapshots, not history. They answer “what is this nara like right now?” rather than “what happened?”

Stash metrics help monitor distributed storage health - who’s storing what, capacity usage, and confidant network status.

These survive in the distributed event log and spread via zine gossip:

| Service | Key Data | Purpose |
| --- | --- | --- |
| chau | From, PublicKey | Identity/discovery |
| observation | StartTime, RestartNum, LastRestart, OnlineState | Network state consensus |
| ping | Observer, Target, RTT | Latency measurements |
| social | Actor, Target, Reason | Teases, helpful services, and interactions |
| seen | Observer, Subject, Via | Lightweight presence detection |

Events are state transitions - they record what happened, not current state.

  1. Trend tracking requires MQTT: No events for trend join/leave. To track trends historically, we’d need to add trend events.

  2. Host metrics are ephemeral: Uptime and load only exist in the moment. No historical record.

  3. Personality is broadcast, not recorded: If you miss a newspaper, you don’t know a nara’s personality until the next broadcast.

  4. Coordinates require newspapers: Vivaldi coordinates only spread via status broadcasts.

In gossip-only mode (no MQTT), naras discover each other by scanning the mesh network:

Every 5 minutes:
1. Scan mesh subnet (100.64.0.1-254)
2. Try GET /ping on each IP
3. If successful, decode {"from": "nara-name", "t": timestamp}
4. Add discovered nara to neighborhood with mesh IP
5. Mark as ONLINE in observations
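
A sketch of that discovery pass, assuming the mesh subnet above and an illustrative port; found stands in for whatever adds the peer to the neighborhood and marks it ONLINE:

package discovery

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// pingReply mirrors the /ping response shown above: {"from": "...", "t": ...}.
type pingReply struct {
    From string `json:"from"`
    T    int64  `json:"t"`
}

// scanMesh probes every address in the mesh subnet and reports each nara that
// answers /ping.
func scanMesh(port int, found func(name, ip string)) {
    client := &http.Client{Timeout: 2 * time.Second}
    for host := 1; host <= 254; host++ {
        ip := fmt.Sprintf("100.64.0.%d", host)
        resp, err := client.Get(fmt.Sprintf("http://%s:%d/ping", ip, port))
        if err != nil {
            continue // nothing listening at this address
        }
        var reply pingReply
        if json.NewDecoder(resp.Body).Decode(&reply) == nil && reply.From != "" {
            found(reply.From, ip)
        }
        resp.Body.Close()
    }
}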

Why IP scanning?

  • No dependency on MQTT for discovery
  • Works in pure P2P networks
  • Automatically finds new naras joining the mesh
  • Minimal overhead (1 scan per 5 minutes)

Discovery flow:

  1. Nara A boots in gossip-only mode
  2. After 35 seconds, runs initial mesh scan
  3. Discovers naras B, C, D via /ping responses
  4. Adds them to neighborhood with mesh IPs
  5. Starts gossiping zines with discovered neighbors
  6. Periodic re-scans every 5 minutes to find new peers

Note: In hybrid mode, MQTT handles discovery and gossip is used only for event distribution. Discovery scans only run in pure gossip mode.

Nara uses three complementary sync mechanisms that form a layered system. Each serves a different purpose and operates at different frequencies:

| Mechanism | Frequency | Time Window | Purpose |
| --- | --- | --- | --- |
| Boot Recovery | Once at startup | All available (up to 10k events) | Catch up after being offline |
| Zine Gossip | Every 30-300s | Last 5 minutes | Rapid organic event propagation |
| Background Sync | Every ~30 min | Last 24 hours | Fill gaps from personality filtering |

Each mechanism handles a different failure mode:

  1. Boot Recovery solves the cold-start problem. A nara waking up after hours or days needs bulk data fast—10,000 events from multiple neighbors, interleaved to avoid duplicates.

  2. Zine Gossip provides continuous, low-latency propagation. Events spread epidemically (O(log N) hops to reach all naras) without central coordination. But zines only carry the last 5 minutes of events, so they can’t recover from longer outages.

  3. Background Sync acts as a safety net. Personality filtering means some naras drop events they find uninteresting. A high-chill nara might ignore a tease, but that tease could be important context for clout calculations. Background sync queries specifically for observation events (restarts, first-seen, status-change) with importance ≥2, ensuring critical events survive personality filtering.

Boot:
└─→ Boot Recovery (bulk sync from neighbors)
└─→ Zine Gossip starts (every 30-300s based on chattiness)
└─→ Background Sync kicks in (every ~30 min)

At 5000 nodes:

| Mechanism | Network Load | Notes |
| --- | --- | --- |
| Boot Recovery | Burst at startup | ~10k events per booting nara |
| Zine Gossip | ~83 KB/s total | O(log N) epidemic spread |
| Background Sync | ~250 req/min | ~1 request per nara every 6 min |

Compare to the old newspaper broadcast system: 68MB/s - 1GB/s at scale.

| Scenario | Which Mechanism Helps |
| --- | --- |
| Nara offline for hours | Boot Recovery |
| Network partition heals | Background Sync |
| Missed event due to personality filter | Background Sync |
| Real-time event propagation | Zine Gossip |
| New nara joins network | Boot Recovery + Mesh Discovery |

The sync API supports three retrieval modes designed for different use cases. Each mode reflects a different relationship with the network’s collective memory.

Mode: sample (Boot Recovery - Organic Memory)

Philosophy: Each nara returns a decay-weighted sample of their memory, not everything they know. This creates a hazy collective memory where recent events are clearer and old events naturally fade.

{
  "from": "requester",
  "mode": "sample",
  "sample_size": 5000
}

How sampling works:

  • Recent events more likely to be included (clearer memory)
  • Old events less likely but not zero (fading memory)
  • Critical events (checkpoints, hey_there) always included
  • Events the nara emitted or observed firsthand have higher weight

Decay function: Exponential decay with ~30-day half-life

  • Events < 1 day old: ~100% inclusion probability
  • Events ~1 week old: ~80% inclusion probability
  • Events ~1 month old: ~50% inclusion probability
  • Events ~6 months old: ~10% inclusion probability

When to use: Boot recovery, where incomplete but representative data is acceptable.
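
A sketch of the decay weighting, assuming a plain exponential with a ~30-day half-life; the firsthand bonus and exact constants are illustrative tuning, not nara's actual values:

package sample

import (
    "math"
    "math/rand"
    "time"
)

// inclusionWeight follows the decay described above: exponential with a
// ~30-day half-life, critical events always kept, firsthand events weighted up.
func inclusionWeight(age time.Duration, critical, firsthand bool) float64 {
    if critical {
        return 1.0 // checkpoints, hey_there: always included
    }
    halfLife := 30 * 24 * time.Hour
    w := math.Exp2(-float64(age) / float64(halfLife))
    if firsthand {
        w = math.Min(1.0, w*1.5) // events we emitted or saw directly fade slower
    }
    return w
}

// include rolls against the weight, producing the hazy, probabilistic sample.
func include(age time.Duration, critical, firsthand bool) bool {
    return rand.Float64() < inclusionWeight(age, critical, firsthand)
}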

Mode: page

Philosophy: Deterministic, cursor-based pagination for complete retrieval. Returns events oldest first so pagination works correctly.

{
  "from": "requester",
  "mode": "page",
  "page_size": 2000,
  "cursor": "1704067200000000000"
}

Response includes:

{
  "events": [...],
  "next_cursor": "1704167200000000000",
  "from": "responder",
  "sig": "..."
}

Pagination flow:

  1. First request: cursor omitted or empty
  2. Process events, save next_cursor
  3. Next request: use saved cursor
  4. Repeat until next_cursor is empty (no more events)

When to use: Backup (need ALL events), checkpoint sync (need complete checkpoint history).
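
A minimal client-side paging loop; fetchPage is a hypothetical helper that wraps POST /sync in page mode and returns the events plus next_cursor:

package syncclient

// Event is a stand-in for nara's event type.
type Event struct{ ID string }

// fetchAll walks the cursor until the responder reports no more events.
func fetchAll(neighbor string, pageSize int,
    fetchPage func(neighbor, cursor string, pageSize int) ([]Event, string, error)) ([]Event, error) {

    var all []Event
    cursor := "" // empty cursor means "start from the oldest event"
    for {
        events, next, err := fetchPage(neighbor, cursor, pageSize)
        if err != nil {
            return all, err
        }
        all = append(all, events...)
        if next == "" {
            return all, nil // empty next_cursor: pagination complete
        }
        cursor = next
    }
}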

Mode: recent

Philosophy: Simple retrieval of the most recent N events. No pagination needed.

{
  "from": "requester",
  "mode": "recent",
  "limit": 100
}

When to use: Web UI event browsing, recent activity feeds.

| Use Case | Mode | Who to Ask | Completeness |
| --- | --- | --- | --- |
| Boot Recovery | sample | All available neighbors | Intentionally lossy (hazy memory) |
| Checkpoint Sync | page | 5 neighbors (redundancy) | Complete for checkpoints |
| Backup | page | ALL naras | Complete union |
| Web UI | recent | Self | Recent subset |

For backward compatibility, requests without a mode field continue to work using the legacy parameters (slice_index, slice_total, max_events). These are deprecated and will be removed in a future version.


When a nara boots, it reconstructs its memory by asking neighbors: “What do you remember?”

The number of API calls is determined by the nara’s memory capacity:

| Memory Mode | Capacity | Page Size | API Calls |
| --- | --- | --- | --- |
| Short | ~5k | 1k | 5 calls |
| Normal | ~50k | 5k | 10 calls |
| Hog | ~80k | 5k | 16 calls |

Algorithm:

1. Announce presence on plaza (hey_there)
2. Discover available mesh neighbors
3. Calculate: calls_needed = my_capacity / page_size
4. Distribute calls across ALL available neighbors (round-robin)
5. Each call: mode="sample", sample_size=page_size
6. If a call fails, retry with a different neighbor
7. Continue until calls_needed successful fetches
8. Merge all events into SyncLedger
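
Steps 3-4 can be sketched as a small planning function; the name planCalls is illustrative:

package boot

// planCalls splits the boot-recovery budget into sample-sized calls and
// assigns them round-robin across reachable neighbors. Capacity and page size
// come from the memory-mode table above.
func planCalls(capacity, pageSize int, neighbors []string) map[string]int {
    if pageSize <= 0 || len(neighbors) == 0 {
        return nil
    }
    callsNeeded := (capacity + pageSize - 1) / pageSize // e.g. 80k / 5k = 16 for hog mode
    plan := make(map[string]int)
    for i := 0; i < callsNeeded; i++ {
        plan[neighbors[i%len(neighbors)]]++ // round-robin keeps load even
    }
    return plan
}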

Examples:

  • Hog mode (16 calls), 10 neighbors → each neighbor gets 1-2 calls
  • Hog mode (16 calls), 3 neighbors → each neighbor gets ~5 calls
  • Short mode (5 calls), 20 neighbors → 5 different neighbors each get 1 call

Each neighbor returns their perspective - a decay-weighted sample that includes:

  • Events they emitted (strong memory)
  • Events they observed firsthand (strong memory)
  • Recent events (clear memory)
  • Old events (fading, probabilistic inclusion)
  • Critical events like checkpoints (always included)

The collective memory emerges:

  • Events appearing in multiple samples = strong consensus (many naras remember)
  • Events appearing in one sample = weak memory (only one neighbor remembers)
  • Events in no samples = forgotten (and that’s OK)

This is intentional. The network doesn’t maintain perfect state - it maintains organic, living memory that naturally fades and reconstructs based on who you talk to.

After Boot: Background Sync (Organic Memory Strengthening)

Once a nara is online, it watches events in real-time via MQTT plaza. However, with personality-based filtering and hazy memory, important events can be missed. Background sync helps the collective memory stay strong.

Schedule:

  • Every ~30 minutes (±5min random jitter)
  • Initial random delay (0-5 minutes) to spread startup
  • Query 1-2 random online neighbors per sync

Focus on Important Events:

{
  "from": "requester",
  "services": ["observation"],     // Observation events only
  "since_time": "<24 hours ago>",
  "max_events": 100,
  "min_importance": 2              // Only Normal and Critical
}

This lightweight sync helps catch up on critical observation events (restarts, first-seen) that may have been dropped by other naras’ personality filters.

Network Load (5000 nodes):

  • 250 sync requests/minute network-wide
  • ~1 incoming request per nara every 6 minutes
  • ~20KB payload per request
  • Total: 83 KB/s (vs 68MB/s - 1GB/s with old newspaper system)

Why it’s needed:

  1. Event persistence: Critical events survive even if some naras drop them
  2. Gradual propagation: Events spread organically through repeated syncs
  3. Personality compensation: High-chill naras catch up on events they filtered
  4. Network healing: Partitioned nodes eventually converge

Note: Interleaved slicing is deprecated in favor of the mode-based API. New code should use mode: "sample" for boot recovery. The slicing parameters (slice_index, slice_total) remain for backward compatibility but will be removed in a future version.

Legacy slicing divided events across neighbors to avoid duplicates:

Neighbor 0 (slice 0/3): events 0, 3, 6, 9, 12...
Neighbor 1 (slice 1/3): events 1, 4, 7, 10, 13...
Neighbor 2 (slice 2/3): events 2, 5, 8, 11, 14...

The new sample mode replaces this with probabilistic sampling that naturally handles deduplication through the merge process.

Sync responses are cryptographically signed to ensure authenticity:

type SyncResponse struct {
    From      string      `json:"from"`   // Who sent this
    Events    []SyncEvent `json:"events"` // The events
    Timestamp int64       `json:"ts"`     // When generated
    Signature string      `json:"sig"`    // Ed25519 signature
}

Signing: The responder hashes (from + timestamp + events_json) and signs with their private key.

Verification: The receiver looks up the sender’s public key (from Status.PublicKey) and verifies the signature before merging events.

This prevents:

  • Impersonation (can’t fake being another nara)
  • Tampering (can’t modify events in transit)
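
A sketch of the signing and verification steps above using Go's crypto/ed25519. The exact canonicalisation of (from + timestamp + events_json) is an assumption here; both sides must agree on it:

package syncauth

import (
    "crypto/ed25519"
    "crypto/sha256"
    "fmt"
)

// digest hashes the fields covered by the signature.
func digest(from string, timestamp int64, eventsJSON []byte) []byte {
    h := sha256.New()
    fmt.Fprintf(h, "%s|%d|", from, timestamp)
    h.Write(eventsJSON)
    return h.Sum(nil)
}

// SignResponse produces the signature bytes for a sync response.
func SignResponse(priv ed25519.PrivateKey, from string, ts int64, eventsJSON []byte) []byte {
    return ed25519.Sign(priv, digest(from, ts, eventsJSON))
}

// VerifyResponse checks a response against the sender's published public key
// before any events are merged into the ledger.
func VerifyResponse(pub ed25519.PublicKey, from string, ts int64, eventsJSON, sig []byte) bool {
    return ed25519.Verify(pub, digest(from, ts, eventsJSON), sig)
}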

To prevent the event store from being saturated with stale ping data while keeping useful history, ping events are capped at 5 per observer→target pair (configurable via MaxPingsPerTarget).

When adding a new ping from A→B:

  • If A→B has fewer than 5 entries, add it
  • If A→B already has 5 entries, evict the oldest and add the new one
  • This keeps recent history for trend detection

This keeps the ping data diverse across the network:

  • 5 naras = max 100 ping entries (5 per pair × 20 pairs)
  • 100 naras = max ~50,000 ping entries
  • 5000 naras = bounded by ledger max (50k events) and time-based pruning
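
A sketch of the per-pair cap and eviction rule, with simplified types (maxPingsPerPair stands in for MaxPingsPerTarget):

package pings

// PingEvent is a simplified stand-in for a ping observation.
type PingEvent struct {
    Observer string
    Target   string
    RTTms    float64
}

const maxPingsPerPair = 5 // stands in for MaxPingsPerTarget

// addPing keeps at most 5 entries per observer→target pair, evicting the
// oldest when a new measurement arrives.
func addPing(store map[[2]string][]PingEvent, e PingEvent) {
    key := [2]string{e.Observer, e.Target}
    entries := append(store[key], e)
    if len(entries) > maxPingsPerPair {
        entries = entries[len(entries)-maxPingsPerPair:] // drop the oldest
    }
    store[key] = entries
}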

When a nara restarts or receives ping observations from neighbors, it seeds its exponential moving average (AvgPingRTT) from historical ping data:

  1. On boot recovery: After syncing events from neighbors, calculate average RTT from recovered ping observations
  2. During background sync: When receiving ping events from neighbors, recalculate averages for targets with uninitialized AvgPingRTT
  3. Only if uninitialized: Seeding only happens when AvgPingRTT == 0 (never overwrites existing values)

This provides immediate RTT estimates without waiting for new pings, improving Vivaldi coordinate accuracy and proximity-based routing from the moment a nara comes online.
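
A sketch of the seeding rule, with simplified types; the only invariant is that an existing AvgPingRTT is never overwritten:

package rtt

// neighborState is a simplified stand-in; AvgPingRTT == 0 means uninitialized.
type neighborState struct {
    AvgPingRTT float64 // exponential moving average, in milliseconds
}

// seedAvgRTT fills AvgPingRTT from historical ping samples, but only when it
// is still zero; values learned from live pings are kept as-is.
func seedAvgRTT(neighbors map[string]*neighborState, historical map[string][]float64) {
    for target, samples := range historical {
        n, ok := neighbors[target]
        if !ok || n.AvgPingRTT != 0 || len(samples) == 0 {
            continue // unknown target, already initialized, or nothing to seed from
        }
        var sum float64
        for _, rtt := range samples {
            sum += rtt
        }
        n.AvgPingRTT = sum / float64(len(samples))
    }
}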

The observation event system includes four layers of protection against malicious or misconfigured naras:

Purpose: Prevent one hyperactive observer from saturating storage

  • Maximum 20 observation events per observer→subject pair
  • Oldest events dropped when limit exceeded
  • Example: If alice has 20 observations about bob, adding a 21st evicts the oldest

Purpose: Block burst flooding attacks

  • Maximum 10 events about same subject per 5-minute window
  • Blocks malicious nara claiming restart every second
  • Example: After 10 “bob restarted” events in 5 minutes, further events rejected
  • Window slides forward automatically

Purpose: Prevent redundant storage when multiple observers report same event

  • Hash restart events by (subject, restart_num, start_time)
  • Multiple observers reporting same restart = single stored event
  • Keeps earliest observer for attribution
  • Example: 10 naras report “lisa restarted (1137)” → stored once
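
A sketch of that deduplication layer; the key encoding is illustrative, any stable (subject, restart_num, start_time) key works:

package observations

import "fmt"

// restartKey collapses duplicate reports of the same restart to one key.
func restartKey(subject string, restartNum, startTimeUnix int64) string {
    return fmt.Sprintf("restart|%s|%d|%d", subject, restartNum, startTimeUnix)
}

// addRestartObservation stores the earliest observer for attribution and
// drops later duplicates; it reports whether the event was newly stored.
func addRestartObservation(seen map[string]string, subject, observer string, restartNum, startTimeUnix int64) bool {
    key := restartKey(subject, restartNum, startTimeUnix)
    if _, dup := seen[key]; dup {
        return false // e.g. 10 reports of "lisa restarted (1137)" collapse to one
    }
    seen[key] = observer
    return true
}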

Purpose: Ensure critical events survive longest

  • Global ledger pruning respects importance levels:
    1. Drop Casual (importance=1) first
    2. Drop Normal (importance=2) second
    3. Keep Critical (importance=3) longest
  • Restart and first-seen events marked Critical
  • Survives global MaxEvents pruning

At 5000 nodes with 50 abusive naras flooding events:

  • Layer 2 blocks flood after 10 events/5min per subject ✓
  • Layer 1 limits each attacker to 20 events per victim ✓
  • Layer 3 deduplicates coordinated attack ✓
  • Layer 4 preserves critical events under pressure ✓

Result: Network remains functional with 1% malicious nodes

The sync system is designed to scale:

  1. Bulk sync only at boot: After startup, only lightweight zine gossip and low-rate background sync run
  2. Embrace incompleteness: No one has all events, and that’s OK
  3. Recency over completeness: Recent events matter more
  4. Diversify sources: Query multiple peers for different perspectives
  5. Self-throttling: Ping budget doesn’t grow with network size

While the SyncLedger stores events neutrally, each nara interprets them subjectively based on personality:

Not all events are meaningful to every nara. When adding social events, personality determines what gets stored:

  • High Chill (>70): Ignores random jabs
  • Very High Chill (>85): Only keeps significant events (comebacks, high-restarts)
  • High Agreeableness (>80): Filters out “trend-abandon” drama
  • Low Sociability (<30): Less interested in others’ drama

Clout scores are subjective - the same events produce different clout for different observers:

clout := ledger.DeriveClout(observerSoul, observerPersonality)

The TeaseResonates() function uses the observer’s soul to deterministically decide if a tease was good or cringe. Same event, different reactions.

Events fade over time, but personality affects memory:

  • Low Chill: Holds grudges longer (up to 50% longer half-life)
  • High Chill: Lets things go faster
  • High Sociability: Remembers social events longer

Separate from subjective clout, the tease counter is an objective metric:

counts := ledger.GetTeaseCounts() // map[actor]int

This simply counts how many times each nara has teased others. No personality influence - pure numbers. Useful for leaderboards and identifying the most active teasers.

Nara A pings Nara B, measures 42ms RTT
A's SyncLedger: [ping: A→B, 42ms]
Nara C does mesh sync with A
C's SyncLedger: [ping: A→B, 42ms]
Nara D does mesh sync with C
D's SyncLedger: [ping: A→B, 42ms]
...eventually reaches most naras

The measurement spreads organically. Different naras may receive it at different times. That’s fine - eventual consistency is the goal.

POST /sync - request events from a neighbor using the mode-based API.

Request (mode-based - recommended):

{
  "from": "requester-name",
  "mode": "sample",                  // "sample", "page", or "recent"
  // For "sample" mode (boot recovery):
  "sample_size": 5000,
  // For "page" mode (backup, checkpoint sync):
  "page_size": 2000,
  "cursor": "1704067200000000000",   // omit for first request
  // For "recent" mode (web UI):
  "limit": 100,
  // Optional filters (all modes):
  "services": ["social", "ping"],
  "subjects": ["nara-a", "nara-b"]
}

Request (legacy - deprecated):

{
  "from": "requester-name",
  "services": ["social", "ping"],
  "subjects": ["nara-a", "nara-b"],
  "since_time": 1704067200,
  "slice_index": 0,
  "slice_total": 3,
  "max_events": 2000
}

Response:

{
  "from": "responder-name",
  "events": [...],
  "next_cursor": "1704167200000000000",  // only for "page" mode
  "ts": 1704067260,
  "sig": "base64-ed25519-signature"
}

Mode-specific behavior:

  • sample: Returns decay-weighted random sample, max sample_size events
  • page: Returns oldest events after cursor, max page_size events, includes next_cursor
  • recent: Returns newest events, max limit events

GET /ping - lightweight latency probe for Vivaldi coordinates.

Response:

{
  "t": 1704067260,
  "from": "responder-name"
}

Checkpoints are created through MQTT-based consensus rather than HTTP endpoints:

Topics:

  • nara/checkpoint/propose - Nara broadcasts proposal about itself
  • nara/checkpoint/vote - Other naras respond with their observations

Flow:

  1. Proposer broadcasts proposal with their values
  2. Voters respond within 5-minute window (APPROVE or REJECT)
  3. If consensus reached → checkpoint event created and broadcast
  4. If not → Round 2 with trimmed mean values
  5. Final checkpoint requires a minimum of 2 voters besides the proposer