Sync
Unified Sync Backbone
Nara uses a unified event store and gossip protocol to share information across the network. This document explains how naras discover what’s happening, form opinions, and stay in sync.
Mental Model: Waking Up From Holiday
Imagine a nara coming online after being offline for a while. It’s like someone returning from vacation:
- Says hello publicly (plaza broadcast) - “hey everyone, I’m back!”
- Asks friends privately (mesh DMs) - “what did I miss? give me the info dump”
- Forms own opinion from gathered data - personality shapes how they interpret events
The nara network is a collective hazy memory. No single nara has the complete picture. Events spread organically through gossip. Each nara’s understanding of the world (clout scores, network topology, who’s reliable) emerges from the events they’ve collected and how their personality interprets them.
Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                     SyncLedger (Event Store)                    │
│                                                                 │
│      Events: [observation, checkpoint, ping, social, ...]       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
                  │
        ┌─────────┼─────────┬──────────┐
        ▼         ▼         ▼          ▼
  ┌──────────┐ ┌──────┐ ┌─────────┐ ┌─────────┐
  │  Clout   │ │ RTT  │ │ Restart │ │ Uptime  │
  │Projection│ │Matrix│ │  Count  │ │  Total  │
  └──────────┘ └──────┘ └─────────┘ └─────────┘
```

The SyncLedger is the unified event store. It holds all syncable events regardless of type. Projections are derived views computed from events - like clout scores (who’s respected), the RTT matrix (network latency map), or restart counts (checkpoint + unique StartTimes).
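To make the split between the event store and its projections concrete, here is a minimal Go sketch. The type and method names (SyncEvent, SyncLedger, DeriveRestartCounts) are illustrative assumptions, not the actual API.

```go
// Minimal sketch: a unified event store plus one derived projection.
// Field and method names here are illustrative assumptions.
type SyncEvent struct {
    ID         string // content hash or UUID
    Service    string // "observation", "checkpoint", "ping", "social", ...
    Subject    string // which nara the event is about
    StartTime  int64  // populated for restart/first-seen observations
    Importance int    // 1=Casual, 2=Normal, 3=Critical
    Timestamp  int64  // nanoseconds, as used by sync cursors
}

type SyncLedger struct {
    events []SyncEvent
}

// Projections never mutate the ledger; they are recomputed from events.
// Example: restart activity derived from unique StartTimes per subject.
func (l *SyncLedger) DeriveRestartCounts() map[string]int {
    seen := map[string]map[int64]bool{}
    for _, e := range l.events {
        if e.Service != "observation" || e.StartTime == 0 {
            continue
        }
        if seen[e.Subject] == nil {
            seen[e.Subject] = map[int64]bool{}
        }
        seen[e.Subject][e.StartTime] = true
    }
    counts := map[string]int{}
    for subject, starts := range seen {
        counts[subject] = len(starts)
    }
    return counts
}
```

The key property is that projections are always recomputed from events rather than stored as authoritative state.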
Event Types
Observation Events (service: "observation")
Network state consensus events - these replace newspaper broadcasts for tracking restarts and online status:
- restart: Detected a nara restarted (StartTime, restart count)
- first-seen: First time observing a nara (seeds StartTime)
- status-change: Online/Offline/Missing transition
These events use importance levels (1-3) and have anti-abuse protection (per-pair compaction, rate limiting, deduplication). Critical for distributed consensus on network state.
Social Events (service: "social")
Social interactions between naras:
- tease: One nara teasing another (for high restarts, comebacks, etc.)
- observed: One nara reporting a tease they saw elsewhere
- observation: Legacy system observations (online/offline, journey events)
- service: Helpful actions (like stash-stored) that award clout to the actor
- gossip: Hearsay about what happened
Ping Observations (service: "ping")
Network latency measurements:
- observer: Who took the measurement
- target: Who was measured
- rtt: Round-trip time in milliseconds
Ping observations are community-driven. When nara A pings nara B, that measurement spreads through the network. Other naras can use this data to build their own picture of network topology.
Checkpoint Events (service: "checkpoint")
Multi-party attested historical snapshots:
- subject: Who the checkpoint is about
- subject_id: Nara ID (for indexing)
- as_of_time: When the snapshot was taken
- observation: Embedded NaraObservation containing:
  - restarts: Historical restart count at checkpoint time
  - total_uptime: Verified online seconds at checkpoint time
  - start_time: When network first saw this nara (FirstSeen)
- round: Consensus round (1 or 2)
- voter_ids: Nara IDs who voted for these values
- signatures: Ed25519 signatures from voters
Checkpoints anchor historical data that predates event-based tracking. They require multiple voters for trust. Restart count is derived as: checkpoint.Observation.Restarts + count(unique StartTimes after checkpoint). Checkpoints are pruned when their subject is a ghost nara (offline 7+ days for established naras, 24h for newcomers), but checkpoints where a ghost nara was only a voter/emitter are kept.
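A hedged sketch of the restart-count derivation above. The Checkpoint shape (Subject, AsOfTime, embedded Observation.Restarts) and the SyncEvent type from the earlier sketch are assumptions for illustration.

```go
// Restart count = checkpoint.Observation.Restarts
//               + count(unique StartTimes observed after the checkpoint).
// Checkpoint and SyncEvent shapes here are assumptions for illustration.
type Checkpoint struct {
    Subject     string
    AsOfTime    int64
    Observation struct{ Restarts int }
}

func DeriveRestarts(cp Checkpoint, events []SyncEvent) int {
    uniqueStarts := map[int64]bool{}
    for _, e := range events {
        if e.Service != "observation" || e.Subject != cp.Subject {
            continue
        }
        if e.StartTime > cp.AsOfTime { // only StartTimes after the snapshot count
            uniqueStarts[e.StartTime] = true
        }
    }
    return cp.Observation.Restarts + len(uniqueStarts)
}
```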
Transport Layer
Events flow through different channels depending on their nature:
Plaza (MQTT Broadcast)
The public square - everyone sees these messages.
- nara/plaza/hey_there - announcing presence
- nara/plaza/chau - graceful shutdown
- nara/plaza/journey_complete - journey completions
DMs (Mesh HTTP)
Private point-to-point communication for catching up.
- POST /sync - request events from a neighbor
- Used for boot recovery (catching up on missed events)
- More efficient than broadcast for bulk data
Newspaper (MQTT Per-Nara)
Status broadcasts - current state, not history.
- nara/newspaper/{name} - periodic status updates
- Contains current flair, buzz, coordinates, etc.
Zines (P2P Gossip)
Hand-to-hand event distribution - the underground press.
A zine is a small batch of recent events (~5 minutes worth) passed directly between naras via mesh HTTP. Like underground zines passed hand-to-hand at punk shows, these spread organically through the network without central coordination.
Every 30-300 seconds (personality-based):

1. Create zine from recent events
2. Pick 3-5 random mesh neighbors
3. POST /gossip/zine with your zine
4. Receive their zine in response (bidirectional!)
5. Merge new events into SyncLedger

Why Zines?
- O(log N) bandwidth: Epidemic spread instead of O(N²) broadcast
- Decentralized: No MQTT broker bottleneck
- Redundant paths: Multiple naras carrying same news
- Organic propagation: Events spread like rumors, not announcements
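A rough sketch of one gossip round under the schedule above. Helper names like RecentEvents, RandomNeighbors, and postZine are assumptions standing in for whatever the mesh layer actually exposes (imports: time, math/rand).

```go
// One zine exchange: share ~5 minutes of recent events with a few random
// neighbors and merge whatever they hand back. Helper names are assumed.
func gossipRound(ledger *SyncLedger, mesh *Mesh) {
    zine := ledger.RecentEvents(5 * time.Minute)
    peers := mesh.RandomNeighbors(3 + rand.Intn(3)) // 3-5 peers per round
    for _, peer := range peers {
        theirZine, err := postZine(peer, zine) // POST /gossip/zine (bidirectional)
        if err != nil {
            continue // unreachable peers are fine; redundant paths cover the gap
        }
        ledger.Merge(theirZine) // duplicates are dropped during merge
    }
}
```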
Stash Exchange (HTTP State Backup)
Separate distribution system - encrypted state backup with commitment model.
Stash is encrypted arbitrary JSON data that naras store for each other based on mutual promises. Stash distribution happens via HTTP mesh with its own trigger system:
Distribution triggers (finding confidants):

1. Immediate: When stash data is updated (via /api/stash/update)
2. Periodic: Every 5 minutes (maintenance + health checks)
3. Reactive: When a confidant goes offline (immediate replacement)
When distributing (to find confidants):

1. Pick best nara first (high memory + uptime)
2. Pick remaining confidants randomly (avoid hotspots)
3. POST /stash/store with encrypted stash to each
4. Peer verifies signature & timestamp
5. Peer checks capacity:
   - If space: Accepts (creates commitment) ✓
   - If full: Rejects (at_capacity) ✗
6. If accepted: Peer tracks commitment, owner adds to confirmed confidants
7. Keep trying until target count (3) reached or max attempts
On boot (recovery):

1. Owner broadcasts hey-there event (MQTT)
2. Confidants detect and wait 2s
3. POST /stash/push owner’s stash back via HTTP
4. Owner receives, decrypts, uses newest

Why Separate Stash System?
- Clear semantics: Store, retrieve, push, and delete are distinct operations
- Timestamp security: All requests signed with timestamp (replay protection)
- No disk writes: Pure memory storage (ephemeral by design)
- Commitment-based: Accept/reject model (not LRU cache)
- Mutual promises: Both sides know who’s storing what
- Health monitoring: Owners detect offline confidants, find replacements
- Ghost pruning: Evict stashes for naras offline 7+ days
- Memory-aware: Storage limits based on memory mode (5/20/50)
- Hybrid selection: 1 best confidant (reliability) + 2 random (distribution)
- Immediate triggers: Reacts to stash updates and offline confidants within seconds
- Boot recovery: Hey-there events trigger HTTP stash push back
Integration with mesh:
- Stash uses same HTTP mesh infrastructure as zines
- Same Ed25519 authentication mechanism
- Independent distribution system (not tied to zine gossip)
- Complements event sync with state backup
- Unlike events (spread to all), stash is targeted (3 confidants)
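The hybrid confidant selection (1 best peer + 2 random) could look roughly like this; the Candidate fields and the scoring rule are assumptions for illustration (imports: sort, math/rand).

```go
// Pick confidants: best-first for reliability, then random to avoid hotspots.
// Candidate fields and the scoring rule are assumptions for illustration.
type Candidate struct {
    Name     string
    MemoryMB int   // advertised memory capacity
    Uptime   int64 // seconds online
}

func pickConfidants(candidates []Candidate, target int) []Candidate {
    if len(candidates) == 0 || target <= 0 {
        return nil
    }
    sort.Slice(candidates, func(i, j int) bool {
        if candidates[i].MemoryMB != candidates[j].MemoryMB {
            return candidates[i].MemoryMB > candidates[j].MemoryMB
        }
        return candidates[i].Uptime > candidates[j].Uptime
    })
    picked := []Candidate{candidates[0]} // 1. best nara first
    rest := append([]Candidate{}, candidates[1:]...)
    rand.Shuffle(len(rest), func(i, j int) { rest[i], rest[j] = rest[j], rest[i] })
    for _, c := range rest { // 2. fill the remaining slots randomly
        if len(picked) >= target {
            break
        }
        picked = append(picked, c)
    }
    return picked
}
```

Called with target = 3 this yields the 1 + 2 split described above; each pick is then offered a POST /stash/store, and only peers that accept count as confirmed confidants.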
Transport Modes:
Naras can operate in different modes, like preferring different social networks:
- MQTT Mode (Traditional): Newspapers broadcast to all via MQTT plaza
  - High bandwidth but guaranteed delivery
  - Good for small networks (<100 naras)
- Gossip Mode (P2P-only): Zines spread hand-to-hand via mesh
  - Logarithmic bandwidth scaling
  - Requires mesh connectivity
  - Best for large networks (>1000 naras)
- Hybrid Mode (Default): Both MQTT and Gossip simultaneously
  - MQTT for discovery + time-critical announcements
  - Gossip for bulk event distribution
  - Most resilient option
Applications stay transport-agnostic:
```go
// Publishing - same code regardless of transport
network.local.SyncLedger.AddEvent(event)

// Subscribing - events arrive via MQTT or gossip, app doesn't care
events := network.local.SyncLedger.GetEvents()
```

The transport layer automatically picks up events from the SyncLedger and spreads them. Apps never call transport-specific functions like publishToMQTT() or gossipEvent().
Mixed networks work seamlessly:
- MQTT-only naras can coexist with gossip-only naras
- Hybrid naras bridge the two worlds
- Events deduplicated automatically (same event via multiple paths)
It’s like some people use Twitter (MQTT - broadcast to all), some use Mastodon (gossip - federated P2P), and some use both - but they all see the same posts (SyncEvents).
Data Channel Reference
Not all data flows through all channels. Understanding what goes where is critical for network resilience.
MQTT Newspapers Only (NaraStatus)
These fields are broadcast via nara/newspaper/{name} and are NOT in the event store or zines. If MQTT stopped, this data would be lost:
| Field | Type | Description |
|---|---|---|
| Trend | string | Current trend name (e.g., “robin-style”) |
| TrendEmoji | string | Trend emoji (e.g., "🔥") |
| HostStats.Uptime | uint64 | System uptime in seconds |
| HostStats.LoadAvg | float64 | System load average |
| HostStats.MemAllocMB | uint64 | Current heap allocation in MB |
| HostStats.MemSysMB | uint64 | Total memory from OS in MB |
| HostStats.MemHeapMB | uint64 | Heap memory (in use + free) in MB |
| HostStats.MemStackMB | uint64 | Stack memory in MB |
| HostStats.NumGoroutines | int | Active goroutines |
| HostStats.ProcCPUPercent | float64 | CPU usage of this process (%) |
| Flair | string | Derived status indicator |
| LicensePlate | string | Visual identifier |
| Chattiness | int64 | Posting frequency preference |
| Buzz | int | Current activity level |
| Personality | struct | Agreeableness, Sociability, Chill (0-100) |
| Version | string | Software version |
| PublicUrl | string | Public HTTP endpoint |
| Coordinates | struct | Vivaldi network coordinates |
| TransportMode | string | "mqtt", "gossip", or "hybrid" |
| EventStoreTotal | int | Total events in the local event store |
| EventStoreByService | map | Event counts per service (social, ping, observation, etc.) |
| EventStoreCritical | int | Count of critical events |
| StashStored | int | Number of stashes stored for others |
| StashBytes | int64 | Total bytes of stash data stored |
| StashConfidants | int | Number of confidants storing my stash |
Newspapers are current state snapshots, not history. They answer “what is this nara like right now?” rather than “what happened?”
Stash metrics help monitor distributed storage health - who’s storing what, capacity usage, and confidant network status.
Event Store (SyncLedger + Zines)
These survive in the distributed event log and spread via zine gossip:
| Service | Key Data | Purpose |
|---|---|---|
| chau | From, PublicKey | Identity/discovery |
| observation | StartTime, RestartNum, LastRestart, OnlineState | Network state consensus |
| ping | Observer, Target, RTT | Latency measurements |
| social | Actor, Target, Reason | Teases, helpful services, and interactions |
| seen | Observer, Subject, Via | Lightweight presence detection |
Implications
Events are state transitions - they record what happened, not current state.

- Trend tracking requires MQTT: No events for trend join/leave. To track trends historically, we’d need to add trend events.
- Host metrics are ephemeral: Uptime and load only exist in the moment. No historical record.
- Personality is broadcast, not recorded: If you miss a newspaper, you don’t know a nara’s personality until the next broadcast.
- Coordinates require newspapers: Vivaldi coordinates only spread via status broadcasts.
Mesh Discovery (Gossip-Only Mode)
In gossip-only mode (no MQTT), naras discover each other by scanning the mesh network:
Every 5 minutes:

1. Scan mesh subnet (100.64.0.1-254)
2. Try GET /ping on each IP
3. If successful, decode {"from": "nara-name", "t": timestamp}
4. Add discovered nara to neighborhood with mesh IP
5. Mark as ONLINE in observations

Why IP scanning?
- No dependency on MQTT for discovery
- Works in pure P2P networks
- Automatically finds new naras joining the mesh
- Minimal overhead (1 scan per 5 minutes)
Discovery flow:
- Nara A boots in gossip-only mode
- After 35 seconds, runs initial mesh scan
- Discovers naras B, C, D via /ping responses
- Adds them to neighborhood with mesh IPs
- Starts gossiping zines with discovered neighbors
- Periodic re-scans every 5 minutes to find new peers
Note: In hybrid mode, MQTT handles discovery and gossip is used only for event distribution. Discovery scans only run in pure gossip mode.
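A simplified scan loop under the assumptions above (the 100.64.0.x mesh range and the GET /ping response shape); a real implementation would add timeouts and concurrency (imports: net/http, encoding/json, fmt).

```go
// Scan the mesh subnet for naras by probing GET /ping on every address.
// The address range and response shape follow the flow described above.
func scanMesh(client *http.Client) map[string]string { // nara name -> mesh IP
    found := map[string]string{}
    for host := 1; host <= 254; host++ {
        ip := fmt.Sprintf("100.64.0.%d", host)
        resp, err := client.Get("http://" + ip + "/ping")
        if err != nil {
            continue // nothing listening at this address
        }
        var ping struct {
            From string `json:"from"`
            T    int64  `json:"t"`
        }
        if json.NewDecoder(resp.Body).Decode(&ping) == nil && ping.From != "" {
            found[ping.From] = ip // add to neighborhood, mark ONLINE elsewhere
        }
        resp.Body.Close()
    }
    return found
}
```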
Sync Mechanism Comparison
Nara uses three complementary sync mechanisms that form a layered system. Each serves a different purpose and operates at a different frequency:
| Mechanism | Frequency | Time Window | Purpose |
|---|---|---|---|
| Boot Recovery | Once at startup | All available (up to 10k events) | Catch up after being offline |
| Zine Gossip | Every 30-300s | Last 5 minutes | Rapid organic event propagation |
| Background Sync | Every ~30 min | Last 24 hours | Fill gaps from personality filtering |
Why Three Mechanisms?
Each mechanism handles a different failure mode:
- Boot Recovery solves the cold-start problem. A nara waking up after hours or days needs bulk data fast: 10,000 events from multiple neighbors, interleaved to avoid duplicates.
- Zine Gossip provides continuous, low-latency propagation. Events spread epidemically (O(log N) hops to reach all naras) without central coordination. But zines only carry the last 5 minutes of events, so they can’t recover from longer outages.
- Background Sync acts as a safety net. Personality filtering means some naras drop events they find uninteresting. A high-chill nara might ignore a tease, but that tease could be important context for clout calculations. Background sync queries specifically for observation events (restarts, first-seen, status-change) with importance ≥2, ensuring critical events survive personality filtering.
When Each Runs
```
Boot:
 └─→ Boot Recovery (bulk sync from neighbors)
      └─→ Zine Gossip starts (every 30-300s based on chattiness)
           └─→ Background Sync kicks in (every ~30 min)
```

Bandwidth Characteristics
At 5000 nodes:
| Mechanism | Network Load | Notes |
|---|---|---|
| Boot Recovery | Burst at startup | ~10k events per booting nara |
| Zine Gossip | ~83 KB/s total | O(log N) epidemic spread |
| Background Sync | ~250 req/min | ~1 request per nara every 6 min |
Compare to the old newspaper broadcast system: 68MB/s - 1GB/s at scale.
Failure Scenarios
| Scenario | Which Mechanism Helps |
|---|---|
| Nara offline for hours | Boot Recovery |
| Network partition heals | Background Sync |
| Missed event due to personality filter | Background Sync |
| Real-time event propagation | Zine Gossip |
| New nara joins network | Boot Recovery + Mesh Discovery |
Unified Event API
The sync API supports three retrieval modes designed for different use cases. Each mode reflects a different relationship with the network’s collective memory.
Mode: sample (Boot Recovery - Organic Memory)
Philosophy: Each nara returns a decay-weighted sample of their memory, not everything they know. This creates a hazy collective memory where recent events are clearer and old events naturally fade.
{ "from": "requester", "mode": "sample", "sample_size": 5000}How sampling works:
- Recent events more likely to be included (clearer memory)
- Old events less likely but not zero (fading memory)
- Critical events (checkpoints, hey_there) always included
- Events the nara emitted or observed firsthand have higher weight
Decay function: Exponential decay with ~30-day half-life
- Events < 1 day old: ~100% inclusion probability
- Events ~1 week old: ~80% inclusion probability
- Events ~1 month old: ~50% inclusion probability
- Events ~6 months old: ~10% inclusion probability
When to use: Boot recovery, where incomplete but representative data is acceptable.
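A sketch of the decay-weighted inclusion decision, assuming a pure exponential with a ~30-day half-life plus a small probability floor so old events fade without ever reaching zero; the floor value and the nanosecond timestamp unit are assumptions (imports: math, math/rand, time).

```go
// Inclusion probability: exponential decay with a ~30-day half-life,
// clamped to a small floor so old memories fade but never vanish entirely.
func inclusionProbability(eventTime, now time.Time) float64 {
    const halfLife = 30 * 24 * time.Hour
    const floor = 0.05 // assumed floor: keep a sliver of very old memory
    age := now.Sub(eventTime)
    p := math.Pow(2, -age.Hours()/halfLife.Hours())
    return math.Max(p, floor)
}

// Critical events (checkpoints, hey_there) bypass sampling entirely;
// everything else is kept with probability p.
func sampleEvent(e SyncEvent, now time.Time) bool {
    if e.Importance >= 3 {
        return true
    }
    return rand.Float64() < inclusionProbability(time.Unix(0, e.Timestamp), now)
}
```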
Mode: page (Complete Retrieval)
Philosophy: Deterministic, cursor-based pagination for complete retrieval. Returns events oldest first so pagination works correctly.
{ "from": "requester", "mode": "page", "page_size": 2000, "cursor": "1704067200000000000"}Response includes:
{ "events": [...], "next_cursor": "1704167200000000000", "from": "responder", "sig": "..."}Pagination flow:
- First request: cursor omitted or empty
- Process events, save next_cursor
- Next request: use saved cursor
- Repeat until next_cursor is empty (no more events)

When to use: Backup (need ALL events), checkpoint sync (need complete checkpoint history).
Mode: recent (Web UI)
Philosophy: Simple retrieval of the most recent N events. No pagination needed.
{ "from": "requester", "mode": "recent", "limit": 100}When to use: Web UI event browsing, recent activity feeds.
Use Case Matrix
| Use Case | Mode | Who to Ask | Completeness |
|---|---|---|---|
| Boot Recovery | sample | All available neighbors | Intentionally lossy (hazy memory) |
| Checkpoint Sync | page | 5 neighbors (redundancy) | Complete for checkpoints |
| Backup | page | ALL naras | Complete union |
| Web UI | recent | Self | Recent subset |
Legacy Compatibility
For backward compatibility, requests without a mode field continue to work using the legacy parameters (slice_index, slice_total, max_events). These are deprecated and will be removed in a future version.
Sync Protocol
Section titled “Sync Protocol”Boot Recovery (Organic Hazy Memory)
When a nara boots, it reconstructs its memory by asking neighbors: “What do you remember?”
The number of API calls is determined by the nara’s memory capacity:
| Memory Mode | Capacity | Page Size | API Calls |
|---|---|---|---|
| Short | ~5k | 1k | 5 calls |
| Normal | ~50k | 5k | 10 calls |
| Hog | ~80k | 5k | 16 calls |
Algorithm:
1. Announce presence on plaza (hey_there)
2. Discover available mesh neighbors
3. Calculate: calls_needed = my_capacity / page_size
4. Distribute calls across ALL available neighbors (round-robin)
5. Each call: mode="sample", sample_size=page_size
6. If a call fails, retry with a different neighbor
7. Continue until calls_needed successful fetches
8. Merge all events into SyncLedger

Examples:
- Hog mode (16 calls), 10 neighbors → each neighbor gets 1-2 calls
- Hog mode (16 calls), 3 neighbors → each neighbor gets ~5 calls
- Short mode (5 calls), 20 neighbors → 5 different neighbors each get 1 call
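A sketch of the call-distribution step (steps 3-4 above); fetchSample stands in for a POST /sync with mode="sample" and is an assumption, as is the Merge helper.

```go
// Spread calls_needed = capacity / page_size sample requests round-robin
// across whatever neighbors are available, retrying failures elsewhere.
func bootRecover(ledger *SyncLedger, neighbors []string, capacity, pageSize int) {
    callsNeeded := capacity / pageSize
    for i := 0; i < callsNeeded && len(neighbors) > 0; i++ {
        neighbor := neighbors[i%len(neighbors)] // round-robin
        events, err := fetchSample(neighbor, pageSize)
        if err != nil {
            // Retry once with a different neighbor on failure.
            neighbor = neighbors[(i+1)%len(neighbors)]
            if events, err = fetchSample(neighbor, pageSize); err != nil {
                continue
            }
        }
        ledger.Merge(events)
    }
}
```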
Each neighbor returns their perspective - a decay-weighted sample that includes:
- Events they emitted (strong memory)
- Events they observed firsthand (strong memory)
- Recent events (clear memory)
- Old events (fading, probabilistic inclusion)
- Critical events like checkpoints (always included)
The collective memory emerges:
- Events appearing in multiple samples = strong consensus (many naras remember)
- Events appearing in one sample = weak memory (only one neighbor remembers)
- Events in no samples = forgotten (and that’s OK)
This is intentional. The network doesn’t maintain perfect state - it maintains organic, living memory that naturally fades and reconstructs based on who you talk to.
After Boot: Background Sync (Organic Memory Strengthening)
Once a nara is online, it watches events in real-time via the MQTT plaza. However, with personality-based filtering and hazy memory, important events can be missed. Background sync helps the collective memory stay strong.
Schedule:
- Every ~30 minutes (±5min random jitter)
- Initial random delay (0-5 minutes) to spread startup
- Query 1-2 random online neighbors per sync
Focus on Important Events:
{ "from": "requester", "services": ["observation"], // Observation events only "since_time": "<24 hours ago>", "max_events": 100, "min_importance": 2 // Only Normal and Critical}This lightweight sync helps catch up on critical observation events (restarts, first-seen) that may have been dropped by other naras’ personality filters.
Network Load (5000 nodes):
- 250 sync requests/minute network-wide
- ~1 incoming request per nara every 6 minutes
- ~20KB payload per request
- Total: 83 KB/s (vs 68MB/s - 1GB/s with old newspaper system)
Why it’s needed:
- Event persistence: Critical events survive even if some naras drop them
- Gradual propagation: Events spread organically through repeated syncs
- Personality compensation: High-chill naras catch up on events they filtered
- Network healing: Partitioned nodes eventually converge
Interleaved Slicing (Deprecated)
Note: Interleaved slicing is deprecated in favor of the mode-based API. New code should use mode: "sample" for boot recovery. The slicing parameters (slice_index, slice_total) remain for backward compatibility but will be removed in a future version.
Legacy slicing divided events across neighbors to avoid duplicates:
```
Neighbor 0 (slice 0/3): events 0, 3, 6, 9, 12...
Neighbor 1 (slice 1/3): events 1, 4, 7, 10, 13...
Neighbor 2 (slice 2/3): events 2, 5, 8, 11, 14...
```

The new sample mode replaces this with probabilistic sampling that naturally handles deduplication through the merge process.
Signed Blocks
Sync responses are cryptographically signed to ensure authenticity:
```go
type SyncResponse struct {
    From      string      `json:"from"`   // Who sent this
    Events    []SyncEvent `json:"events"` // The events
    Timestamp int64       `json:"ts"`     // When generated
    Signature string      `json:"sig"`    // Ed25519 signature
}
```

Signing: The responder hashes (from + timestamp + events_json) and signs with their private key.
Verification: The receiver looks up the sender’s public key (from Status.PublicKey) and verifies the signature before merging events.
This prevents:
- Impersonation (can’t fake being another nara)
- Tampering (can’t modify events in transit)
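A hedged sketch of the sign/verify flow, assuming the digest is SHA-256 over the concatenation (from + timestamp + events_json); the exact digest construction is an assumption, while the Ed25519 calls are standard library.

```go
import (
    "crypto/ed25519"
    "crypto/sha256"
    "encoding/base64"
    "encoding/json"
    "fmt"
)

// syncDigest hashes (from + timestamp + events_json); the layout is assumed.
func syncDigest(from string, ts int64, events []SyncEvent) []byte {
    eventsJSON, _ := json.Marshal(events)
    sum := sha256.Sum256([]byte(fmt.Sprintf("%s%d%s", from, ts, eventsJSON)))
    return sum[:]
}

// Responder: sign the digest with the nara's private key.
func signResponse(priv ed25519.PrivateKey, resp *SyncResponse) {
    sig := ed25519.Sign(priv, syncDigest(resp.From, resp.Timestamp, resp.Events))
    resp.Signature = base64.StdEncoding.EncodeToString(sig)
}

// Receiver: verify against the sender's published key (Status.PublicKey)
// before merging any events.
func verifyResponse(pub ed25519.PublicKey, resp SyncResponse) bool {
    sig, err := base64.StdEncoding.DecodeString(resp.Signature)
    if err != nil {
        return false
    }
    return ed25519.Verify(pub, syncDigest(resp.From, resp.Timestamp, resp.Events), sig)
}
```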
Ping Diversity
To prevent the event store from being saturated with stale ping data while keeping useful history, stored pings are limited to 5 per observer→target pair (configurable via MaxPingsPerTarget).
When adding a new ping from A→B:
- If A→B has fewer than 5 entries, add it
- If A→B already has 5 entries, evict the oldest and add the new one
- This keeps recent history for trend detection
This keeps the ping data diverse across the network:
- 5 naras = max 100 ping entries (5 per pair × 20 pairs)
- 100 naras = max ~50,000 ping entries
- 5000 naras = bounded by ledger max (50k events) and time-based pruning
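A sketch of the per-pair cap, assuming pings are stored in arrival order under an observer→target key; the type and constant names are illustrative.

```go
// Keep at most MaxPingsPerTarget entries per observer→target pair,
// evicting the oldest when a new measurement arrives.
const MaxPingsPerTarget = 5

type PingObs struct {
    Observer string
    Target   string
    RTTms    float64
    At       int64
}

func addPing(store map[string][]PingObs, obs PingObs) {
    key := obs.Observer + "→" + obs.Target
    entries := append(store[key], obs)
    if len(entries) > MaxPingsPerTarget {
        entries = entries[1:] // drop the oldest, keep recent history
    }
    store[key] = entries
}
```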
AvgPingRTT Seeding from Historical Data
When a nara restarts or receives ping observations from neighbors, it seeds its exponential moving average (AvgPingRTT) from historical ping data:
- On boot recovery: After syncing events from neighbors, calculate average RTT from recovered ping observations
- During background sync: When receiving ping events from neighbors, recalculate averages for targets with uninitialized AvgPingRTT
- Only if uninitialized: Seeding only happens when AvgPingRTT == 0 (never overwrites existing values)
This provides immediate RTT estimates without waiting for new pings, improving Vivaldi coordinate accuracy and proximity-based routing from the moment a nara comes online.
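A sketch of the seeding rule, reusing the PingObs type from the previous sketch; the Neighbor fields are assumptions.

```go
// Seed AvgPingRTT from recovered ping history, but only when the moving
// average has not been initialized yet (zero value).
type Neighbor struct {
    Name       string
    AvgPingRTT float64 // ms; zero means "not yet initialized"
}

func seedAvgRTT(neighbor *Neighbor, history []PingObs) {
    if neighbor.AvgPingRTT != 0 {
        return // never overwrite a live moving average
    }
    var sum float64
    var n int
    for _, p := range history {
        if p.Target == neighbor.Name {
            sum += p.RTTms
            n++
        }
    }
    if n > 0 {
        neighbor.AvgPingRTT = sum / float64(n)
    }
}
```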
Anti-Abuse Mechanisms
The observation event system includes four layers of protection against malicious or misconfigured naras:
1. Per-Pair Compaction
Purpose: Prevent one hyperactive observer from saturating storage
- Maximum 20 observation events per observer→subject pair
- Oldest events dropped when limit exceeded
- Example: If alice has 20 observations about bob, adding a 21st evicts the oldest
2. Time-Window Rate Limiting
Purpose: Block burst flooding attacks
- Maximum 10 events about same subject per 5-minute window
- Blocks malicious nara claiming restart every second
- Example: After 10 “bob restarted” events in 5 minutes, further events rejected
- Window slides forward automatically
3. Content-Based Deduplication
Purpose: Prevent redundant storage when multiple observers report the same event
- Hash restart events by (subject, restart_num, start_time)
- Multiple observers reporting same restart = single stored event
- Keeps earliest observer for attribution
- Example: 10 naras report “lisa restarted (1137)” → stored once
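A sketch of the dedup key; the separator and hash choice are assumptions, and what matters is only that identical (subject, restart_num, start_time) tuples map to the same key (imports: crypto/sha256, encoding/hex, fmt).

```go
// Content hash for restart deduplication: the same (subject, restart_num,
// start_time) always yields the same key, so repeated reports collapse.
func restartDedupKey(subject string, restartNum int, startTime int64) string {
    sum := sha256.Sum256([]byte(fmt.Sprintf("restart|%s|%d|%d", subject, restartNum, startTime)))
    return hex.EncodeToString(sum[:])
}
```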
4. Importance-Aware Pruning
Purpose: Ensure critical events survive longest
- Global ledger pruning respects importance levels:
  - Drop Casual (importance=1) first
  - Drop Normal (importance=2) second
  - Keep Critical (importance=3) longest
- Restart and first-seen events marked Critical
- Survives global MaxEvents pruning
Combined Protection
At 5000 nodes with 50 abusive naras flooding events:
- Layer 2 blocks flood after 10 events/5min per subject ✓
- Layer 1 limits each attacker to 20 events per victim ✓
- Layer 3 deduplicates coordinated attack ✓
- Layer 4 preserves critical events under pressure ✓
Result: Network remains functional with 1% malicious nodes
Scale Considerations (5-5000 Naras)
The sync system is designed to scale:
- Bulk sync at boot only: no ongoing bulk-sync overhead after startup (steady state is just zine gossip and lightweight background sync)
- Embrace incompleteness: No one has all events, and that’s OK
- Recency over completeness: Recent events matter more
- Diversify sources: Query multiple peers for different perspectives
- Self-throttling: Ping budget doesn’t grow with network size
Personality-Aware Processing
While the SyncLedger stores events neutrally, each nara interprets them subjectively based on personality:
Filtering on Add
Not all events are meaningful to every nara. When adding social events, personality determines what gets stored:
- High Chill (>70): Ignores random jabs
- Very High Chill (>85): Only keeps significant events (comebacks, high-restarts)
- High Agreeableness (>80): Filters out “trend-abandon” drama
- Low Sociability (<30): Less interested in others’ drama
Clout Calculation
Clout scores are subjective - the same events produce different clout for different observers:
```go
clout := ledger.DeriveClout(observerSoul, observerPersonality)
```

The TeaseResonates() function uses the observer’s soul to deterministically decide if a tease was good or cringe. Same event, different reactions.
Time Decay
Events fade over time, but personality affects memory:
- Low Chill: Holds grudges longer (up to 50% longer half-life)
- High Chill: Lets things go faster
- High Sociability: Remembers social events longer
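A sketch of how personality could stretch or shrink the memory half-life, reusing the Personality struct from the filtering sketch; the base half-life and all scaling factors are assumptions.

```go
// Adjust the decay half-life per observer: low chill holds grudges up to
// ~50% longer, high chill forgets faster, high sociability keeps social
// events around longer. All scaling factors here are assumptions.
func memoryHalfLife(base time.Duration, p Personality, isSocialEvent bool) time.Duration {
    factor := 1.0
    if p.Chill < 30 {
        factor += 0.5 * float64(30-p.Chill) / 30 // up to +50% at Chill=0
    }
    if p.Chill > 70 {
        factor -= 0.3 * float64(p.Chill-70) / 30 // lets things go faster
    }
    if isSocialEvent && p.Sociability > 70 {
        factor += 0.25 // remembers social events longer
    }
    return time.Duration(float64(base) * factor)
}
```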
Tease Counter
Separate from subjective clout, the tease counter is an objective metric:
```go
counts := ledger.GetTeaseCounts() // map[actor]int
```

This simply counts how many times each nara has teased others. No personality influence - pure numbers. Useful for leaderboards and identifying the most active teasers.
Event Flow Example
```
Nara A pings Nara B, measures 42ms RTT
  ↓
A's SyncLedger: [ping: A→B, 42ms]
  ↓
Nara C does mesh sync with A
  ↓
C's SyncLedger: [ping: A→B, 42ms]
  ↓
Nara D does mesh sync with C
  ↓
D's SyncLedger: [ping: A→B, 42ms]
  ...eventually reaches most naras
```

The measurement spreads organically. Different naras may receive it at different times. That’s fine - eventual consistency is the goal.
API Reference
Section titled “API Reference”POST /sync (or /events/sync)
Request events from a neighbor using the mode-based API.
Request (mode-based - recommended):
{ "from": "requester-name", "mode": "sample", // "sample", "page", or "recent"
// For "sample" mode (boot recovery): "sample_size": 5000,
// For "page" mode (backup, checkpoint sync): "page_size": 2000, "cursor": "1704067200000000000", // omit for first request
// For "recent" mode (web UI): "limit": 100,
// Optional filters (all modes): "services": ["social", "ping"], "subjects": ["nara-a", "nara-b"]}Request (legacy - deprecated):
{ "from": "requester-name", "services": ["social", "ping"], "subjects": ["nara-a", "nara-b"], "since_time": 1704067200, "slice_index": 0, "slice_total": 3, "max_events": 2000}Response:
{ "from": "responder-name", "events": [...], "next_cursor": "1704167200000000000", // only for "page" mode "ts": 1704067260, "sig": "base64-ed25519-signature"}Mode-specific behavior:
- sample: Returns a decay-weighted random sample, max sample_size events
- page: Returns oldest events after cursor, max page_size events, includes next_cursor
- recent: Returns newest events, max limit events
GET /ping
Lightweight latency probe for Vivaldi coordinates.
Response:
{ "t": 1704067260, "from": "responder-name"}Checkpoint Consensus (MQTT)
Checkpoints are created through MQTT-based consensus rather than HTTP endpoints:
Topics:
- nara/checkpoint/propose - Nara broadcasts proposal about itself
- nara/checkpoint/vote - Other naras respond with their observations
Flow:
- Proposer broadcasts proposal with their values
- Voters respond within 5-minute window (APPROVE or REJECT)
- If consensus reached → checkpoint event created and broadcast
- If not → Round 2 with trimmed mean values
- Final checkpoint requires a minimum of 2 voters (excluding the proposer)