
Riverse

Personal memory engine for AI — extracts, ages, and resolves your memories from every conversation. Runs locally. Works with any LLM.
v0.1.7 · Single-user recommended

RAG Memory vs. River Algorithm

Why similarity search isn't enough for a personal AI.

RAG / Existing AI Memory

Retrieve then Forget

  • Keyword / vector similarity retrieval — finds text that "looks similar", no understanding of what matters to you
  • No timeline — a fact from three years ago and one from yesterday are weighted equally
  • No causal reasoning — "you mentioned headaches" and "you started a new job" stay as isolated fragments, never connected
  • Equal weight recall — your passing comment about a movie ranks the same as your core life values
  • Cloud-dependent — every message round-trips to a remote model; your data lives on someone else's server

Riverse / River Algorithm

Understand then Evolve

  • Personal profile weighting — information is ranked by relevance to you, not by text similarity
  • Temporal evolution — recent events carry momentum; old patterns solidify or fade naturally over time
  • Causal inference — connects "started new job → stress → headaches → sleep problems" into a coherent chain
  • Confidence levels — observations progress from suspected → confirmed → established through multi-turn verification
  • Local-first — memory lives on your device; remote LLM is called only when local cognition can't solve the problem
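The interplay of personal weighting, confidence tiers, and temporal decay can be sketched as a single scoring function. This is a minimal illustration, not Riverse's actual implementation — the tier multipliers, the 30-day half-life, and the function name are all assumptions:

```python
import math

# Assumed confidence tiers and multipliers (illustrative values only)
CONFIDENCE_WEIGHT = {"suspected": 0.5, "confirmed": 0.8, "established": 1.0}

HALF_LIFE_DAYS = 30.0  # recency half-life: a 30-day-old memory scores half as high


def memory_score(relevance: float, confidence: str, age_days: float) -> float:
    """Rank a memory by personal relevance, confidence tier, and recency decay."""
    recency = math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
    return relevance * CONFIDENCE_WEIGHT[confidence] * recency


# A fresh suspected observation still outranks a year-old established fact
# when both are equally relevant, because recency carries momentum.
fresh = memory_score(relevance=0.9, confidence="suspected", age_days=1)
old = memory_score(relevance=0.9, confidence="established", age_days=365)
```

The key contrast with plain similarity search: relevance is only one factor, and it is modulated by how recently and how reliably the memory was observed.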

Scenario: You said "I've been having headaches lately" — two months after mentioning a job change
RAG Approach
I've been having headaches lately
I'm sorry to hear that. Here are some common causes of headaches: dehydration, screen time, stress... Would you like some tips?

Vector search matched "headache" against a generic medical Q&A corpus. No connection to your life context.

River Algorithm
I've been having headaches lately
You started the new management role about two months ago, and around the same time your sleep got worse — you mentioned staying up past 1am several times. The headaches might be linked. How's the work pressure been? Want me to track this as a health pattern?

Causal chain: job change → sleep disruption → headaches. Timeline-aware, personally weighted.

The Future: Your Device, Your Intelligence

Today's AI is cloud-centered — every thought goes through someone else's server. Riverse inverts this. Your personal device holds the memory, the profile, the context. It understands you locally. Only when it encounters a problem beyond local capacity does it formulate a precise question and make a single call to a remote LLM — like consulting a specialist, not outsourcing your brain.

Local Memory & Profile ──→ Local Cognition ──→ Can solve locally?
                                                      │
                                                      ├─ Yes → answer on-device
                                                      └─ No → describe problem → one remote call

This is the foundation for running a truly personal AI on phones, watches, and personal devices — where you own your data, your profile, and your intelligence.
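The routing idea above can be sketched in a few lines. The function names `local_answer` and `remote_llm` are hypothetical stand-ins for illustration, not Riverse's actual API:

```python
from typing import Callable, Optional


def answer(question: str,
           local_answer: Callable[[str], Optional[str]],
           remote_llm: Callable[[str], str]) -> str:
    """Local-first routing: try local cognition; only on failure, one remote call.

    The remote model never receives raw memory — only a self-contained
    problem description formulated on-device.
    """
    local = local_answer(question)
    if local is not None:
        return local  # solved on-device, nothing leaves the machine
    prompt = f"Answer concisely: {question}"  # precise, context-stripped question
    return remote_llm(prompt)


# Stub demo: local cognition knows one fact; everything else goes remote.
facts = {"favorite tea": "oolong"}
result_local = answer("favorite tea", facts.get, lambda p: "remote: " + p)
result_remote = answer("quantum gravity", facts.get, lambda p: "remote: " + p)
```

The design choice is that the remote call is the exception path, not the default — the common case costs zero network round-trips.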

River Algorithm

The core cognition model that makes Riverse different.

Conversations flow like water, key information settles like riverbed sediment, progressively upgrading from "suspected" to "confirmed" to "established" through multi-turn verification. Offline consolidation (Sleep) acts as the river's self-purification.

Conversation flows in ──→ Erosion ──→ Sedimentation ──→ Shapes cognition ──→ Keeps flowing
                           │              │                   │
                           │              │                   └─ Confirmed knowledge → stable bedrock
                           │              └─ Key info → observations, hypotheses, profiles
                           └─ Outdated beliefs washed away, replaced by new insights

Flow

Every conversation is water flowing through. The river never stops — understanding of you evolves continuously and never resets.

Sediment

Key information settles like silt: facts sink into profiles, emotions into observations, patterns into hypotheses. Confirmed knowledge sinks deeper.

Purify

Sleep is the river's self-purification — washing away outdated info, resolving contradictions, integrating fragments into coherent understanding.
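One piece of the purification pass — resolving contradictions so stale beliefs are washed away — can be sketched as a newest-wins merge. The observation schema here is assumed for illustration; Riverse's real store may differ:

```python
from datetime import datetime

# Each observation: (field, value, timestamp, confidence) — assumed schema.
observations = [
    ("sleep", "before midnight", datetime(2024, 1, 5), "confirmed"),
    ("sleep", "past 1am", datetime(2024, 3, 2), "suspected"),
    ("role", "engineer", datetime(2023, 6, 1), "established"),
    ("role", "manager", datetime(2024, 1, 10), "confirmed"),
]


def consolidate(obs):
    """Sleep-style pass: for contradictory fields the newest value wins;
    the outdated belief is washed away like stale sediment."""
    latest = {}
    for field, value, ts, conf in obs:
        if field not in latest or ts > latest[field][1]:
            latest[field] = (value, ts, conf)
    return {field: value for field, (value, ts, conf) in latest.items()}


profile = consolidate(observations)
```

A real consolidation would also merge fragments and promote confidence tiers; this shows only the contradiction-resolution step.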

Features

Everything you need for a truly personal AI.

Persistent Memory

Remembers across sessions. Builds a timeline-based profile that evolves with you.

Offline Consolidation

Processes conversations after they end — extracts insights, resolves contradictions, strengthens confirmed knowledge.

Multi-Modal Input

Text, voice, images — all understood natively via Whisper, GPT-4 Vision, and LLaVA.

Pluggable Tools

Finance tracking, health sync (Withings), web search, vision, TTS, and more.

YAML Skills

Create custom behaviors with simple YAML — trigger by keyword or cron schedule.
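A skill file might look roughly like this. Every field name below is a hypothetical illustration of the keyword/cron trigger idea, not Riverse's real schema — consult the project's skill documentation for the actual keys:

```yaml
# hypothetical_skill.yaml — field names are illustrative, not the real schema
name: evening-checkin
trigger:
  cron: "0 21 * * *"          # fire daily at 21:00
  keywords: [tired, headache]  # or fire when the user mentions these
action:
  prompt: "Check in on sleep and stress; reference recent health observations."
```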

External Agents

Connect Home Assistant, n8n, Dify and more via agent configs.

Multi-Channel

Telegram, Discord, REST API, WebSocket, CLI, and Web Dashboard.

Flexible LLM

Ollama for local inference. Cloud mode works with any OpenAI-compatible API.

Proactive Outreach

Follows up on events, checks in when idle, respects quiet hours.
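"Respects quiet hours" implies a window check that handles windows crossing midnight. A minimal sketch — the 22:00–08:00 defaults are assumptions, not Riverse's configuration:

```python
from datetime import time

QUIET_START, QUIET_END = time(22, 0), time(8, 0)  # assumed defaults


def in_quiet_hours(now: time, start: time = QUIET_START, end: time = QUIET_END) -> bool:
    """True if `now` falls inside the quiet window, including windows
    that wrap past midnight (e.g. 22:00 → 08:00)."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps past midnight
```

Proactive messages would simply be deferred whenever this returns True.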

Vector Embeddings

Retrieves relevant memories by meaning, not just keywords. Requires an Ollama embed model.
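Retrieval by meaning reduces to nearest-neighbor search over embedding vectors. A toy sketch with hand-made 3-dimensional vectors (a real Ollama embed model returns hundreds of dimensions):

```python
import math


def cosine(a, b):
    """Cosine similarity: retrieval by meaning, not exact keyword overlap."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Toy embeddings — illustrative vectors, not real model output.
memories = {
    "started a management role": [0.9, 0.1, 0.0],
    "watched a sci-fi movie": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # imagine this is the embedding of "work stress"

best = max(memories, key=lambda m: cosine(query, memories[m]))
```

Despite sharing no keywords with "work stress", the job-change memory wins because its vector points in a similar direction. pgvector, when available, accelerates exactly this search inside PostgreSQL.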

MCP Protocol

Model Context Protocol support for Gmail and other MCP servers.

Tech Stack

Layer Technology
Runtime Python 3.10+, PostgreSQL 16+
Local LLM Ollama (any compatible model)
Cloud LLM Any OpenAI-compatible API (OpenAI, DeepSeek, Groq, and more)
Embeddings Ollama + any embed model (pgvector auto-accelerated if available)
REST API FastAPI + Uvicorn
Web Dashboard Flask
Telegram python-telegram-bot (async)
Discord discord.py (async)
Voice / Vision Whisper-1, GPT-4 Vision, LLaVA
TTS Edge TTS