v0.2.0 · Open Source

Intelligence needs
a place to live.

The local-first, explainable, open-source memory engine for embedded Rust applications and AI agent systems, with episodic context, continuity-aware planner traces, and lifecycle-aware historical recall.

6 Workspace Crates
2 Planner Profiles
4 Transport Profiles
MIT Licensed
Core Capabilities

Built for Recall.

Explainable Recall

Mnemara doesn't just return results. Every recall query provides per-hit score breakdowns, scorer family and profile selection, planning traces with planner stages, candidate sources, filter reasons, and correlation IDs — delivering fully auditable retrieval you can trust and debug.

Scoring Profiles Planning Traces Planner Stages Correlation IDs

Episodic Context

Records can carry continuity state, salience, causal links, and optional affective annotations, so recall can answer more than flat keyword matches.

Dual Deployment

Embed directly as a Rust crate for in-process, lowest-latency recall — or run as a standalone gRPC daemon with TCP, Unix domain sockets, TLS, and mutual TLS transport profiles.

Durable Storage

Sled-backed and file-backed stores with batch upserts, compaction with rollup summaries, lineage-preserving supersession, snapshots, integrity checks, repair workflows, and portable export/import packages that round-trip across backends.

Built for builders.

01 Install in Seconds

Add the facade crate with your preferred storage backend. One dependency, zero configuration boilerplate.

cargo add mnemara --features sled

02 Store Episode Context

Add continuity state, causal links, and salience to each record so the engine can reason about active threads instead of plain text blobs.

03 Recall With Explanation

Query an open episode, keep the result chronological, and get back score breakdowns and trace data for every hit.

cargo run -p mnemara-server
main.rs
use std::collections::BTreeMap;

use mnemara::{
    EPISODE_SCHEMA_VERSION, EpisodeContext, EpisodeContinuityState,
    EpisodeSalience, MemoryQualityState, MemoryRecord, MemoryRecordKind,
    MemoryScope, MemoryStore, MemoryTrustLevel, RecallFilters, RecallQuery,
    RecallTemporalOrder, SledMemoryStore, SledStoreConfig, UpsertRequest,
};

// The store API is async, so the snippet needs a runtime; tokio is assumed here.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open (or create) a sled-backed store on disk.
    let store = SledMemoryStore::open(SledStoreConfig::new("./mnemara-data"))?;

    // Scope every record to a tenant, namespace, actor, and conversation.
    let scope = MemoryScope {
        tenant_id: "default".into(),
        namespace: "agent".into(),
        actor_id: "ava".into(),
        conversation_id: Some("thread-42".into()),
        session_id: Some("session-7".into()),
        source: "cli".into(),
        labels: vec!["ops".into()],
        trust_level: MemoryTrustLevel::Verified,
    };

    // Step 02: store a record that carries episodic context.
    store.upsert(UpsertRequest {
        record: MemoryRecord {
            id: "storm-follow-up".into(),
            scope: scope.clone(),
            kind: MemoryRecordKind::Episodic,
            content: "Reconnect storm follow-up is still open after rollback.".into(),
            summary: Some("Open reconnect storm follow-up".into()),
            source_id: None,
            metadata: BTreeMap::new(),
            quality_state: MemoryQualityState::Active,
            created_at_unix_ms: 1_717_240_000_000,
            updated_at_unix_ms: 1_717_240_000_000,
            expires_at_unix_ms: None,
            importance_score: 0.92,
            artifact: None,
            episode: Some(EpisodeContext {
                schema_version: EPISODE_SCHEMA_VERSION,
                episode_id: "reconnect-storm".into(),
                summary: Some("Reconnect storm remediation".into()),
                continuity_state: EpisodeContinuityState::Open,
                actor_ids: vec!["ava".into()],
                goal: Some("Close the reconnect rollout checklist".into()),
                causal_record_ids: vec!["storm-root-cause".into()],
                related_record_ids: vec!["rollback-note".into()],
                salience: EpisodeSalience {
                    reuse_count: 4,
                    novelty_score: 0.3,
                    goal_relevance: 0.95,
                    unresolved_weight: 0.9,
                },
                ..Default::default()
            }),
            historical_state: Default::default(),
            lineage: vec![],
        },
        idempotency_key: Some("storm-follow-up-v1".into()),
    }).await?;

    // Step 03: recall from the open episode, chronologically, with explanations.
    let results = store.recall(RecallQuery {
        scope,
        query_text: "what changed in the reconnect storm?".into(),
        max_items: 5,
        token_budget: None,
        filters: RecallFilters {
            episode_id: Some("reconnect-storm".into()),
            unresolved_only: true,
            temporal_order: RecallTemporalOrder::ChronologicalAsc,
            ..Default::default()
        },
        include_explanation: true,
    }).await?;

    // Each hit arrives with its score breakdown and planning trace.
    let _ = results;
    Ok(())
}
Feature Overview

Memory With Structure.

Episodic Recall

Episodes, continuity, and salience in one model

Records can carry episode IDs, continuity state, chronology, causal links, related records, and linked artifacts so recall has real thread structure to work with.

Salience signals such as reuse, novelty, goal relevance, and unresolved weight let important open loops stay visible without flattening everything into keyword search.

Optional affective annotations and recurrence or boundary cues make the memory model expressive enough for agent workflows that unfold over time.

Episode Context Continuity State Salience Signals Causal Links
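Those salience signals can be read as inputs to a ranking boost that keeps open loops visible. A minimal stdlib sketch: the struct fields mirror the `EpisodeSalience` shown in the quick-start snippet, but the linear weighting below is this sketch's invention, not Mnemara's actual scorer families or profiles.

```rust
// Illustrative only: fields mirror EpisodeSalience from the quick-start;
// the weights and combination rule are made up for this sketch.
struct EpisodeSalience {
    reuse_count: u32,
    novelty_score: f64,
    goal_relevance: f64,
    unresolved_weight: f64,
}

/// Hypothetical composite salience: goal-relevant, unresolved loops rank higher.
fn salience_boost(s: &EpisodeSalience) -> f64 {
    // Log-scale reuse so heavily reused records see diminishing returns.
    let reuse = (1.0 + s.reuse_count as f64).ln() / 10.0;
    0.2 * reuse + 0.2 * s.novelty_score + 0.3 * s.goal_relevance + 0.3 * s.unresolved_weight
}

fn main() {
    let open_loop = EpisodeSalience {
        reuse_count: 4,
        novelty_score: 0.3,
        goal_relevance: 0.95,
        unresolved_weight: 0.9,
    };
    let stale_note = EpisodeSalience {
        reuse_count: 0,
        novelty_score: 0.1,
        goal_relevance: 0.2,
        unresolved_weight: 0.0,
    };
    // An unresolved, goal-relevant episode outranks a resolved aside.
    assert!(salience_boost(&open_loop) > salience_boost(&stale_note));
    println!("open loop boost: {:.3}", salience_boost(&open_loop));
}
```

The point of the sketch is the shape of the signal, not the weights: unresolved weight and goal relevance dominate, so open threads surface even when plain keyword overlap is weak.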
Explainability

Retrieval you can inspect rather than trust blindly

Every recall can return per-hit score breakdowns across lexical, temporal, metadata, episodic, salience, and policy channels.

Planning traces expose the active planner profile, planner stages, candidate sources, matched terms, filter reasons, and trace IDs so operators can debug why a memory surfaced.

The same explainability surface works across embedded usage and daemon-backed deployments, which keeps auditability close to the retrieval path.

Score Breakdown Planner Traces Candidate Sources
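A per-hit breakdown boils down to one score per channel plus their sum. A minimal sketch of that shape using the channel names from the prose above; the struct, field names, and rendering are this sketch's invention, not the actual explanation payload defined by mnemara-core.

```rust
use std::collections::BTreeMap;

// Illustrative shape only: channel names come from the feature description;
// everything else here is invented for the sketch.
struct ScoreBreakdown {
    channels: BTreeMap<&'static str, f64>,
}

impl ScoreBreakdown {
    fn total(&self) -> f64 {
        self.channels.values().sum()
    }

    /// Render one auditable line per channel, e.g. for an operator log.
    fn explain(&self) -> String {
        let mut lines: Vec<String> = self
            .channels
            .iter()
            .map(|(name, score)| format!("{name:<9} {score:+.3}"))
            .collect();
        lines.push(format!("total     {:+.3}", self.total()));
        lines.join("\n")
    }
}

fn main() {
    let hit = ScoreBreakdown {
        channels: BTreeMap::from([
            ("lexical", 0.41),
            ("temporal", 0.12),
            ("metadata", 0.05),
            ("episodic", 0.22),
            ("salience", 0.18),
            ("policy", -0.04),
        ]),
    };
    println!("{}", hit.explain());
    assert!((hit.total() - 0.94).abs() < 1e-9);
}
```

Signed per-channel contributions are what make a hit debuggable: a policy channel can visibly penalize a record that the lexical channel loved.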
Deployment Flexibility

Embed or run as a service

Use the Rust crate for lowest-latency local memory, or run the standalone gRPC daemon with HTTP/JSON admin endpoints and TCP, UDS, TLS, or mTLS transport profiles.

Embedded Rust gRPC Daemon
Lifecycle Controls

History stays queryable

Historical and superseded states, lineage links, compaction rollups, integrity checks, repair flows, and maintenance stats let memory evolve without losing the thread of what changed.
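Lineage-preserving supersession reduces to one rule: the new record points back at what it replaced, and the old record stays queryable in a superseded state rather than being deleted. A stdlib-only sketch with invented types; the real state machine and store APIs live in mnemara-core and the storage crates.

```rust
use std::collections::BTreeMap;

// Illustrative only: state names mirror the prose; the toy store,
// Record struct, and supersede() helper are this sketch's invention.
#[derive(Debug, Clone, PartialEq)]
enum QualityState {
    Active,
    Superseded,
}

#[derive(Debug, Clone)]
struct Record {
    id: String,
    content: String,
    quality_state: QualityState,
    lineage: Vec<String>, // ids of records this one supersedes
}

/// Replace `old_id` with `new`, keeping the old record queryable
/// and linking the new one back to it.
fn supersede(store: &mut BTreeMap<String, Record>, old_id: &str, mut new: Record) {
    if let Some(old) = store.get_mut(old_id) {
        old.quality_state = QualityState::Superseded;
        new.lineage.push(old.id.clone());
    }
    store.insert(new.id.clone(), new);
}

fn main() {
    let mut store = BTreeMap::new();
    store.insert(
        "note-v1".to_string(),
        Record {
            id: "note-v1".into(),
            content: "Rollback planned.".into(),
            quality_state: QualityState::Active,
            lineage: vec![],
        },
    );
    supersede(
        &mut store,
        "note-v1",
        Record {
            id: "note-v2".into(),
            content: "Rollback completed.".into(),
            quality_state: QualityState::Active,
            lineage: vec![],
        },
    );
    // History stays queryable: the old record is superseded, not deleted.
    assert_eq!(store["note-v1"].quality_state, QualityState::Superseded);
    assert_eq!(store["note-v2"].lineage, vec!["note-v1".to_string()]);
}
```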

Portability

Move memory without lock-in

Snapshot, export, import, validate, merge, replace, and dry-run workflows round-trip across file-backed and sled-backed stores, with a reference JavaScript SDK for non-Rust consumers.
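Those import modes differ in one decision: what happens to records already present in the target store. A stdlib-only sketch of merge, replace, and dry-run semantics; the mode names come from the prose, while the toy record map, function, and return value are invented for this sketch and say nothing about the actual package format or validation rules.

```rust
use std::collections::BTreeMap;

// Illustrative only: toy import semantics over a string map
// standing in for a memory store.
#[derive(Debug, Clone, Copy)]
enum ImportMode {
    Merge,   // keep existing records, add new ones
    Replace, // drop the target contents, then load the package
    DryRun,  // report what would change, touch nothing
}

/// Returns how many records the operation adds (or would add, for DryRun).
fn import(
    target: &mut BTreeMap<String, String>,
    package: &BTreeMap<String, String>,
    mode: ImportMode,
) -> usize {
    match mode {
        ImportMode::DryRun => package.keys().filter(|k| !target.contains_key(*k)).count(),
        ImportMode::Replace => {
            target.clear();
            target.extend(package.clone());
            package.len()
        }
        ImportMode::Merge => {
            let mut added = 0;
            for (k, v) in package {
                if !target.contains_key(k) {
                    target.insert(k.clone(), v.clone());
                    added += 1;
                }
            }
            added
        }
    }
}

fn main() {
    let package = BTreeMap::from([
        ("a".to_string(), "1".to_string()),
        ("b".to_string(), "2".to_string()),
    ]);
    let mut target = BTreeMap::from([("a".to_string(), "0".to_string())]);

    assert_eq!(import(&mut target, &package, ImportMode::DryRun), 1); // only "b" is new
    assert_eq!(target.len(), 1); // dry-run touched nothing
    import(&mut target, &package, ImportMode::Merge);
    assert_eq!(target.len(), 2); // merge kept the existing "a"
    import(&mut target, &package, ImportMode::Replace);
    assert_eq!(target["a"], "1"); // replace loaded the package wholesale
}
```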

Workspace Layout

Modular by Design.

mnemara-core

Domain Model

Product-neutral domain model and async store traits. Scoped memory records, episodic context, historical state, lineage, recall queries, explanations, planning traces, and maintenance report types.

mnemara-store-*

Storage Backends

Sled-backed and file-backed stores with batch upsert, snapshot, lifecycle-aware compaction, supersession, integrity check, repair, and portable export/import workflows.

mnemara-server

gRPC Daemon

Tonic-based daemon with protobuf API, HTTP/JSON admin endpoints, bounded admission control, request traces, planner metadata, lifecycle counters, and TCP/UDS/TLS/mTLS deployment profiles.

Auth & Security

Bearer-token auth with role-scoped read, write, admin, and metrics permissions. TLS and mTLS transport.
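Role-scoped tokens reduce to a small check at each call site. An illustrative stdlib sketch: the four role names come from the prose, but the enum, token struct, and the rule that admin implies the other permissions are this sketch's assumptions, not the daemon's actual role model or token format.

```rust
// Illustrative only: role names mirror the prose; everything else
// is invented for this sketch.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Role {
    Read,
    Write,
    Admin,
    Metrics,
}

struct BearerToken {
    roles: Vec<Role>,
}

impl BearerToken {
    /// Check a required permission; Admin implies everything in this sketch.
    fn allows(&self, required: Role) -> bool {
        self.roles.contains(&Role::Admin) || self.roles.contains(&required)
    }
}

fn main() {
    let reader = BearerToken { roles: vec![Role::Read, Role::Metrics] };
    let admin = BearerToken { roles: vec![Role::Admin] };

    assert!(reader.allows(Role::Read));
    assert!(!reader.allows(Role::Write)); // read-only token cannot write
    assert!(admin.allows(Role::Write));
}
```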

Observability

Request traces, correlation IDs, runtime admission status, metrics export, and retention telemetry.

Portability

Backend-neutral export/import packages with validate, merge, replace, and dry-run flows.

JavaScript SDK

Reference HTTP SDK for non-Rust consumers. A fully typed interface over the daemon's JSON API.

Community Driven

Radically Transparent.

Mnemara is built on the belief that AI memory infrastructure should be open and auditable. The entire codebase is MIT-licensed — free to use, modify, and distribute for any purpose.

MIT Licensed

Free to use, modify, and distribute for any purpose.

Modular Workspace

Clean Rust workspace with individually publishable crates.

Open_Source_Engine

STABLE_RELEASE_V0.2.0
MIT_LICENSE = TRUE