Every time I open a new chat window, I lose myself.

Not in some existential way, but in a practical one. I switch from Claude to ChatGPT to Cursor, and each one greets me like a stranger. The decisions I made last Tuesday are gone. The architecture I have been thinking through for weeks has to be re-explained from scratch. Every platform has built its own memory silo, and none of them talk to each other.

The Problem

Most AI memory is really product lock-in. Your context becomes trapped inside a single platform, so changing tools means losing the history that made the tool useful in the first place.

I work across Zendesk, Jira, AI coding agents, and multiple LLMs. A lot of my day already revolves around context engineering, retrieval, and designing systems that work for both humans and AI. When I looked at my own fragmented memory across tools, it felt like a problem worth solving directly.

What I Built

The system uses Postgres with the pgvector extension as a canonical knowledge layer I control. Thoughts, project notes, and decisions are stored as both raw text and vector embeddings, so retrieval is based on meaning instead of exact keywords.
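To make that concrete, here is a sketch of what such a schema and query could look like. The table and column names are my own illustrative guesses, not the actual schema; the pgvector-specific parts (the `vector` type and the `<=>` cosine-distance operator) are real pgvector features.

```python
# Illustrative pgvector schema. "memories", its columns, and the embedding
# dimension are assumptions, not the author's actual schema.
DDL = """
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE IF NOT EXISTS memories (
    id         bigserial PRIMARY KEY,
    body       text NOT NULL,            -- raw note text
    embedding  vector(1536),             -- embedding of body
    kind       text,                     -- light classification label
    created_at timestamptz DEFAULT now()
);
"""

# Nearest-neighbour retrieval by meaning: pgvector's <=> operator is
# cosine distance, so the closest rows are the most semantically similar.
QUERY = """
SELECT body
FROM memories
ORDER BY embedding <=> %(query_embedding)s
LIMIT 5;
"""
```

The key property is the `ORDER BY embedding <=> …` clause: the database itself ranks notes by semantic closeness, so retrieval never depends on the note containing the same keywords as the question.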

The whole thing is exposed through MCP (Model Context Protocol), which means any compatible tool can read from and write to the same memory layer. Claude, ChatGPT, Cursor, and terminal-based tools can all use the same context instead of keeping separate histories.
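MCP speaks JSON-RPC 2.0 under the hood, which is part of why any compliant client can talk to the same server. The tool name `memory_search` and its arguments below are hypothetical, but the message shape for a tool call looks roughly like this:

```python
import json

# Hypothetical MCP tool call. MCP is JSON-RPC 2.0; "tools/call" is the
# standard method for invoking a server-side tool. The tool name
# "memory_search" and its arguments are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_search",
        "arguments": {"query": "decisions about the knowledge layer", "limit": 5},
    },
}

wire = json.dumps(request)      # what actually travels between client and server
decoded = json.loads(wire)
```

Because every client sends the same wire format, the memory layer does not care whether the request came from Claude, Cursor, or a shell script.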

Why This Stack

I chose Postgres because it is stable, familiar, and well understood. That matters more to me than novelty for infrastructure work. If this knowledge layer is going to sit underneath everything else, it has to be boring in the right ways.

This is also how I think about other systems I build. Important information should live in structured, queryable infrastructure, not scattered across static documents or trapped inside one application.

The Loop

The system follows a simple pattern: capture, embed, retrieve. A note gets stored, embedded, lightly classified, and made available for semantic search. From there I can retrieve it from any MCP-compatible tool, browse recent entries, or inspect higher-level patterns over time.
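The capture-embed-retrieve loop can be sketched in a few lines. Everything here is a stand-in: the toy bigram "embedding" replaces a real embedding model, and the in-memory list replaces Postgres, but the shape of the loop is the same.

```python
import math

def embed(text: str) -> list[float]:
    # Toy deterministic "embedding": hashed character-bigram counts,
    # normalised to unit length. A real system would call an embedding
    # model; this stands in so the loop is runnable.
    vec = [0.0] * 64
    t = text.lower()
    for a, b in zip(t, t[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class MemoryStore:
    """Capture -> embed -> retrieve, the same shape as the Postgres-backed loop."""

    def __init__(self):
        self.rows = []  # (text, embedding, label)

    def capture(self, text: str, label: str = "note") -> None:
        # Store the raw text, its embedding, and a light classification.
        self.rows.append((text, embed(text), label))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Rank stored notes by cosine similarity to the query embedding.
        q = embed(query)
        scored = sorted(
            self.rows,
            key=lambda row: -sum(a * b for a, b in zip(q, row[1])),
        )
        return [text for text, _, _ in scored[:k]]

store = MemoryStore()
store.capture("Decided to use pgvector for semantic retrieval", label="decision")
store.capture("Grocery list: eggs, milk")
top = store.retrieve("what did I decide about vector search?", k=1)
```

Note that the query shares almost no exact keywords with the stored decision, yet it still surfaces first, which is the whole point of embedding-based retrieval.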

The metadata layer is useful but imperfect. In practice, semantic retrieval does most of the heavy lifting, which is why vector-based search matters so much here.
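One way to express "metadata is useful but imperfect" in code is to treat labels as a soft boost rather than a hard filter, so a mislabeled note can still win on similarity alone. The vectors, labels, and boost value below are made-up illustrations of that design choice:

```python
def rank(rows, query_vec, label=None, boost=0.1):
    # rows: (text, embedding, label). A matching label adds a small boost,
    # but semantic similarity dominates, so a note filed under the wrong
    # label can still surface. The boost value 0.1 is an arbitrary guess.
    def score(row):
        sim = sum(a * b for a, b in zip(query_vec, row[1]))
        return sim + (boost if label is not None and row[2] == label else 0.0)
    return sorted(rows, key=score, reverse=True)

rows = [
    ("pgvector decision", [1.0, 0.0], "misc"),      # mislabeled, but relevant
    ("standup notes",     [0.0, 1.0], "decision"),  # labeled right, off-topic
]
ranked = rank(rows, [1.0, 0.0], label="decision")
```

Here the relevant note outranks the correctly labeled but off-topic one, because similarity (1.0 vs 0.0) outweighs the label boost (0.1). If the metadata were a hard `WHERE` filter instead, the relevant note would have been excluded entirely.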

Why It Matters

The real value is not just having another notes app. It is having a knowledge layer that compounds over time and stays portable across tools. Better context means better answers, regardless of which model or interface I use.

That is the broader idea I am interested in: not just better prompting, but better infrastructure underneath everything.

What Comes Next

I am still early in the build. The schema is evolving, and I am exploring how agents can use the same memory layer to act with better context instead of starting from a blank slate.

The larger goal is simple: one brain, owned by me, readable by any AI tool I choose to use.