Updated on Aug 15, 2025

How to Connect Your AI Apps with MCP Memory [2025]

In this tutorial, we’ll explore how to integrate Claude with Cursor to implement context-aware shared memory with OpenMemory MCP. This integration will allow us to demonstrate how memory is retrieved and managed in real-time. We’ll also dive into testing MCP’s ability to differentiate between various projects.

The problem with no shared memory between tools

It’s common to use tools like Claude for generating ideas. Typically, I take that plan and switch between different tools. However, without shared memory, different AI tools don’t remember past decisions, previous architectural choices, or the nuances of your codebase.

This raises several problems:

  • Context is lost between tools: When you make changes in one tool, such as updating a plan or adjusting the tech stack in Claude, Cursor has no awareness of those changes when you move to it. You need to re-enter all the details manually. For example, each time you return to the project, you need to re-explain things like the framework you’re using (e.g., React with Redux or Flutter with Bloc).

  • Data duplication: You also need to repeat information across different tools. For example, the app’s tech stack, layout, and features need to be redefined in Cursor, even though they were already set in Claude.

  • Reduced productivity: Since the tools don’t share memory, you end up constantly reintroducing the same context.

How do memory and MCP work together?

In the Memory MCP system, all your AI tools and MCP clients (e.g., Cursor) work together by leveraging both memory and the Model Context Protocol (MCP).

  • Memory stores personalized, persistent information about user preferences and recurring tasks. For example, Cursor remembers your preferred programming languages, tone, and workflow details.

  • MCP provides real-time access to dynamic, task-specific data, like project files or APIs, during a session. It ensures up-to-date, context-specific information is retrieved when needed.

Introduction to OpenMemory

OpenMemory is a tool released by Mem0 that provides a unified memory system across multiple AI apps and agents. Think of it as a “memory chip” that connects all your MCP clients into a single, continuous memory space. It can be used locally or in the cloud.

A high-level example of how OpenMemory is used for personalized context-aware chat (using ChatGPT)1

Tutorial: OpenMemory MCP

Requirements

Before we begin, ensure that you have the following set up:

  1. Docker: Required for running the OpenMemory and MCP servers locally (install Docker).
  2. Git: For cloning the necessary repositories.
  3. OpenMemory: You’ll need to install OpenMemory locally; it provides the shared memory layer that the MCP clients talk to.
  4. API Key: You’ll need your API key for LLM interactions. In this example, we will be using an OpenAI API key.
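
Before going further, you can quickly confirm that Docker and Git are installed by checking their versions:

  docker --version   # should print the installed Docker version
  git --version      # should print the installed Git version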

Setting up OpenMemory

Security note: When setting up and using OpenMemory, it’s recommended not to share private API keys, passwords, or any other sensitive information directly in your code or public repositories.

Step 1: Clone the repository

The OpenMemory code lives inside the mem0 repository, which is the main repository. To get OpenMemory, we need to clone the entire mem0 repository.

1. Go to the mem0 repository and copy the repository link (https://github.com/mem0ai/mem0.git).

2. Open your terminal and type git clone followed by the GitHub repository link.

3. Navigate to the directory: After cloning, go inside the mem0 folder (cd mem0) and then into the OpenMemory folder (cd openmemory/). All further commands will run from there.
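
Put together, the terminal steps look like this:

  git clone https://github.com/mem0ai/mem0.git   # clone the main mem0 repository
  cd mem0                                        # enter the repository
  cd openmemory                                  # OpenMemory lives in this folder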

To get OpenMemory running, you need to run both the UI and the MCP server.

That is why Docker is needed: it packages both the UI and the MCP server with all their dependencies into containers, ensuring consistent execution across different environments.

This allows you to set up, run, and scale the UI and MCP server without worrying about manual configuration. 

Step 2: Set up Docker and build containers

If Docker is not installed on your system, download and install it from Docker’s website.

Build the Docker containers: Run the “make build” command to build the containers, which will install the necessary dependencies.

Start the containers: After the containers are built, start them by typing “make up”.

You only need to run make build once. After that, you can simply run make up to start the containers when needed, as shown below.
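
Both commands are run from the openmemory/ directory:

  make build   # one-time: build the containers and install dependencies
  make up      # start the UI and MCP server containers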

Additionally, to use the MCP server, Docker must be running on your system. Since everything runs locally rather than in the cloud, you’ll need to keep Docker active.

If you go back to the tab where make up is running, you can see that the OpenMemory MCP server is now up and running:

The OpenMemory MCP server should be running at http://localhost:8765 by default.
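
To sanity-check the server from the terminal, a simple request should get a response. Note that the /docs path below is an assumption (FastAPI’s default interactive docs route), not something confirmed by this tutorial:

  # Assumes the API exposes FastAPI's default docs route; adjust if yours differs
  curl -I http://localhost:8765/docs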

Step 3: Set up the MCP server

Now that Docker is running, the next step is to set up the MCP server to manage the memory layer.

The OpenMemory UI runs at http://localhost:3000 by default. With Docker running, the dashboard will be up and accessible locally at this address. To confirm that everything is running properly, simply navigate to http://localhost:3000 in your browser and open the OpenMemory dashboard.
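
You can also open the dashboard straight from the terminal:

  # macOS; on Linux use xdg-open, on Windows use start
  open http://localhost:3000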

Step 4: Configure clients (Claude/Cursor in this example)

To connect Claude and Cursor (or any other agent) to the shared memory system, open the project directory in Cursor by typing cursor . in your terminal:

Configure the OpenAI API key: Open the api folder in the project structure. Once you open it, the file structure will look something like this:

Inside the api folder, you’ll find a .env.example file.

You need to paste your OpenAI API key into this file. Copy it, rename the copy to .env by removing the word “example” from the file name, and then paste your actual API key into it.

Once that’s done, you’ll be able to use the make up command. This step is listed as a prerequisite because the key is required for LLM interactions, which is why an OpenAI API key is asked for.
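
After renaming, the api/.env file should contain a line along these lines. The variable name OPENAI_API_KEY is an assumption based on common conventions; keep whatever name the .env.example template actually uses:

  # api/.env (never commit this file to a public repository)
  OPENAI_API_KEY=your-openai-api-key-here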

Step 5: Set up MCP in tools

We need to install the MCP server in the different tools. OpenMemory provides an MCP install command, which you can run or configure manually in each tool’s settings. When you run it, it automatically adds the MCP server to the Claude client for you. The same applies to Cursor.
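
For illustration, the install command generated by the dashboard has roughly the following shape. The exact package name, URL path, and flags here are assumptions, not a documented CLI; always copy the real command from your own OpenMemory dashboard:

  # Illustrative only: copy the exact command from your OpenMemory dashboard
  npx install-mcp i "http://localhost:8765/mcp/claude/sse/<your-user-id>" --client claude
  npx install-mcp i "http://localhost:8765/mcp/cursor/sse/<your-user-id>" --client cursor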

You can see that I installed both of these MCPs. Here it is installed in both Claude and Cursor:

You can see that in Cursor it’s already up and running:

Example MCP implementation

This example shows how to use the MCP server. You can open Claude Desktop and ask it to brainstorm ideas for different types of apps or solutions. In this example, we will explore how to generate a concept for a time-tracking app.

First, Claude provides its own plan. You can then follow up on it, focusing on the features that should be implemented.

After it adds the changes into the original plan, you can ask it to save the plan in memory as “time-track plan.”

You can see the memory details in the right-hand tab of the OpenMemory dashboard. Here are the memory details:

All images presented in this section are sourced from the AI Labs MCP tutorial.2

Moving on to Cursor, you can give it a prompt like “I want to build a time-track app” and ask if it can pull the details from memory.

Next, it used the MCP tools to list and search the memory. This is especially useful because the search surfaces only relevant information: when it queried about the time-track app, it retrieved all memories related to that topic.

From there, it pulled details about Next.js, React, TypeScript, and the rest of the stack to be used. It then began building the app. 

After completing the task, you can ask it to save the progress to memory, and it will. It adds progress notes, breaks everything into manageable chunks, and stores that as well. Now, all the updates are saved in memory.

Here is the app created:

Challenges after app creation

After a while, Cursor gave an error when starting a new chat due to context size. Once a new chat is started, you can ask it to retrieve memories related to the app’s progress. It called the MCP tool again and retrieved all the relevant data, such as where it was running and what had been done so far.

Then, a screenshot was provided because the contrast in some React elements was poor, and the text wasn’t visible. You can ask it to fix the UI, which it did.

While working on that, it made repeated tool calls trying to locate the source directory; however, it didn’t identify that there was a front-end folder.

How MCP, Cursor, and Claude retrieved and managed memory

Now, what you want to see is how it actually retrieves the memories. 

The main thing to note is that this memory is linked to all other memories created in the same session. So, if the MCP client requests a memory labeled “time,” it also fetches related memories. 

You can check the source app for each memory. Some were created by Cursor, others by Claude. If you open up the memory, you can view the access log, change the status, or even edit the memory itself.

Testing the MCP’s ability to differentiate between projects

To test whether the MCP system can differentiate between different projects, change the tech stack to the MERN stack, and instruct Claude Desktop to push this new information to the MCP server.

Next, open Cursor and query which tech stack will be used for the new project. Make sure to tell it to use only the MCP and avoid checking the project directory for additional context.

Test results:

When the MCP call is made, the system gets confused, pulling in both tech stacks, the Next.js stack from the previous project and the MERN stack from the new one, and retrieving them together.

This shows how the MCP system couldn’t separate the contexts of the two projects.

So, while the MCP system can store memory across sessions, it may occasionally mix information from different projects.

Final thoughts

While the MCP system is a strong start, it needs better memory separation for similar projects to prevent data overlap. It works well for single projects or those with distinct names, but improvements in query execution and memory management would make it even more powerful. 

What are the other applications of Memory MCP?

1. Multi-agent research assistant with a memory layer

Multiple LLM agents specialize in different research domains (e.g., academic papers, GitHub repositories, news). Each agent stores its findings in memory, which the master agent can later query for related context.

Real-life example: Anthropic Multi-Agent Research System.3

2. Meeting assistant with persistent cross-session memory

An assistant stores meeting summaries, action items, and key notes, retrieving relevant context for future meetings.

Real-life example: Otter.ai is a meeting assistant that captures key points and retrieves them in future sessions to maintain context. For more on Otter.ai, see: AI note taker.

3. Agentic coding assistant that evolves with usage

Coding assistants learn from usage patterns and store solutions for recurring issues, automatically retrieving and applying past solutions to enhance productivity.

Real-life example: GitHub Copilot, a coding assistant learning from the developer’s code style. For more on cognitive agents, see: AI agent memory.

