Building an agentic Gen AI workflow used to mean stitching together custom memory stores, vector databases, and orchestrating the processes between those components. With the new experimental MongoDB Atlas Vector Store and MongoDB Chat Memory nodes in n8n you can do it all without a single line of custom code.

In the walkthrough below, you will create an AI agent that remembers multi‑turn conversations and performs semantic search over your own data. The result is a context‑aware assistant that can power everything from travel recommendations to internal knowledge bots — running entirely inside n8n.

Whether you prefer full‑code control, established frameworks like LangChain/LlamaIndex, or no‑code speed with n8n, the pattern is the same: store memory in MongoDB, vector‑index your content, and let an LLM reason over the results.

Let’s see how!

The new MongoDB nodes in n8n

This comprehensive guide will walk you through building an AI-powered traveling agent using the robust capabilities of MongoDB Atlas. We'll leverage MongoDB's memory and advanced vector search features to create an intelligent agent capable of providing personalized travel recommendations, optimizing itineraries, and delivering a seamless travel planning experience.

By using MongoDB Atlas as the backbone of this solution, you'll unlock several key advantages that make the AI agent both powerful and developer-friendly:

  • Persistent memory – capture and recall chat history across sessions. 
  • High‑performance semantic search – Atlas Vector Search delivers millisecond similarity queries over billions of vectors.
  • No‑code friendly – drag‑and‑drop nodes in n8n; zero infrastructure glue. 
  • Scalable & secure – built on fully‑managed MongoDB Atlas.
  • Plug‑and‑play embeddings – works with OpenAI or any other supported embedding provider.
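
To see why semantic search beats keyword matching, here is a toy illustration of the similarity math Atlas Vector Search is built on. The embedding values are made up for the example; real embeddings come from a provider such as OpenAI (1536 dimensions for text-embedding-ada-002).

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional "embeddings" for illustration only.
query = [0.9, 0.1, 0.0]          # e.g., "romantic landmark in Paris"
eiffel_tower = [0.8, 0.2, 0.1]   # semantically close to the query
grocery_list = [0.0, 0.1, 0.9]   # unrelated content

# The semantically related document scores higher, no shared keywords needed.
assert cosine_similarity(query, eiffel_tower) > cosine_similarity(query, grocery_list)
```

Atlas Vector Search applies the same idea at scale, using approximate nearest-neighbor indexes instead of brute-force comparison.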

Here’s a quick walkthrough of what this workflow does:

  1. User input: The user interacts with the AI Traveling Agent through a natural language interface, providing details about their travel preferences, interests, budget, and constraints.
  2. Intent recognition: The AI model analyzes the user's input to understand their intent (e.g., destination search, itinerary planning, activity recommendations).
  3. Data retrieval: The agent retrieves relevant data from MongoDB Atlas (user profile, historical data) and external sources (travel information, reviews).
  4. Personalized recommendations: The AI model generates personalized travel recommendations based on the user's intent, preferences, and retrieved data.
  5. Itinerary optimization: The agent optimizes the travel itinerary based on factors like time, budget, distance, and user preferences.
  6. Interactive experience: The user interacts with the agent to refine the itinerary, explore alternatives, and make bookings.
  7. Continuous learning: The AI model learns from user interactions and feedback to improve future recommendations and personalize the travel planning experience further.
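
The numbered steps above boil down to a retrieval-augmented loop. The sketch below shows that loop in miniature; `embed`, `vector_search`, and `llm` are stand-ins for the embedding, Vector Store, and Chat Model nodes in n8n, not real APIs.

```python
def answer(question, embed, vector_search, llm, history):
    """One turn of the agent loop: retrieve context, remember, respond."""
    query_vector = embed(question)         # steps 1-2: encode the user's input
    context = vector_search(query_vector)  # step 3: fetch relevant documents
    history.append(("human", question))    # persist the user's turn (chat memory)
    reply = llm(question, context, history)  # steps 4-6: generate the answer
    history.append(("ai", reply))          # persist the agent's turn
    return reply
```

In the n8n workflow, `history` lives in the MongoDB Chat Memory node rather than a Python list, so it survives across executions.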

Prerequisites for building an AI-powered traveling agent

  • A MongoDB Atlas project and cluster (M10+ recommended).
  • API keys for:
    • Gemini (Google AI Studio) or OpenAI for the Chat Model node.
    • OpenAI or any other provider for embeddings.
  • A vector index created on the target collection (example JSON below).
  • Basic JSON/REST familiarity for optional data ingestion. 
💡 Heads‑up: The MongoDB Atlas Vector Store and Chat Memory nodes are currently experimental. Expect naming, UI, or payload changes in future n8n releases.

The new MongoDB nodes in n8n

Here’s what’s now available:

🔍 MongoDB Atlas Vector Store Node

  • Index and query high-dimensional vector embeddings.
  • Integrate with OpenAI or Voyage AI embedding models.
  • Connect seamlessly to your MongoDB Atlas collections and indexes.
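
Under the hood, vector queries in Atlas run through the `$vectorSearch` aggregation stage. The helper below is a sketch of such a stage (the node's exact query may differ); `build_vector_search_stage` is a hypothetical helper, and the index and field names match the index JSON shown later in this guide.

```python
def build_vector_search_stage(query_vector, limit=5):
    # Sketch of an Atlas $vectorSearch aggregation stage.
    return {
        "$vectorSearch": {
            "index": "vector_index",      # name of the Atlas Vector Search index
            "path": "embedding",          # document field holding the vectors
            "queryVector": query_vector,  # embedding of the user's query
            "numCandidates": limit * 20,  # ANN candidates to consider
            "limit": limit,               # number of results to return
        }
    }

# The stage would be the first entry of an aggregation pipeline.
pipeline = [build_vector_search_stage([0.12, -0.03, 0.88])]
```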

💾 MongoDB Chat Memory Node

  • Capture and store conversation history.
  • Support for memory retrieval in real-time for dynamic agent workflows.
  • Enable persistence across sessions.
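
Conceptually, chat memory is just a collection of per-message documents keyed by a session identifier. The schema below is hypothetical (the node's actual field names may differ), but it captures the idea of persistence across sessions.

```python
from datetime import datetime, timezone

def make_memory_doc(session_id, role, text):
    # Hypothetical shape of one stored chat turn.
    return {
        "sessionId": session_id,  # groups turns into one conversation
        "role": role,             # "human" or "ai"
        "text": text,
        "createdAt": datetime.now(timezone.utc),
    }

history = [
    make_memory_doc("sess-42", "human", "Plan me a weekend in Paris"),
    make_memory_doc("sess-42", "ai", "Day 1: the Eiffel Tower and the Louvre..."),
]
```

Because each turn is a document, retrieving a conversation is a simple query on `sessionId`, and history survives workflow restarts.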

These work hand-in-hand with the AI Agent Tooling in n8n, allowing natural language queries to drive powerful, database-integrated automations.

Key components of an AI-powered traveling agent

  • MongoDB Atlas: Our core database, providing scalable storage, flexible data models, and powerful querying capabilities.
  • Memory features: Store and recall chat history in MongoDB collections, giving the agent real-time access to prior turns for an interactive user experience.
  • Vector Search: Employ MongoDB's vector search functionality to find similar items, locations, and preferences based on semantic meaning, enhancing the relevance of travel recommendations.
  • AI models: Integrate machine learning models for natural language processing, sentiment analysis, and predictive modeling to understand user intent, preferences, and behavior.
  • External data sources: Connect to external APIs and data sources to gather real-time information on flights, accommodations, attractions, weather, and more.

Key benefits of an AI-powered traveling agent

  • Personalized travel planning: The AI Traveling Agent tailors travel recommendations to individual user preferences, ensuring a unique and satisfying experience.
  • Efficient itinerary optimization: The agent optimizes travel itineraries to save time, money, and effort, while maximizing the user's enjoyment.
  • Seamless user experience: The natural language interface and interactive features provide a user-friendly and intuitive travel planning experience.
  • Real-time information: The agent accesses real-time data to provide up-to-date travel information and recommendations.
  • Scalability and flexibility: MongoDB Atlas's cloud-native architecture ensures scalability and flexibility to handle varying user demands and data volumes.

By leveraging the power of MongoDB Atlas and AI, you can create a sophisticated AI Traveling Agent that transforms the way people plan and experience travel.

Steps to build an AI-powered traveling agent 

Now let’s get started with the step-by-step instructions!

AI Agent Powered by MongoDB Atlas for Memory and Vector Search

Step 1: Set up credentials

  1. Set up your Google API credentials for the Gemini LLM.
  2. Set up your OpenAI credentials for the embedding nodes.

Step 2: Provision MongoDB Atlas and configure the MongoDB nodes

  1. Provision a MongoDB Atlas project and cluster, and get your connection string. Make sure your IP address is added to the cluster's IP Access List (for testing, you can allow 0.0.0.0/0).
  2. Configure your MongoDB credentials in n8n with the correct connection string and database name.
  3. Vector Search tool: create a MongoDB Atlas Vector Search index on your points_of_interest collection up front — the node expects it to exist before queries run:
// index name: "vector_index"
// If you change embedding provider, ensure numDimensions matches the model.
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
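
One common pitfall is a mismatch between `numDimensions` and your embedding model. The helper below (a hypothetical convenience, not part of n8n or the MongoDB driver) generates the same index definition while keeping the dimension count in sync with the provider's published sizes.

```python
# Published output dimensions for some OpenAI embedding models.
EMBEDDING_DIMS = {
    "text-embedding-ada-002": 1536,
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
}

def vector_index_definition(model="text-embedding-ada-002", path="embedding"):
    # Mirrors the JSON above, with numDimensions derived from the model.
    return {
        "fields": [
            {
                "type": "vector",
                "path": path,
                "numDimensions": EMBEDDING_DIMS[model],
                "similarity": "cosine",
            }
        ]
    }
```

If you later switch to a larger model such as text-embedding-3-large, you must recreate the index (and re-embed your documents) with the new dimension count.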

Step 3: Ingest data

Once configured, send data into the workflow via a webhook (see example below). This will populate your points_of_interest collection with vectorized records.

Then, ask your agent questions like "Where should I go for a romantic getaway?" to see it in action.

curl -X POST "https://<your-n8n-instance>/webhook-test/ingest" \
  -H "Content-Type: application/json" \
  -d '{
    "point_of_interest": {
      "title": "Eiffel Tower",
      "description": "Iconic iron lattice tower located on the Champs-Élysées."
    }
  }'
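
If you prefer to script ingestion, the same webhook call can be built in Python with only the standard library. The URL placeholder matches the curl example; `build_ingest_request` is an illustrative helper, not part of any SDK.

```python
import json
import urllib.request

def build_ingest_request(base_url, title, description):
    # Builds a POST request equivalent to the curl command above.
    payload = {"point_of_interest": {"title": title, "description": description}}
    return urllib.request.Request(
        f"{base_url}/webhook-test/ingest",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_ingest_request(
    "https://<your-n8n-instance>",  # placeholder, as in the curl example
    "Louvre Museum",
    "World-famous art museum on the Right Bank of the Seine.",
)
# urllib.request.urlopen(req) would send it; not executed here.
```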

Step 4: Testing your agent

  • After populating your MongoDB collection with vectorized points of interest data, you can start querying your agent.
  • Ask questions like "Where should I go for a romantic getaway?"
  • The agent will leverage the vector search index to find relevant points of interest based on the semantic similarity of your query and the data stored in your MongoDB collection.

By following these detailed steps, you'll be able to create a powerful gen AI workflow that leverages MongoDB Vector Search and Memory Nodes in n8n to deliver intelligent and contextually relevant responses to user queries.

Wrap-up

Now that your gen AI pipeline is ready, you can:

  • ✅ Use it as a knowledge bot or internal assistant.
  • 📂 Connect it to business documents or product catalogs.
  • 🧠 Give your LLM context over time via stored memory.
  • 🤖 Expand it with tools like Zapier, HTTP calls, or webhook triggers.
  • 📈 Monitor how queries are stored, recalled, and embedded over time.

You can easily adapt this flow to:

  • Customer support bots
  • Personal assistants
  • Internal documentation search
  • Agentic task handlers (e.g., "schedule a meeting based on context").

Why MongoDB + n8n is a game-changer

As a Lead Developer Advocate at MongoDB, I’ve worked with countless developers trying to stitch together memory, embeddings, and LLMs. This integration radically simplifies the stack.

It’s:

  • No-code/low-code friendly
  • Secure and scalable via MongoDB Atlas
  • Extensible with support for multiple embedding providers

Next steps & resources