Privacy · February 25, 2026 · 11 min read

RAG Without Leaking Customer Data: A Practical Guide

Build a customer support RAG chatbot that never exposes personal data to your LLM provider. Covers attack surfaces, multi-turn conversations, GDPR compliance, and a production-ready architecture.

Customer support is the most common use case for RAG. You have a database of support tickets, a knowledge base of product documentation, and you want to build a chatbot that can answer questions using that data. The problem? Those support tickets are full of customer PII — email addresses, phone numbers, account details, and sometimes even payment information. When your RAG pipeline retrieves tickets and injects them into an LLM prompt, all of that customer data flows to your LLM provider.

The correct approach is a two-layer architecture: selective redaction at ingestion and tokenization at query time. At ingestion, you redact contact information (emails, phone numbers, IBANs, credit card numbers, addresses) from tickets before storing them in your vector database — but you keep names intact so tickets remain searchable by customer name. At query time, you search the vector store with the original user question to get the best retrieval results, then tokenize the retrieved context and question together before sending them to the LLM. This gives you strong privacy without sacrificing search quality.

This article walks through the attack surface, shows you how to build a secure customer support RAG pipeline using this search-first approach, and provides a compliance checklist for GDPR, CCPA, and SOC 2.

The Attack Surface

A customer support RAG pipeline has three places where PII can leak:

1. Vector Database Storage

When you index support tickets into a vector database, the PII in those tickets is stored as plain text alongside the embeddings. Anyone with access to the vector store — a developer, a database admin, an attacker who compromises your infrastructure — can read customer names, emails, and account numbers directly. The embeddings themselves also encode semantic information about the PII, making it possible to search for specific customers.
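To make this concrete, here is a minimal stand-in for one stored record (a plain dict rather than a real vector database, and an invented example ticket). The point is that the document text sits next to the embedding in cleartext, so recovering PII takes nothing more than a string scan:

```python
import re

# Stand-in for one record as a vector database stores it: the embedding
# and the original ticket text side by side, both readable to anyone
# with database access.
record = {
    "id": "t-42",
    "embedding": [0.12, -0.03, 0.57],  # truncated for illustration
    "document": (
        "Customer Jane Doe (jane.doe@example.com, +1-555-0142) "
        "reports a duplicate charge on card ending 4242."
    ),
}

# No decryption, no special tooling: a simple scan over the stored
# text recovers the contact details directly.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", record["document"])
phones = re.findall(r"\+\d[\d-]{7,}", record["document"])
print(emails, phones)
```

Redacting at ingestion closes this hole: the `document` field never contains the contact details in the first place.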

2. Retrieval-to-Prompt Injection

When a user asks a question, the retriever pulls the most relevant documents and injects them into the LLM prompt. If those documents contain PII from multiple customers, the LLM sees all of it. A support agent asking about one customer might inadvertently send another customer's data to the model.
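A small sketch of the failure mode, with invented ticket text and the retrieval step replaced by a hard-coded top-3 result: the standard RAG prompt template concatenates everything retrieved, so a question about one customer can carry another customer's data to the provider:

```python
# Stand-in for a top-3 retrieval result: the most similar tickets,
# which happen to come from two different customers.
retrieved = [
    "Hans Mueller reported a failed SEPA refund on invoice 1089.",
    "Maria Rossi asked to update the card on file for her account.",
    "Hans Mueller confirmed the refund arrived after three days.",
]

question = "What was Hans Mueller's billing issue?"

# The usual RAG prompt simply concatenates all retrieved context.
prompt = "Context:\n" + "\n".join(retrieved) + f"\n\nQuestion: {question}"

# Maria Rossi was never mentioned in the question, but her ticket is
# now in the prompt and will be sent to the LLM provider.
print("Maria Rossi" in prompt)
```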

3. LLM Provider Logs

Your LLM provider receives the full prompt, including all retrieved context. Providers typically log prompts for monitoring, debugging, and abuse detection. Even if they offer a zero-data-retention policy, the data still traverses their infrastructure and is visible during processing.

Building a Secure Customer Support RAG

Here is a production-ready pattern for a GDPR-compliant customer support chatbot. It uses a CustomerSupportRAG class with two key design choices: selective redaction at ingestion (keeping names for searchability while removing contact details) and a search-first query flow that tokenizes only after retrieval:

python
import os
import chromadb
from blindfold import Blindfold
from openai import OpenAI

class CustomerSupportRAG:
    def __init__(self):
        self.blindfold = Blindfold(
            api_key=os.environ["BLINDFOLD_API_KEY"],
            region="eu",  # GDPR: process in EU
        )
        self.openai = OpenAI()
        self.collection = chromadb.Client().create_collection("tickets")
        self.conversation_history = []
        self.accumulated_mapping = {}

    def ingest(self, tickets):
        # Selective redaction: remove contact info but keep names
        # so tickets remain searchable by customer name
        for i, ticket in enumerate(tickets):
            result = self.blindfold.redact(
                ticket,
                policy="gdpr_eu",
                entities=[
                    "email address",
                    "phone number",
                    "iban",
                    "credit card number",
                    "address",
                    "date of birth",
                    "national id number",
                ],
            )
            self.collection.add(
                documents=[result.text], ids=[f"t-{i}"]
            )

    def query(self, question):
        # Search with original question for best retrieval quality
        results = self.collection.query(
            query_texts=[question], n_results=3
        )
        context = "\n".join(results["documents"][0])

        # Tokenize context and question together in a single call
        combined = f"Context:\n{context}\n\nQuestion: {question}"
        tokenized = self.blindfold.tokenize(combined, policy="gdpr_eu")
        self.accumulated_mapping.update(tokenized.mapping)

        # Build messages with conversation history
        messages = [
            {"role": "system", "content": tokenized.text},
            *self.conversation_history,
            {"role": "user", "content": "Answer the question above based on the context."},
        ]

        completion = self.openai.chat.completions.create(
            model="gpt-4o-mini", messages=messages
        )
        ai_response = completion.choices[0].message.content

        # Store tokenized history (no PII in memory)
        self.conversation_history.append({"role": "user", "content": tokenized.text})
        self.conversation_history.append({"role": "assistant", "content": ai_response})

        # Detokenize for the end user
        restored = self.blindfold.detokenize(ai_response, self.accumulated_mapping)
        return restored.text

Multi-Turn Conversation Handling

In a customer support chatbot, users ask follow-up questions. “What was Hans's billing issue?” followed by “Can you give me more details?” — the second question does not mention Hans by name, but the LLM needs to maintain context.

Since tokenization happens after retrieval, you always search with the original question to get the best matches, then tokenize the retrieved context and question together before sending them to the LLM. The key pattern is mapping accumulation — each time you tokenize, you merge the new mapping into the accumulated mapping so tokens from earlier turns can still be detokenized in later responses:

python
# Turn 1: "What was Hans Mueller's issue?"
results = collection.query(query_texts=[question_1], n_results=3)
context_1 = "\n".join(results["documents"][0])

# Tokenize context + question together after retrieval
combined = f"Context:\n{context_1}\n\nQuestion: {question_1}"
tokenized = blindfold.tokenize(combined)
accumulated_mapping.update(tokenized.mapping)
# mapping: {"<Person_1>": "Hans Mueller", ...}

# Turn 2: "Can you email them the refund confirmation?"
results = collection.query(query_texts=[question_2], n_results=3)
context_2 = "\n".join(results["documents"][0])

combined = f"Context:\n{context_2}\n\nQuestion: {question_2}"
tokenized = blindfold.tokenize(combined)
accumulated_mapping.update(tokenized.mapping)
# mapping still includes <Person_1> from turn 1

# When detokenizing, all tokens from all turns are resolved
restored = blindfold.detokenize(ai_response, accumulated_mapping)

Important: Store conversation history with tokenized text, not original PII. This means your chat logs, session storage, and any middleware caches contain no personal data.

Security Trade-offs

There is no one-size-fits-all approach to PII protection in RAG pipelines. The right choice depends on your threat model, search requirements, and operational complexity budget. Here are the three main strategies:

Selective Redaction (Recommended)

Redact contact information (emails, phone numbers, IBANs, credit cards, addresses) at ingestion but keep names intact. This is the approach used in the code above. Names remain in the vector store, so users can search by customer name and get accurate retrieval results. Contact details are permanently removed and never reach the LLM. This offers the best balance of privacy and usability for most customer support use cases.

Full Redaction

Redact all PII, including names, at ingestion. This provides the strongest privacy guarantee because no personal data exists in your vector store at all. The trade-off is that you lose the ability to search by customer name — queries like “What was Hans Mueller's issue?” will not match tickets where the name has been removed. Use this when privacy requirements are absolute and name-based search is not needed.
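As a local stand-in for what full redaction does to a ticket (the real pipeline would call the Blindfold redact API with person names included in the entity list — the exact entity label for names is an assumption here, and the helper below is purely illustrative):

```python
def redact_full(ticket: str, known_entities: dict[str, str]) -> str:
    """Replace every known entity value with a generic placeholder.

    Illustrative stand-in for a full-redaction pass; a real detector
    would find the entities itself rather than take them as input.
    """
    for value, label in known_entities.items():
        ticket = ticket.replace(value, f"[{label}]")
    return ticket

ticket = "Hans Mueller (hans@example.de) disputed invoice 1089."
redacted = redact_full(
    ticket,
    {"Hans Mueller": "REDACTED_PERSON", "hans@example.de": "REDACTED_EMAIL"},
)
print(redacted)
# A query like "What was Hans Mueller's issue?" can no longer match
# this text by name -- the trade-off described above.
```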

Tokenize with Stored Mapping

Tokenize (rather than redact) at ingestion and store the token-to-value mappings alongside each document. This gives you full privacy (only tokens in the vector store) plus the ability to restore original values when needed. However, it requires managing and securing the mapping storage, and you need to tokenize search queries with matching mappings for retrieval to work. This adds operational complexity and is best suited for systems where you need both maximum privacy and full reversibility.
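The retrieval constraint in this strategy is worth spelling out. Below is a pure-Python stand-in (not the Blindfold API) that assumes deterministic tokens, i.e. the same entity always maps to the same token — without that property, a tokenized query cannot match tokenized documents:

```python
def tokenize(text: str, entities: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace entity values with tokens; return text plus token mapping.

    Illustrative stand-in assuming deterministic tokenization: the same
    value always yields the same token across documents and queries.
    """
    mapping = {}
    for value, token in entities.items():
        if value in text:
            text = text.replace(value, token)
            mapping[token] = value
    return text, mapping

# Ingestion: store the tokenized document, keep the mapping alongside
# it (encrypted and access-controlled in a real system).
doc, mapping = tokenize(
    "Hans Mueller disputed invoice 1089.",
    {"Hans Mueller": "<Person_1>"},
)

# Query time: the question must be tokenized with the SAME substitutions,
# so "<Person_1>" in the query matches "<Person_1>" in the stored text.
query, _ = tokenize("What was Hans Mueller's issue?", {"Hans Mueller": "<Person_1>"})
print(doc)
print(query)
```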

Performance Considerations

  • Redaction at ingestion is a one-time cost. You redact documents once when you index them. After that, retrieval is unaffected — the vector store serves redacted text at the same speed.
  • Tokenization adds minimal latency. A typical tokenize call takes 100–200ms for a user question. This is negligible compared to the LLM call, which is usually 1–3 seconds.
  • Detokenization is free. detokenize() is a local string replacement — no API call. It runs in microseconds.
  • Use batch processing for large document sets. blindfold.redact_batch() processes multiple documents in a single API call, and AsyncBlindfold supports concurrent processing.
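The concurrency pattern behind the last point can be sketched as follows, with a trivial stub standing in for the async redact call (the stub's replacement logic is invented; the point is the fan-out, not the redaction):

```python
import asyncio

# Stub standing in for an async redact call against the API.
async def redact_stub(ticket: str) -> str:
    await asyncio.sleep(0)  # simulate an API round trip
    return ticket.replace("hans@example.de", "[EMAIL]")

async def ingest_batch(tickets: list[str]) -> list[str]:
    # One concurrent wave instead of len(tickets) sequential round trips.
    return await asyncio.gather(*(redact_stub(t) for t in tickets))

tickets = [
    "Ticket 1: contact hans@example.de about the refund.",
    "Ticket 2: invoice resent to hans@example.de.",
]
redacted = asyncio.run(ingest_batch(tickets))
print(redacted)
```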

Compliance Checklist

Use this checklist to verify your customer support RAG pipeline meets compliance requirements:

GDPR (EU)

  1. Data minimization. Only anonymized tokens are sent to the LLM provider. Use policy="gdpr_eu" to detect EU-specific entities.
  2. EU data residency. Configure region="eu" to ensure PII processing stays within the EU.
  3. Right to erasure. Since PII is redacted from the vector store, deleting the original data source is sufficient — no PII persists in embeddings.
  4. Audit trail. Blindfold logs every tokenization operation for DPA compliance.
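The erasure point above is simple in practice precisely because ingestion already removed the PII. A stand-in sketch (a dict in place of the vector store; with chromadb the deletion would be `collection.delete(ids=...)`):

```python
# Stand-in vector store keyed by ticket id. Documents were redacted at
# ingestion, so erasure is just record deletion -- there is no residual
# PII in the stored text or embeddings to scrub separately.
store = {
    "t-7": {"document": "[EMAIL] reported a duplicate charge.", "embedding": [0.1, 0.2]},
    "t-8": {"document": "[EMAIL] asked for an invoice copy.", "embedding": [0.3, 0.1]},
}

def erase_customer(ticket_ids: list[str]) -> None:
    # With chromadb this would be collection.delete(ids=ticket_ids);
    # the stand-in just drops the records.
    for tid in ticket_ids:
        store.pop(tid, None)

erase_customer(["t-7"])
print(sorted(store))
```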

CCPA (California)

  1. Do not sell personal information. Redacted data in the vector store contains no personal information that could be considered a “sale” under CCPA.
  2. Right to know. You can track which entities were detected and redacted using the response metadata.
  3. Right to delete. Same as GDPR — no PII in the vector store.

SOC 2

  1. Data encryption in transit. All Blindfold API calls use TLS 1.2+.
  2. Access controls. PII never reaches the LLM provider, reducing the blast radius of any provider-side breach.
  3. Monitoring. Blindfold provides usage dashboards and API logs for audit purposes.

Try It Yourself

Clone the complete customer support RAG example and have it running in minutes:


Free plan includes 500K characters/month. No credit card required.