Custom GPT

Problem: How do you build an AI agent that doesn't just sound like an expert, but actually thinks like one?

I’m working on a project where I’m trying to simulate a specific academic expert (a professor) as an AI agent. The goal isn’t just to copy their writing tone or structure. I want the agent to actually reason, analyze, and evaluate the way this person does in real life.

So far, I’ve collected a solid set of materials: research papers, editorials, podcast transcripts, and some peer review commentaries. I’ve also written some “reasoning templates” based on how this expert approaches scientific problems. I want the agent to use all of this to guide how it gives feedback or writes content.
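To give a sense of what I mean by "reasoning templates," here's a minimal sketch of how one could be turned into a system prompt. The template steps and field names are illustrative placeholders, not the expert's actual process:

```python
# Sketch: turning a "reasoning template" into a system prompt.
# The template content below is illustrative -- the real steps would
# be distilled from the expert's papers, reviews, and transcripts.

REASONING_TEMPLATE = """\
You are emulating the reasoning style of a specific academic expert.
When evaluating a claim, follow these steps:
1. Restate the core claim in one sentence.
2. Identify the strongest evidence for and against it.
3. Ask what experiment or data would change the assessment.
4. Give a verdict with an explicit confidence level.
"""

def build_system_prompt(template: str, retrieved_excerpts: list[str]) -> str:
    """Combine a reasoning template with excerpts of the expert's own
    writing, so the model gets both a process to follow and grounded
    examples of how the expert actually argues."""
    excerpt_block = "\n\n".join(
        f"[Excerpt {i + 1}]\n{text}" for i, text in enumerate(retrieved_excerpts)
    )
    return (
        f"{template}\n"
        "Ground your analysis in these excerpts from the expert's work:\n\n"
        f"{excerpt_block}"
    )
```

The point is to separate *how* the expert reasons (the template) from *what* they've said (the excerpts), so each can be updated independently.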

I’ve explored a few approaches:

  1. Prompt-only agents: good for capturing tone, but weak on reasoning.
  2. Custom GPT with uploaded documents: better grounding, but it doesn't evolve with new material unless I re-upload the files.
  3. Vector database with retrieval-based prompting: lets me feed the agent dynamic content from the expert's work and inject it into the prompt at runtime. More flexible and scalable.
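To make option 3 concrete, here's a minimal self-contained sketch of the retrieve-then-inject flow. I'm using a toy bag-of-words similarity so it runs without any services; in a real setup you'd swap in actual embeddings (e.g., an embeddings API) and a proper vector DB:

```python
import math
from collections import Counter

# Toy "embedding": word counts. Stands in for a real embeddings API
# call purely so this example is runnable on its own.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Inject the retrieved excerpts into the prompt at runtime."""
    excerpts = "\n---\n".join(retrieve(query, corpus))
    return (
        "Using the excerpts below from the expert's own writing, "
        "respond in their analytical style.\n\n"
        f"Excerpts:\n{excerpts}\n\nTask: {query}"
    )
```

The final prompt then goes to the chat model as usual; the retrieval step is what keeps it grounded in the expert's actual work rather than a generic persona.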

I’m using n8n for the orchestration and OpenAI for the model. I’m thinking of embedding more of this expert’s work into a vector DB and retrieving relevant pieces depending on the task.
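For getting the material into the vector DB, I'm planning to chunk documents before embedding them, with overlap so an argument that spans a chunk boundary isn't cut in half. A sketch of that step (the chunk and overlap sizes here are guesses; you'd tune them to your embedding model):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[dict]:
    """Split a document into overlapping word-level chunks with metadata,
    ready to be embedded and upserted into a vector DB."""
    assert 0 <= overlap < chunk_size
    words = text.split()
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + chunk_size]
        chunks.append({
            "text": " ".join(piece),
            "start_word": start,  # metadata for tracing back to the source doc
        })
        if start + chunk_size >= len(words):
            break
    return chunks
```

Each chunk would then get embedded and stored with its metadata, so retrieved excerpts can be traced back to the original paper or transcript.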

Would love to hear how others have tackled this. Especially if you’ve tried building agents that reflect a person’s thinking, not just their writing style. What’s worked for you, and what hasn’t?