We built traceAI, an open-source tool for tracing LLM calls in production

Hey everyone 👋

Been lurking here for a while and finally have something to share.

We've been building traceAI, an open-source tool for tracing 
LLM calls in production. It captures inputs, outputs, latency, 
costs, and errors with minimal setup.
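To give a feel for the kind of data we mean, here's a rough sketch of what capturing inputs, outputs, latency, and errors around an LLM call looks like. This is illustrative only, not traceAI's actual API — the decorator, the `TRACES` list, and the stubbed completion function are all made up for the example:

```python
import functools
import json
import time

TRACES = []  # stand-in for a real collector/exporter


def trace_llm_call(fn):
    """Record inputs, output, latency, and errors for one call.

    Hypothetical sketch -- not traceAI's real interface.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {
            "fn": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
        }
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            record["output"] = result
            return result
        except Exception as exc:
            record["error"] = repr(exc)
            raise
        finally:
            record["latency_ms"] = round((time.perf_counter() - start) * 1000, 2)
            TRACES.append(record)
            print(json.dumps(record, default=str))  # in practice: ship to a backend
    return wrapper


@trace_llm_call
def fake_completion(prompt):
    # Stub standing in for a real LLM client call.
    return "stubbed response"


fake_completion("hello")
```

A real setup would export these records to a backend instead of printing them, and would also attach token counts and per-model pricing to get cost per call.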

We built it because we kept hitting the same problem: once an 
LLM app goes to production, debugging what actually happened 
becomes really painful.

Repo is live now: https://github.com/future-agi/traceAI

Curious if others have run into the same observability gaps. 
What's your current setup for monitoring LLM apps in prod?
