Hey deeplearning.ai community!
I’m wondering if anyone else has run into this problem: after adding MCP servers to Claude Code, usage skyrockets to the point where a single session burns through the entire token budget.
I’m guessing some MCP servers are set up in a way that keeps context lean, while others inject a massive payload of context (e.g., lots of large tool definitions)?
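For what it’s worth, one workaround I’ve seen suggested is trimming the MCP config down to only the servers you actually need for the session, since (as I understand it) every configured server’s tool definitions get loaded into context at startup. A minimal sketch of a project-scoped `.mcp.json` with a single server, assuming the standard Claude Code config shape (the filesystem server and path here are just illustrative examples):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

Commenting out or removing the servers you’re not using for a given task should keep the baseline context much smaller, though I’d be curious whether others have measured the difference.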
Just wanted to pose a general question here and see if anyone has good tips or workarounds for this. Thanks!