In Module 2 of the Agentic AI course, Andrew explains that going outside the LLM for reflection is powerful and helps the LLM refine its output. The use case given was running code and feeding the results back. I'm curious what other scenarios people are using this pattern for, and what external feedback sources they are using that don't relate to code?
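For context on the code-execution case I'm asking beyond: here is a minimal sketch of what I understand the pattern to be. The `generate` callable is a hypothetical stand-in for an LLM call (not a real API); the point is that the feedback comes from actually executing the code, not from the model critiquing itself.

```python
import traceback


def run_and_capture(code: str):
    """Execute candidate code and return (success, feedback).

    The feedback string (a traceback on failure) is the external
    signal we'd pass back to the LLM for reflection.
    """
    env = {}
    try:
        exec(code, env)
        return True, "Code ran without errors."
    except Exception:
        return False, traceback.format_exc()


def reflect_loop(generate, task: str, max_rounds: int = 3) -> str:
    """Reflection loop with external feedback.

    `generate(prompt)` is a hypothetical LLM call. Each round, the
    execution result is appended to the prompt so the model can revise.
    """
    prompt = task
    code = generate(prompt)
    for _ in range(max_rounds):
        ok, feedback = run_and_capture(code)
        if ok:
            return code
        prompt = (f"{task}\nYour previous attempt failed with:\n"
                  f"{feedback}\nRevise the code.")
        code = generate(prompt)
    return code
```

My question is what plays the role of `run_and_capture` when the output isn't code, e.g. a search, a database query, a calculator, or some other verifier.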