I plan to use an OpenAI LLM for retrieval-augmented generation and want to ensure responses contain no bias or incorrect information. The 'LLMOps' course discusses implementing a post-processing filter, but waiting for the complete response before applying it introduces latency. Streaming the response, as ChatGPT does, is an option, but it makes post-processing harder because the filter never sees the full text at once. I'm looking for community guidance on how to handle this situation.
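One possible compromise, sketched below, is to buffer the streamed tokens and run the filter on each completed sentence rather than on the whole response, so the user sees filtered output with only per-sentence latency. This is a hypothetical illustration, not from the course: `filtered_stream`, `postprocess`, and the `redact` example are all assumed names, and the token list stands in for chunks you would pull from a streaming API response.

```python
from typing import Callable, Iterable, Iterator

def filtered_stream(
    tokens: Iterable[str],
    postprocess: Callable[[str], str],
    delimiters: str = ".!?\n",
) -> Iterator[str]:
    """Hypothetical sketch: buffer streamed tokens and apply a
    post-processing filter per sentence instead of waiting for
    the full response."""
    buffer = ""
    for tok in tokens:
        buffer += tok
        # Flush whenever the buffered text ends at a sentence boundary.
        trimmed = buffer.rstrip()
        if trimmed and trimmed[-1] in delimiters:
            yield postprocess(buffer)
            buffer = ""
    if buffer:  # flush any trailing partial sentence
        yield postprocess(buffer)

# Example: a trivial filter that redacts a flagged term.
def redact(text: str) -> str:
    return text.replace("badword", "[removed]")

# Simulated stream chunks; in practice these would come from the API.
chunks = ["The sky ", "is blue. ", "No badword ", "here."]
out = list(filtered_stream(chunks, redact))
# → ["The sky is blue. ", "No [removed] here."]
```

The trade-off is that a sentence-level filter cannot catch problems that only emerge across sentences; for stricter guarantees you would still need a final pass over the complete response.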