In the context of LLMs, “hallucination” refers to the phenomenon where a model generates text that is incorrect, nonsensical, or ungrounded in reality. Since LLMs are not databases or search engines, they do not cite the sources their responses are based on. These models generate text by extrapolating from the prompt you provide. The result of that extrapolation is not necessarily supported by any training data; it is simply the continuation most correlated with the prompt.
Join us for this workshop, where we will showcase powerful metrics for evaluating the quality of inputs (data quality, RAG context quality, etc.) and outputs (hallucinations), with a focus on both RAG and fine-tuning use cases.
What attendees can expect to take away from the workshop:
- A deep dive into research-backed metrics for evaluating the quality of inputs (data quality, RAG context quality, etc.) and outputs (hallucinations) when building LLM-powered applications
- An evaluation and experimentation framework for prompt engineering with RAG, as well as for fine-tuning with your own data
- A demo-led practical guide to building guardrails and mitigating hallucinations in LLM-powered applications
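To make the guardrail idea above concrete, here is a toy grounding check in Python: it flags answer sentences with little word overlap with the retrieved context as possible hallucinations. The function name, the overlap heuristic, and the threshold are illustrative assumptions for this sketch, not Galileo's actual metrics or API.

```python
import re

def flag_ungrounded_sentences(answer: str, context: str, threshold: float = 0.5):
    """Return answer sentences whose word overlap with the context
    falls below `threshold` (toy hallucination heuristic, not Galileo's)."""
    context_words = set(re.findall(r"[a-z]+", context.lower()))
    flagged = []
    # Naive sentence split on terminal punctuation.
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        if not words:
            continue
        overlap = len(words & context_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

context = "The Eiffel Tower is 330 metres tall and stands in Paris."
answer = "The Eiffel Tower stands in Paris. It was painted gold in 1999."
print(flag_ungrounded_sentences(answer, context))
# → ['It was painted gold in 1999.']
```

Research-grade hallucination metrics of the kind the workshop covers use model-based signals rather than word overlap, but the shape is the same: score each output span against its grounding context and intervene when the score is low.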
This event is inspired by DeepLearning.AI’s GenAI short courses, created in collaboration with AI companies across the globe. Our courses help you learn new skills, tools, and concepts efficiently within 1 hour.
About Galileo
At Galileo we are building the first algorithm-powered LLMOps platform for the enterprise. Galileo provides ML teams with an intelligent ML data bench to collaboratively improve data quality across their model workflows – from pre-training to post-production. Galileo currently powers ML teams across the Fortune 500 as well as startups in multiple industries.
Vikram is the co-founder and CEO at Galileo – an evaluation, experimentation and observability platform for language models.
Prior to Galileo, Vikram led Product Management at Google AI, where his team leveraged language models to build models for the Fortune 2000 across retail, financial services, healthcare and contact centers. He also led Product for Google Pay in India, taking it from 0 to 100M monthly users – the most downloaded fintech app globally. Vikram was one of the early members of the Android OS team, and believes GenAI is ushering in a similar technological wave to mobile.
Atindriyo is the co-founder and CTO at Galileo – an evaluation, experimentation and observability platform for language models.
Prior to Galileo, Atindriyo was an Engineering Leader at Uber AI, responsible for various Machine Learning initiatives at the company. He was one of the architects of Michelangelo, the world’s first feature store, and one of the early engineers on Siri at Apple, where he built foundational technology and infrastructure that democratized ML at Apple.