Introducing Building with Llama 4, a short course created in collaboration with Meta and taught by Amit Sangani, Director of Partner Engineering for Meta’s AI team.
Meta’s new Llama 4 adds three models to its family and introduces a Mixture-of-Experts (MoE) architecture, which makes the models more efficient to serve.
In this course, you’ll work with two of the three new Llama 4 models. The first is “Maverick,” a 400-billion-parameter model with 128 experts and 17 billion active parameters. The second is “Scout,” a 109-billion-parameter model with 16 experts and 17 billion active parameters. Maverick and Scout support long context windows of up to one million and 10 million tokens, respectively — the latter is enough to take in very large GitHub repos for analysis.
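To see why an MoE model can have far fewer active parameters than total parameters, here is a minimal, illustrative sketch of top-k expert routing. This is not Llama 4’s actual implementation: the function name `moe_forward`, the toy dimensions, linear experts, and top-1 routing are all assumptions chosen to keep the example small.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token, experts, router_w, top_k=1):
    """Route a token to its top-k experts and mix their outputs.

    Only the top-k experts actually run for each token, which is how a
    model can hold many total parameters while using far fewer active
    parameters per token. (Toy sketch, not Llama 4's real router.)
    """
    scores = router_w @ token               # one routing score per expert
    top = np.argsort(scores)[-top_k:]       # indices of the top-k experts
    gates = softmax(scores[top])            # renormalize their weights
    # Run only the selected experts and combine their outputs
    return sum(g * (experts[i] @ token) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16                        # toy sizes (Scout has 16 experts)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router_w = rng.normal(size=(n_experts, d))
out = moe_forward(rng.normal(size=d), experts, router_w, top_k=1)
```

With `top_k=1`, only one of the 16 expert matrices is multiplied per token, so the compute per token scales with the active experts rather than the full parameter count.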
In hands-on lessons, you’ll build apps using Llama 4’s long context and its new multimodal capabilities, including reasoning across multiple images and “image grounding,” in which the model identifies elements and reasons within specific regions of an image. You’ll also learn about Llama’s newest tools: a prompt optimization tool that automatically improves system prompts, and a synthetic data kit that generates high-quality datasets for fine-tuning your model.