Meta released Llama 3.2, a family of vision-capable large language models and lightweight text models for edge devices. The new lineup includes 11 billion and 90 billion parameter multimodal models that can reason about images, outperforming Claude 3 Haiku and GPT-4o mini on visual understanding tasks. Meta also launched 1 billion and 3 billion parameter models optimized for on-device use with 128K token context lengths, with the 3B model outperforming Gemma 2 2.6B and Phi 3.5-mini on tasks such as instruction following and summarization. The company also introduced Llama Stack distributions to simplify deployment across various environments, along with updated safety tools, including Llama Guard 3 for vision tasks. (Meta AI)
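For anyone who wants to try the new vision models, here is a minimal sketch of prompting the 11B vision-instruct checkpoint with an image through Hugging Face transformers. The model ID, the placeholder image path, and the Mllama classes are assumptions based on the Hugging Face release, not details from Meta's announcement, and the checkpoint is gated, so access must be requested first.

```python
# Minimal sketch (assumption): prompting Llama 3.2 11B Vision via Hugging Face
# transformers. Requires access to the gated meta-llama checkpoint and a recent
# transformers release with Mllama support; "photo.jpg" is a placeholder path.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("photo.jpg")  # placeholder image

# Build a chat-style prompt that interleaves the image with a text question.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe what is happening in this image."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image, prompt, add_special_tokens=False, return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```

The same chat-template pattern applies to the text-only 1B and 3B models; they simply omit the image entry in the message content.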