Hi,
The Malaga-AI community is launching the Study Group Initiative for the third year in a row. In previous editions we collaborated with the DLAI community, and several members here made great contributions.
This year's topic is AI Safety & Evals, focusing on trust, fairness, and LLM alignment.
We're looking for motivated people to run their own 3-month research projects on topics such as hallucination, bias, calibration, or sycophancy.
What’s included: access to open & closed LLMs, a private eval platform, team support, and a completion certificate.
All the info, the proposal template, and instructions are in our Discord channel:
certified_study_group_ai_safety_evals
Deadline: April 12th. Selected students announced: April 14th.
Join us and push the frontier of responsible AI! Looking forward to receiving your proposals.
Best wishes,
A. Rosa Castillo