Safety-AI Study Group for AI-Communities

Hi,

The Malaga-AI community is launching the Study Group Initiative for the third year in a row. In previous editions we collaborated with the DLAI community, and some members here made great contributions.

This year’s topic is AI Safety & Evals, focusing on trust, fairness, and LLM alignment.

We’re looking for motivated people to run their own 3-month research project on topics like hallucination, bias, calibration, or sycophancy.

What’s included: access to open and closed LLMs, a private eval platform, team support, and a completion certificate.

All the info, the proposal template, and instructions are in our Discord channel:
certified_study_group_ai_safety_evals

Deadline: April 12th — Selected students announced: April 14th

Join us and push the frontier of responsible AI! We look forward to receiving your proposals.

Best wishes,

A. Rosa Castillo

Update: the deadline has been extended to April 17th to give more people the chance to apply. I'm sharing the doc explaining the project here. Feel free to ask me any questions about how to participate.

Join the Discord channel: Malaga-AI

The docs are here: Discord

Best,

A. Rosa Castillo