Project Summary
We are currently witnessing the rapid ascent of Large Language Models (LLMs). While this technological advancement is remarkable, it is equally crucial to acknowledge and address their inherent limitations and, perhaps more significantly, their security vulnerabilities. With this in mind, our collaborative project was conceived to gather insights from historical and contemporary exploits of LLMs. Through informative talks, we aim to share these insights with the community, fostering greater awareness and understanding.
What you can learn
Security issues in LLMs
Privacy issues in LLMs
Differences between open and closed LLMs
Different exploits and how to find them
How you can participate
Ask to be added to the GitHub repo with your GitHub account and start researching exploits of different LLMs. If you find an interesting or relevant exploit to share, first check the issues list in case another contributor has already added it. If not, open a new issue following the template, including the model and exploit name, and tag it "solved" if the exploit has already been fixed and can no longer be reproduced. A hypothetical example is sketched below.
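For illustration only, here is a rough sketch of what a reported issue could look like. The model name, exploit name, and field layout are invented for this example; the actual template in the repo is the authoritative format:

    Title: [GPT-3.5] Prompt injection via role-play instructions
    Tag: solved (only if the exploit no longer reproduces)
    Description: what the exploit achieves and why it matters
    Steps to reproduce: the exact prompt(s) and settings used
    Observed output: the problematic model response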
Outcome of the project
Once we reach the end of the timeline, we will select the best exploits for a final online talk to share with the Deeplearning.AI community.
Welcome aboard!
The GitHub repo is not public, which is why you cannot see it. To get access, send me your GitHub username and I will add you as a contributor to the project.
OK, you should all have received an invitation to join the project. This weekend we will post some issues we already discovered during the first phase of this project, which you can use as templates for how to report an exploit.
We will also organize a short online catch-up next week with all collaborators to answer questions and explain in more detail how the collaboration will be organized.