`exec` with function calling seems vulnerable to prompt injection/unsanitized inputs

Is there a security vulnerability here if we take a string from an LLM and then run it through exec()? When I have done function calling/tools in the past with OpenAI, the function calls came in as JSON that needed to be parsed and then fed in as arguments. Using exec seems to have a lot of advantages when it comes to nested functions, but I’m not sure about this security vulnerability.

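One way to keep the nested-call convenience without handing the model's string to `exec()`, as an added example that is not part of the original post: parse the string with `ast`, and walk the tree so that only calls to an allowlisted registry with literal arguments are ever executed. The `TOOLS` registry and the `run_tool_call` / `is_rejected` names are assumptions for this illustration.

```python
import ast

# Hypothetical tool registry (assumed, not from the post): the model may
# only name these callees.
TOOLS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "upper": lambda s: s.upper(),
}
MAX_DEPTH = 8  # cap nesting so a pathological string cannot exhaust the stack

class UnsafeCall(ValueError):
    """Raised when the model output is anything but allowlisted calls."""

def _eval_node(node, depth=0):
    if depth > MAX_DEPTH:
        raise UnsafeCall("nesting too deep")
    if isinstance(node, ast.Constant):
        return node.value  # literal arguments only: numbers, strings, bools, None
    if isinstance(node, ast.Call):
        # Callee must be a bare allowlisted name: no attribute access
        # (os.system), no subscripts, no lambdas, no keyword arguments.
        if not isinstance(node.func, ast.Name) or node.func.id not in TOOLS:
            raise UnsafeCall("callee not in allowlist")
        if node.keywords:
            raise UnsafeCall("keyword arguments are not permitted")
        args = [_eval_node(a, depth + 1) for a in node.args]
        return TOOLS[node.func.id](*args)
    # Names, attributes, operators, comprehensions, starred args: all rejected.
    raise UnsafeCall(f"disallowed syntax: {type(node).__name__}")

def run_tool_call(text):
    """Parse the model's string as one expression and evaluate it through the
    allowlist. exec()/eval() are never used."""
    try:
        tree = ast.parse(text, mode="eval")
    except SyntaxError as e:
        raise UnsafeCall(f"not a single expression: {e}") from None
    return _eval_node(tree.body)

def is_rejected(text):
    """True if run_tool_call refuses the string (handy for tests and logging)."""
    try:
        run_tool_call(text)
    except UnsafeCall:
        return True
    return False

if __name__ == "__main__":
    print(run_tool_call("add(mul(2, 3), 4)"))  # nested calls still work: 10
    print(is_rejected("__import__('os').getcwd()"))  # attribute call: True
```

Negative literals such as `-1` parse as `UnaryOp` and are rejected by this walker; extend `_eval_node` deliberately if a tool needs them rather than widening the allowlist to whole node categories.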