Is there a security vulnerability here if we take a string from an LLM and run it through exec()? When I have done function calling/tools in the past with OpenAI, the function calls came in as JSON that needed to be parsed and then fed in as arguments. Using exec seems to have a lot of advantages when it comes to nested function calls, but I'm not sure about the security implications.
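For context, here is a minimal sketch of the JSON-based dispatch pattern I mean (the tool name `get_weather` and the payload shape are just illustrative, not the actual OpenAI schema). The key difference is that the model's output is treated as data to parse, not code to run, and only whitelisted functions can be invoked:

```python
import json

# Hypothetical tool function (illustrative name, not a real API).
def get_weather(city: str) -> str:
    return f"Weather for {city}"

# Whitelist-based dispatch: only registered tools can run, and
# arguments arrive as parsed JSON data, never as executable code.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    call = json.loads(tool_call_json)
    func = TOOLS.get(call["name"])
    if func is None:
        raise ValueError(f"Unknown tool: {call['name']}")
    return func(**call["arguments"])

# The model's string is parsed, validated, and routed -- not executed.
result = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
print(result)

# By contrast, exec() would treat the model's string as arbitrary
# Python, so a malicious or confused model could emit something like
#   __import__("os").system("...")
# and it would run with your process's full privileges.
```

With exec(), any string the model produces (or that a user injects into the prompt) runs with the same privileges as your process, which is why it is generally considered unsafe without heavy sandboxing.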