Gemini throws a bad request error when passing human input

When I give user input, it throws the following error. I have also set human_input=True.

=====
## Please provide feedback on the Final Result and the Agent's actions. Respond with 'looks good' or a similar phrase when you're satisfied.
=====

2025-01-19 22:14:51,453 - 4920 - llm.py-llm:187 - ERROR: LiteLLM call failed: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.contents: contents is not specified\n",
    "status": "INVALID_ARGUMENT"
  }
}



LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

 Error during LLM call to classify human feedback: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.contents: contents is not specified\n",
    "status": "INVALID_ARGUMENT"
  }
}
. Retrying... (1/3)
2025-01-19 22:14:53,492 - 4920 - llm.py-llm:187 - ERROR: LiteLLM call failed: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.contents: contents is not specified\n",
    "status": "INVALID_ARGUMENT"
  }
}



LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

 Error during LLM call to classify human feedback: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.contents: contents is not specified\n",
    "status": "INVALID_ARGUMENT"
  }
}
. Retrying... (2/3)
2025-01-19 22:14:55,492 - 4920 - llm.py-llm:187 - ERROR: LiteLLM call failed: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.contents: contents is not specified\n",
    "status": "INVALID_ARGUMENT"
  }
}



LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

 Error during LLM call to classify human feedback: litellm.BadRequestError: VertexAIException BadRequestError - {
  "error": {
    "code": 400,
    "message": "* GenerateContentRequest.contents: contents is not specified\n",
    "status": "INVALID_ARGUMENT"
  }
}
. Retrying... (3/3)
 Error processing feedback after multiple attempts.

@Mubsi I hope you can help me with this issue

Hi @OmarNahdi,

  • Are you running the course locally or on the platform?
  • Which lesson’s lab is this from?
  • Where exactly in the lesson are you getting the error?

Hey @Mubsi,

  • I’m running this code locally.
  • This is from L5: Automate Event Planning.
  • In the lesson, the instructor shows how to give input to the agents by setting human_input=True on the agent’s task. The error happens when I give input to the agent (at runtime, to confirm whatever the agent is asking). It then retries 3 times in a row, and once all retries fail, it completely ignores the user’s request and moves on to the next agent.

Right now this happens only with Gemini and with no other model. I hope the explanation above clears things up: the agent is trying to get the user to confirm its info/work, and no matter what input I give, it fails every single time.
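For what it’s worth, here is one plausible mechanism for the error, sketched in plain Python. This is my guess, not the actual crewai/litellm code: if the feedback-classification prompt ends up carried only as a system message, and the Gemini adapter moves system messages into a separate system_instruction field, the resulting `contents` array is empty, which is exactly what the 400 complains about.

```python
# Hypothetical sketch (function name and message shapes are mine, not
# crewai's or litellm's) of how an empty "contents" can arise when a
# prompt is sent purely as a system message to a Gemini-style backend.

def to_gemini_request(messages):
    """Split OpenAI-style chat messages into a Gemini-style request:
    system messages go to system_instruction, the rest to contents."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    contents = [
        {"role": "user" if m["role"] == "user" else "model",
         "parts": [{"text": m["content"]}]}
        for m in messages
        if m["role"] != "system"
    ]
    return {"system_instruction": system_parts, "contents": contents}

# A prompt carried only in the system message leaves "contents" empty,
# which matches the "contents is not specified" INVALID_ARGUMENT error.
req = to_gemini_request([{"role": "system", "content": "Classify: 'looks good'"}])
print(req["contents"])  # → []
```

Backends that fold system messages into the regular message list would not hit this, which would explain why other models handle the same feedback fine.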

Hi @OmarNahdi,

I just tried the notebook on the platform, while the execution did show EOFError: EOF when reading a line in between, it was able to complete successfully and output the results.

Since you are running the notebook locally, there could be a number of other things causing this.

As a start, I’d suggest checking whether you are using the same library versions as the ones used on the platform. If that doesn’t help, read the documentation on how to use Gemini with CrewAI, and update your code to the latest version of the crewai library.

Best,
Mubsi

Hey @Mubsi, thanks for the help and for trying it out. The output is visible on my end too, but the LLM rejecting human input shouldn’t be happening, since I’m on the latest version of crewai. Maybe it’s LiteLLM’s fault: other models (like Llama 3.3) handled it perfectly fine with zero issues, so perhaps the Gemini classes just aren’t handling the human input properly. I hope to see some help from their community. Thanks again for the help.
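In case it helps anyone hitting the same wall before a fix lands upstream, here is a workaround sketch. The function name and fallback text are mine, not part of crewai or litellm: validate the message list before the LLM call and inject a minimal user turn when it would otherwise contain no non-system messages, so a Gemini backend never receives an empty contents.

```python
# Hypothetical pre-flight guard (my own naming, not a crewai API): Gemini
# returns HTTP 400 "contents is not specified" if the request carries no
# non-system messages, so add a minimal user turn in that case.

def ensure_nonempty_messages(messages, fallback_text="(no feedback provided)"):
    """Return messages unchanged if they contain at least one non-system
    turn with content; otherwise append a minimal user message."""
    has_turn = any(m.get("role") != "system" and m.get("content") for m in messages)
    if not has_turn:
        return messages + [{"role": "user", "content": fallback_text}]
    return messages

msgs = ensure_nonempty_messages([{"role": "system", "content": "Classify the feedback."}])
print(msgs[-1])  # → {'role': 'user', 'content': '(no feedback provided)'}
```

You could apply a guard like this in a thin wrapper around the LLM call that classifies the human feedback; messages that already contain a user turn pass through untouched.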
