InternalServerError: 500 error During task with name 'agent'

Hi,
I get the error below in Lesson 3 when I try to get a response:
response = response_agent.invoke(
    {"messages": [{"role": "user", "content": "Jim is my friend"}]},
config=config
)

ERROR:
InternalServerError: 500 error
During task with name 'agent' and id 'ecdbd979-d488-e527-a648-58b2ac6616b1'

Kindly help.

@skr687

Try again after some time, probably after 24-48 hours. If the error still occurs, let us know here. When you try again, make sure to clear your browser cache and history.
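Independently of the eventual fix, a transient 500 can sometimes be worked around by retrying with backoff (the Anthropic client already retries a couple of times, as the traceback below shows). A generic stdlib sketch; the `invoke_with_backoff` helper and the `flaky` stand-in are hypothetical, not part of the notebook:

```python
import time

def invoke_with_backoff(invoke, payload, retries=3, base_delay=1.0):
    """Retry a flaky call with exponential backoff.

    `invoke` stands in for whatever call is failing
    (e.g. response_agent.invoke); this is a generic sketch.
    """
    for attempt in range(retries + 1):
        try:
            return invoke(payload)
        except Exception:
            if attempt == retries:
                raise  # out of retries, surface the original error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Toy demonstration: a callable that fails twice, then succeeds.
calls = {"n": 0}
def flaky(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("500 error")
    return f"ok: {payload}"

result = invoke_with_backoff(flaky, "Jim is my friend", base_delay=0.01)
print(result)   # ok: Jim is my friend
```

Of course, if the server keeps returning 500 for days, no amount of client-side retrying will help; then it needs the fix on the course side.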


You are charging every month, and out of that it won't work for two days. It has already been two days; this is ridiculous. It looks like the subscription will be over by the time your service is up, and I will have to pay again.

I didn't expect this from deeplearning.ai.

An internal server error does not depend entirely on deeplearning.ai. If you understand the AI technicalities, any course involves multiple interfaces, and this error could come from any of the dependencies used while creating a lab.

@Lesly can you please look into this and check whether the issue is on our side or in developer dependencies?

I had the same error in "long-term-agentic-memory-with-langgraph", so I moved to this course, and now I face the same error here.

Thank you for reporting this. I will flag this issue to be highly prioritized so your concern gets addressed.

Just to be sure, can you mention the lab name, so that I can check whether I also encounter the same error?

long-term-agentic-memory-with-langgraph → lesson_3
I also get an error in fine-tuning-and-reinforcement-learning-for-llms-intro-to-post-training → Module 1 → graded lab M1_G1_Inspecting_Finetuned_vs_Base_Model. I will raise that one in the relevant topic.

Can you share the link? I cannot find the lab name you mentioned in that course.

response = response_agent.invoke(
    {"messages": [{"role": "user", "content": "Jim is my friend"}]},
config=config
)


Agreed, I got the same error. I think the model the agent uses seems to have been deprecated, causing this issue. @skr687 thank you for notifying us.

Posting the complete error here so staff can see it:


InternalServerError                       Traceback (most recent call last)
Cell In[35], line 1
----> 1 response = response_agent.invoke(
      2     {"messages": [{"role": "user", "content": "Jim is my friend"}]},
      3     config=config
      4 )

File /usr/local/lib/python3.11/site-packages/langgraph/pregel/__init__.py:2069, in Pregel.invoke(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, **kwargs)
   2067 else:
   2068     chunks = []
-> 2069 for chunk in self.stream(
   2070     input,
   2071     config,
   2072     stream_mode=stream_mode,
   2073     output_keys=output_keys,
   2074     interrupt_before=interrupt_before,
   2075     interrupt_after=interrupt_after,
   2076     debug=debug,
   2077     **kwargs,
   2078 ):
   2079     if stream_mode == "values":
   2080         latest = chunk

File /usr/local/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1724, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
   1718     # Similarly to Bulk Synchronous Parallel / Pregel model
   1719     # computation proceeds in steps, while there are channel updates.
   1720     # Channel updates from step N are only visible in step N+1
   1721     # channels are guaranteed to be immutable for the duration of the step,
   1722     # with channel updates applied only at the transition between steps.
   1723     while loop.tick(input_keys=self.input_channels):
-> 1724         for _ in runner.tick(
   1725             loop.tasks.values(),
   1726             timeout=self.step_timeout,
   1727             retry_policy=self.retry_policy,
   1728             get_waiter=get_waiter,
   1729         ):
   1730             # emit output
   1731             yield from output()
   1732 # emit output

File /usr/local/lib/python3.11/site-packages/langgraph/pregel/runner.py:230, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
    228 t = tasks[0]
    229 try:
--> 230     run_with_retry(
    231         t,
    232         retry_policy,
    233         configurable={
    234             CONFIG_KEY_SEND: partial(writer, t),
    235             CONFIG_KEY_CALL: partial(call, t),
    236         },
    237     )
    238     self.commit(t, None)
    239 except Exception as exc:

File /usr/local/lib/python3.11/site-packages/langgraph/pregel/retry.py:40, in run_with_retry(task, retry_policy, configurable)
     38     task.writes.clear()
     39     # run the task
---> 40     return task.proc.invoke(task.input, config)
     41 except ParentCommand as exc:
     42     ns: str = config[CONF][CONFIG_KEY_CHECKPOINT_NS]

File /usr/local/lib/python3.11/site-packages/langgraph/utils/runnable.py:506, in RunnableSeq.invoke(self, input, config, **kwargs)
    502 config = patch_config(
    503     config, callbacks=run_manager.get_child(f"seq:step:{i + 1}")
    504 )
    505 if i == 0:
--> 506     input = step.invoke(input, config, **kwargs)
    507 else:
    508     input = step.invoke(input, config)

File /usr/local/lib/python3.11/site-packages/langgraph/utils/runnable.py:262, in RunnableCallable.invoke(self, input, config, **kwargs)
    260     context = copy_context()
    261     context.run(_set_config_context, child_config)
--> 262     ret = context.run(self.func, *args, **kwargs)
    263 except BaseException as e:
    264     run_manager.on_chain_error(e)

File /usr/local/lib/python3.11/site-packages/langgraph/prebuilt/chat_agent_executor.py:639, in create_react_agent.<locals>.call_model(state, config)
    637 def call_model(state: AgentState, config: RunnableConfig) -> AgentState:
    638     _validate_chat_history(state["messages"])
--> 639     response = cast(AIMessage, model_runnable.invoke(state, config))
    640     # add agent name to the AIMessage
    641     response.name = name

File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:3024, in RunnableSequence.invoke(self, input, config, **kwargs)
   3022             input = context.run(step.invoke, input, config, **kwargs)
   3023         else:
-> 3024             input = context.run(step.invoke, input, config)
   3025 # finish the root run
   3026 except BaseException as e:

File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:5360, in RunnableBindingBase.invoke(self, input, config, **kwargs)
   5354 def invoke(
   5355     self,
   5356     input: Input,
   5357     config: Optional[RunnableConfig] = None,
   5358     **kwargs: Optional[Any],
   5359 ) -> Output:
-> 5360     return self.bound.invoke(
   5361         input,
   5362         self._merge_configs(config),
   5363         **{**self.kwargs, **kwargs},
   5364     )

File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:284, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    273 def invoke(
    274     self,
    275     input: LanguageModelInput,
   (...)
    279     **kwargs: Any,
    280 ) -> BaseMessage:
    281     config = ensure_config(config)
    282     return cast(
    283         ChatGeneration,
--> 284         self.generate_prompt(
    285             [self._convert_input(input)],
    286             stop=stop,
    287             callbacks=config.get("callbacks"),
    288             tags=config.get("tags"),
    289             metadata=config.get("metadata"),
    290             run_name=config.get("run_name"),
    291             run_id=config.pop("run_id", None),
    292             **kwargs,
    293         ).generations[0][0],
    294     ).message

File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:860, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    852 def generate_prompt(
    853     self,
    854     prompts: list[PromptValue],
   (...)
    857     **kwargs: Any,
    858 ) -> LLMResult:
    859     prompt_messages = [p.to_messages() for p in prompts]
--> 860     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:690, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    687 for i, m in enumerate(messages):
    688     try:
    689         results.append(
--> 690             self._generate_with_cache(
    691                 m,
    692                 stop=stop,
    693                 run_manager=run_managers[i] if run_managers else None,
    694                 **kwargs,
    695             )
    696         )
    697     except BaseException as e:
    698         if run_managers:

File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:925, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    923 else:
    924     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 925         result = self._generate(
    926             messages, stop=stop, run_manager=run_manager, **kwargs
    927         )
    928     else:
    929         result = self._generate(messages, stop=stop, **kwargs)

File /usr/local/lib/python3.11/site-packages/langchain_anthropic/chat_models.py:814, in ChatAnthropic._generate(self, messages, stop, run_manager, **kwargs)
    812     return generate_from_stream(stream_iter)
    813 payload = self._get_request_payload(messages, stop=stop, **kwargs)
--> 814 data = self._client.messages.create(**payload)
    815 return self._format_output(data, **kwargs)

File /usr/local/lib/python3.11/site-packages/anthropic/_utils/_utils.py:275, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    273             msg = f"Missing required argument: {quote(missing[0])}"
    274     raise TypeError(msg)
--> 275 return func(*args, **kwargs)

File /usr/local/lib/python3.11/site-packages/anthropic/resources/messages/messages.py:904, in Messages.create(self, max_tokens, messages, model, metadata, stop_sequences, stream, system, temperature, tool_choice, tools, top_k, top_p, extra_headers, extra_query, extra_body, timeout)
    897 if model in DEPRECATED_MODELS:
    898     warnings.warn(
    899         f"The model '{model}' is deprecated and will reach end-of-life on {DEPRECATED_MODELS[model]}.\nPlease migrate to a newer model. Visit https://docs.anthropic.com/en/docs/resources/model-deprecations for more information.",
    900         DeprecationWarning,
    901         stacklevel=3,
    902     )
--> 904 return self._post(
    905     "/v1/messages",
    906     body=maybe_transform(
    907         {
    908             "max_tokens": max_tokens,
    909             "messages": messages,
    910             "model": model,
    911             "metadata": metadata,
    912             "stop_sequences": stop_sequences,
    913             "stream": stream,
    914             "system": system,
    915             "temperature": temperature,
    916             "tool_choice": tool_choice,
    917             "tools": tools,
    918             "top_k": top_k,
    919             "top_p": top_p,
    920         },
    921         message_create_params.MessageCreateParams,
    922     ),
    923     options=make_request_options(
    924         extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    925     ),
    926     cast_to=Message,
    927     stream=stream or False,
    928     stream_cls=Stream[RawMessageStreamEvent],
    929 )

File /usr/local/lib/python3.11/site-packages/anthropic/_base_client.py:1289, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1275 def post(
   1276     self,
   1277     path: str,
   (...)
   1284     stream_cls: type[_StreamT] | None = None,
   1285 ) -> ResponseT | _StreamT:
   1286     opts = FinalRequestOptions.construct(
   1287         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1288     )
-> 1289     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File /usr/local/lib/python3.11/site-packages/anthropic/_base_client.py:966, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    963 else:
    964     retries_taken = 0
--> 966 return self._request(
    967     cast_to=cast_to,
    968     options=options,
    969     stream=stream,
    970     stream_cls=stream_cls,
    971     retries_taken=retries_taken,
    972 )

File /usr/local/lib/python3.11/site-packages/anthropic/_base_client.py:1055, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1053 if remaining_retries > 0 and self._should_retry(err.response):
   1054     err.response.close()
-> 1055     return self._retry_request(
   1056         input_options,
   1057         cast_to,
   1058         retries_taken=retries_taken,
   1059         response_headers=err.response.headers,
   1060         stream=stream,
   1061         stream_cls=stream_cls,
   1062     )
   1064 # If the response is streamed then we need to explicitly read the response
   1065 # to completion before attempting to access the response text.
   1066 if not err.response.is_closed:

File /usr/local/lib/python3.11/site-packages/anthropic/_base_client.py:1104, in SyncAPIClient._retry_request(self, options, cast_to, retries_taken, response_headers, stream, stream_cls)
   1100 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
   1101 # different thread if necessary.
   1102 time.sleep(timeout)
-> 1104 return self._request(
   1105     options=options,
   1106     cast_to=cast_to,
   1107     retries_taken=retries_taken + 1,
   1108     stream=stream,
   1109     stream_cls=stream_cls,
   1110 )

File /usr/local/lib/python3.11/site-packages/anthropic/_base_client.py:1055, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1053 if remaining_retries > 0 and self._should_retry(err.response):
   1054     err.response.close()
-> 1055     return self._retry_request(
   1056         input_options,
   1057         cast_to,
   1058         retries_taken=retries_taken,
   1059         response_headers=err.response.headers,
   1060         stream=stream,
   1061         stream_cls=stream_cls,
   1062     )
   1064 # If the response is streamed then we need to explicitly read the response
   1065 # to completion before attempting to access the response text.
   1066 if not err.response.is_closed:

File /usr/local/lib/python3.11/site-packages/anthropic/_base_client.py:1104, in SyncAPIClient._retry_request(self, options, cast_to, retries_taken, response_headers, stream, stream_cls)
   1100 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
   1101 # different thread if necessary.
   1102 time.sleep(timeout)
-> 1104 return self._request(
   1105     options=options,
   1106     cast_to=cast_to,
   1107     retries_taken=retries_taken + 1,
   1108     stream=stream,
   1109     stream_cls=stream_cls,
   1110 )

File /usr/local/lib/python3.11/site-packages/anthropic/_base_client.py:1070, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1067         err.response.read()
   1069     log.debug("Re-raising status error")
-> 1070     raise self._make_status_error_from_response(err.response) from None
   1072 return self._process_response(
   1073     cast_to=cast_to,
   1074     options=options,
   (...)
   1078     retries_taken=retries_taken,
   1079 )

InternalServerError: 500 error
During task with name 'agent' and id 'bcf99c01-b3a8-ee28-b94f-0f582f2ed84c'

@skr687

I have escalated your issue in multiple places. I understand the inconvenience. Please wait for the staff to respond on this thread.

Thank you for reporting @skr687 !

I just tested it and got the same InternalServerError :frowning:

Doing some research, I just found some issues (GitHub issues i1, i2) related to Anthropic<>LangGraph.

According to the LangGraph documentation, our notebooks might need some further upgrades (from langchain_anthropic import ChatAnthropic, model sonnet-4-5-…). I'll share this with the engineering team.

Note that this issue affects Lesson 3 and Lesson 5, where we use the sonnet model; in Lesson 4 we use 'openai' and the issue does not happen there.
Out of curiosity, I tested using the 'openai' model in Lesson 3, and it worked.

For your peace of mind, this issue has been reported to our team (as a bug/upgrade) for review and resolution. The changes will be reflected in the notebooks soon.
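Until the notebooks are updated, the workaround described above can be applied manually. A minimal sketch, assuming the lesson builds its agent with LangGraph's `create_react_agent` (which accepts a `"provider:model"` string in recent versions); the variable names, tools, and config here are placeholders for whatever the notebook actually uses, and `OPENAI_API_KEY` must be available in the environment:

```python
# Hypothetical sketch of the model swap, not the notebook's exact code.
from langgraph.prebuilt import create_react_agent

# Pass the OpenAI model string instead of the Anthropic one:
response_agent = create_react_agent(
    "openai:gpt-4o",  # model string mentioned later in this thread
    tools=[],         # the lesson's own tools go here
)

response = response_agent.invoke(
    {"messages": [{"role": "user", "content": "Jim is my friend"}]},
    # config=config  # keep the lesson's config if it uses one
)
```

Everything downstream of the model call should behave the same; only the provider behind the agent changes.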

@skr687

You mentioned you got this error in two places or courses; can you please let Lesly know where else you got the error?

No, there was no error in the other lab. It was a graded lab, and I had not coded a missing part, so I was getting a None response and assumed the LLM was not returning anything. Ignore that.

Why is fixing this issue taking this long?

Read the staff response.

OK, thanks. I will use openai:gpt-4o.
