Bug in code? M3_UGL_1 Module 3?

# Pull out the specific tool metadata from the response

tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Run the tool locally
tool_result = get_current_time()

The line above seems incorrect / hard-coded. How will the code know the function name up front?

1 Like

Can I know the concern or issue you encountered with this code? If you have faced any error, please share a screenshot of it.

The code above may be hard-coded based on the tool metadata. If your concern is with tool_call.function.arguments, remember that each response choice is tied to its respective message through the tool call's arguments.

1 Like

I think you’re right. It was hard coded.

1 Like

@lukmanaj Thanks for confirming that the code has a bug, with the function name being hard-coded.

Not a big deal, but maybe the deeplearning.ai team can add a line somewhere stating that it is a bug.

That way other students can benefit.

Hi there,

Here is the explanation of the bug:

How does the code know get_current_time() would be the tool call needed, at code time?

That's the hard-coding, as also confirmed by @lukmanaj below.

HTH

I already know it is hard-coded :grin: but I didn't get your query about a bug. Hard-coding isn't a bug in the code; it is one way of implementing the execution of a particular piece of code or function call.

It's not a bug, and that's why it is not mentioned in the instructions. Remember there are always different ways to execute a function call; a programmer needs to decide, based on the instructions provided, whether to execute the code directly or use the function call.

Here is a screenshot from the course video showing how get_current_time needs to be implemented.

1 Like

Hi Deepti

I think it is wrong code (hence a bug). I've messaged Andrew as well, so we can wait for his response.

My reasoning:

At coding (compile) time, they should not be hard-coding it. Instead, use the function name that the model outputs to execute the function at run time.
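A minimal sketch of what I mean, assuming the notebook's response object and get_current_time function are already in scope (the tool_functions registry is my own hypothetical name, not from the lab):

```python
import json

# Hypothetical registry mapping tool names to the local Python functions.
tool_functions = {"get_current_time": get_current_time}

# Pull out the tool call the LLM asked for.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Dispatch on the name the model returned instead of hard-coding get_current_time().
tool_result = tool_functions[tool_call.function.name](**args)
```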

And the code, in its current form, may be misleading the student.

regards

It is not mandatory to use a function call at compile time. Here the hard-coding is done because the agent is designed so that tools are used to execute get_current_time whenever the LLM is asked about the current time, and at that point

tools=[get_current_time] uses the function call (which you want used directly in the compile-time statement) to get the current time.

I stand by my statement that it's not a bug but a different way of implementing the code.

Also remember that here in agentic AI it is more about the interaction between LLMs: agentic AI tools reflect, plan, use the tool, and create a new set of code to get the desired output. I never stated that what you are saying is wrong, only that it is not a bug.

Good luck!!!

Also, Prof Andrew probably won't be able to reply, so I am tagging the learning technologist of this course so she can clear your doubt.

@lesly.zerna can you please confirm and clear the learner's doubt on this query?

2 Likes

Hi Deepti

Thanks for tagging the learning technologist.

best regards

Guna

1 Like

I need to clarify. The fact that it was hard coded doesn’t make it a bug/wrong. This is for illustration to show how tool calling works. At this point in the notebook, that was the only function/tool. And you actually need to feed the result of the function call back to the model. I think the confusing part is the fact that the tool_call and args were defined and they were not used.

The expectation is to look up the tool name and then get the function and run it to get the result that can then be fed back to the model.

Before the code block, this was the markdown:

Notice that in the response you can see tool_calls under message. This response from the LLM is saying that the LLM now wants to call a tool, specifically, get_current_time. You can add some logic to handle this situation. Then pass that back to the model and get the final response.
Run the cell below to run the function locally and return it to the LLM and receive the final response.

This shows that it’s just to explain how tool calling works. If you have more tools (functions), then obviously there needs to be a logic to look up the tool name and then run the tool.
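For what it's worth, here is a rough sketch of that feed-back step, assuming an OpenAI-style chat interface like the one aisuite exposes (the client, messages, and model id are placeholders, not the notebook's actual cell):

```python
import json

tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
tool_result = get_current_time(**args)  # the only tool at this point in the notebook

# Append the assistant's tool-call message and the tool's result,
# then ask the model again for the final answer.
messages.append(response.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": str(tool_result),
})

final_response = client.chat.completions.create(
    model="openai:gpt-4o",  # placeholder model id
    messages=messages,
)
print(final_response.choices[0].message.content)
```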

2 Likes

Hi @lukmanaj

I am worried that the hard-coding may confuse or mislead the learner.

I would rather not have it there, especially since this kind of hard-coding would almost never be a valid piece of code in practice.

Hence, your team may want to take another look, of course only if you choose to.

Thanks again for all the responses.

regards

Guna

1 Like

@Gunaranjan_Pemmaraju

Please see the image below and read the two paragraphs I have pointed to under the header "Manually defining tools".

It explains clearly how LLMs are given tools to execute a prompt and then return a response. So the place where you say it is hard-coded is basically how the LLM uses a tool to get a response, not a bug.

Also read the first sentence: aisuite turns the function automatically into a tool for the LLM, and how it does that is the tools-and-response interaction, where the code is written so the LLM takes a prompt and the tool call builds the response arguments using the model, get_current_time, and any other parameters.

Basically, the image shows part of the metadata that is shared when agentic AI tools interact.
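For reference, a rough sketch of what that automatic conversion looks like in use, assuming aisuite's OpenAI-style interface (the model id, prompt, and max_turns value are my own placeholders, recalled from the aisuite docs rather than copied from the lab):

```python
import aisuite as ai

client = ai.Client()
messages = [{"role": "user", "content": "What time is it right now?"}]

# aisuite inspects get_current_time (its signature and docstring), turns it into a
# tool spec for the LLM, and can run the tool-call loop on our behalf.
response = client.chat.completions.create(
    model="openai:gpt-4o",       # placeholder model id
    messages=messages,
    tools=[get_current_time],    # pass the Python callable directly
    max_turns=2,                 # let aisuite execute the tool and return the final answer
)
print(response.choices[0].message.content)
```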

Regards

DP

1 Like

Hi Deepti

Thanks

I am still not convinced that this hard-coding is the right way, especially in a lab.

And, in my opinion, it may confuse the student.

Having said that, if you and the rest of the deeplearning.ai team feel this is the right way to go, I am fine with it, so we can stop this thread.

Let me also thank you again for all the messages. Really appreciate your efforts.

regards

Guna

I am not defending hard-coding as the only right way. With the image, I was explaining that what is shown here is the interaction between the LLM and its tools, and how tools are used by an LLM to get a response: the LLM reads a prompt, then the tool call generates the response based on the argument parameters provided. The developers are basically trying to explain how tools use the function call to shape the response the way you want it.

And Guna, the developers or the team will surely look at this thread, don't worry. If I am incorrect, I am happy to learn something new.

We are only discussing so that we can understand each other's point of view.

Regards

DP

1 Like

Thank you everybody for the questions and for pointing out this specific code in this lab. Indeed, we're checking with the engineering team and we'll add some fixes; the updates will be reflected in your notebooks soon!

1 Like

Thanks @lesly.zerna

I am sure the fixes you add will help students even more.

By the way, I have completed the course, and found it absolutely fantastic.

Regards

Guna

3 Likes