Question about multi-task finetuning of LLMs and inference time

In one of my tests, I was asked to talk about inference time and multi-task finetuning. I think it may lead to slower inference, because some finetuning techniques, such as adding new layers, increase the size of the model, which in turn increases inference time. Is my reasoning correct? There is a question about this in the test, and if my thinking is right, that question needs to be corrected.
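A rough back-of-the-envelope sketch of that reasoning, using hypothetical sizes (the hidden dimension and adapter bottleneck below are assumptions, not values from any specific model): extra layers added for finetuning mean extra floating-point operations per token at inference time.

```python
# Toy illustration (hypothetical numbers): adding adapter-style layers
# during finetuning increases per-token compute, so inference gets
# slower, not faster.

def linear_flops(d_in, d_out):
    # A matrix-vector product costs roughly 2 * d_in * d_out FLOPs.
    return 2 * d_in * d_out

d_model = 768             # hidden size of the base model (assumed)
adapter_bottleneck = 64   # adapter down-projection size (assumed)

base = linear_flops(d_model, d_model)                    # original layer
adapter = (linear_flops(d_model, adapter_bottleneck)     # down-project
           + linear_flops(adapter_bottleneck, d_model))  # up-project

overhead = adapter / base
print(f"base layer FLOPs:    {base}")
print(f"adapter extra FLOPs: {adapter}")
print(f"relative overhead:   {overhead:.1%}")  # extra work per token
```

With these assumed sizes the adapter adds roughly a sixth more compute to that layer, so every forward pass does strictly more work than before finetuning.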

It would be better if you posted your question in the forum area for that course.
The course mentors don’t really monitor the “AI Questions” forum very much.

The course forums are in the “Course Q&A” area.