Thank you for flagging this! Your understanding is absolutely correct.
The terminology in the original text is misleading. When the text refers to "this new layer," it incorrectly implies that only the replaced final layer (classifier[-1]) will be trained. In reality, since the code freezes only the features backbone, the entire classifier block (all of its layers) remains trainable during training.
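To make the distinction concrete, here is a minimal sketch (using a hypothetical tiny model, not the actual lab code) of the pattern in question: freezing only the `features` backbone and swapping out `classifier[-1]` leaves every layer of the classifier block trainable, not just the new one.

```python
import torch.nn as nn

# Hypothetical stand-in for a torchvision-style model with a
# `features` backbone and a multi-layer `classifier` head.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)
        )

model = TinyNet()

# Freeze only the backbone, as the lab code does.
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final classifier layer (classifier[-1]) with a new head.
model.classifier[-1] = nn.Linear(16, 2)

# Every classifier layer remains trainable, not just the replaced one.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

Printing `trainable` shows the earlier classifier layers (e.g. `classifier.0.weight`) alongside the new final layer, while all `features.*` parameters are frozen.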
I shall update the notebook text soon.
Your keen observations throughout have been invaluable in helping me improve the content. I truly appreciate you taking the time to point these out!
I would like to mention another very minor glitch in the first lab (C2_M3_Lab_1_basic_tokenization) of the third module of the course, which is not worth creating a new topic for. There is some naming inconsistency: encoded_inputs_auto is mentioned in the markdown cell, but in the code cell it is still encoded_inputs, as it was in the previous segment of the lab (please see the attached screenshot).
Since your original topic was about C2 M2 Lab 4, please do create a new topic for anything related to any other lab, including C2 M3 Lab 1.
This way, each topic stays focused on a single issue rather than mixing several together.