C2_M2_Lab_4_transfer_learning Suspected Incorrect Statement

Hello,

In the “Transfer Learning, Strategy 1: Feature Extraction (Train Only the New Head), Modular Block” segment of the lab there is a statement:

[screenshot of the statement from the lab]

but if you check, you will see that the entire classifier block is retrained during fine-tuning, not just the last (new) layer.

Is my understanding correct, or am I missing something?

Thank you!

1 Like

Hi @DAResaid,

Thank you for flagging this! Your understanding is absolutely correct.

The terminology in the original text is misleading. The phrase “this new layer” incorrectly implies that only the replaced final layer (classifier[-1]) would train. In reality, since the code freezes only the features backbone, the entire classifier block (all of its layers) remains trainable during fine-tuning.
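To make this concrete, here is a minimal sketch. The toy model below is hypothetical and just stands in for the lab's (whose exact architecture I won't reproduce here); it mirrors the torchvision convention of a `features` backbone plus a multi-layer `classifier` head. Freezing only `features` leaves every parameter of `classifier` trainable, not just the replaced `classifier[-1]`:

```python
import torch.nn as nn

# Toy stand-in for the lab's model: a `features` backbone and a
# multi-layer `classifier` head, following the torchvision convention.
class TinyNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(8, 32),
            nn.ReLU(),
            nn.Linear(32, num_classes),
        )

model = TinyNet()

# Freeze only the backbone, as the lab's code does.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final layer for a new 10-class task.
model.classifier[-1] = nn.Linear(32, 10)

# Every parameter of the classifier block is still trainable,
# not just the replaced classifier[-1]:
trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
print(trainable)
# ['classifier.0.bias', 'classifier.0.weight', 'classifier.2.bias', 'classifier.2.weight']
```

Note that `classifier.0` (an earlier layer of the head, not the replaced one) shows up as trainable, which is exactly the behaviour you observed.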

I shall update the notebook text soon.

Your keen observations throughout have been invaluable in helping me improve the content. I truly appreciate you taking the time to point these out!

Best,
Mubsi

3 Likes

Hi @Mubsi,

I would like to mention another very minor glitch in the first lab (C2_M3_Lab_1_basic_tokenization) of the third module of the course, which is not worth creating a new topic for. There is a naming inconsistency: encoded_inputs_auto is mentioned in the markdown cell, but in the code cell it is still encoded_inputs, as it was in the previous segment of the lab (please see the attached screenshot).

Thank you!

Hi @DAResaid,

Since your original topic was about C2 M2 Lab 4, please do create a new topic for anything related to any other lab, including C2 M3 Lab 1.

This way, a particular topic stays focused on a particular issue instead of mixing several things together.

Thanks,
Mubsi

1 Like