In transfer learning, the professor said: "When you have a lot more data for Task A than Task B, transfer learning is effective."
I am wondering if transfer learning can still be useful when Task A and Task B both have a lot of data.
Transfer learning works well when tasks A and B are somewhat similar. When you don't have enough data for B, you can use data from A to train the lower layers of your model (for example, by using them as a feature extractor). Then you train the last few layers on the actual B data.
But if you have a lot of (enough) data for task B, you can just train the model on it directly :). Since the low-level features will then also be learned directly from the B data, that can work even better.
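To make the "feature extractor" idea concrete, here is a minimal PyTorch sketch. The small network, the layer sizes, and the 10-class head are all hypothetical stand-ins; in practice the backbone would carry real weights pre-trained on Task A (e.g. a torchvision model):

```python
import torch.nn as nn

# Hypothetical backbone standing in for a model pre-trained on Task A.
# (Illustrative only -- in practice you would load actual pre-trained weights.)
backbone = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
head = nn.Linear(64, 10)  # new final layer for Task B (10 classes assumed)

model = nn.Sequential(backbone, head)

# Freeze the lower layers so only the new head is trained on Task B data:
for p in backbone.parameters():
    p.requires_grad = False

# Only the head's weight and bias remain trainable.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

With enough Task B data, you would instead skip the freezing loop (or unfreeze later for fine-tuning) and train the whole model end to end.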
Thanks for your answer