Machine Learning of Machine Learning?

In the quote below, the professor basically says it’s hard to learn how to systematically improve machine learning models. So couldn’t we use machines to “learn to learn”? Could we build a training set of “improvements made to machine learning models” along with how those changes reduced variance and bias, and then build a model that predicts how much a particular change will improve a model? This seemed like a very obvious idea since he was talking about learning to learn, but he didn’t address it, so I thought I would ask whether that’s an avenue anyone has explored.

One of my PhD students from Stanford, many years after he’d already graduated from Stanford, once said to me that while he was studying at Stanford, he learned about bias and variance and felt like he got it, he understood it. But that subsequently, after many years of work experience in a few different companies, he realized that bias and variance is one of those concepts that takes a short time to learn, but takes a lifetime to master. Those were his exact words. Bias and variance is one of those very powerful ideas. When I’m training learning algorithms, I almost always try to figure out if it is high bias or high variance. But the way you go about addressing that systematically is something that you will keep on getting better at through repeated practice. But you’ll find that understanding these ideas will help you be much more effective at how you decide what to try next when developing a learning algorithm.

source: Advanced Learning Algorithms > Week 3 > Learning curves
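
For context, this is roughly what I mean by the kind of diagnosis the lecture describes: compare training error and cross-validation error as the training set grows. Below is a minimal sketch, assuming scikit-learn and a synthetic dataset made up purely for illustration (none of this is from the course itself):

```python
# Minimal sketch: diagnose high bias vs. high variance from a learning curve.
# Assumes scikit-learn and synthetic data; purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.3 * rng.standard_normal(200)  # noisy nonlinear target

# A linear model on a nonlinear target is a likely underfitting (high bias) candidate.
model = make_pipeline(PolynomialFeatures(degree=1), LinearRegression())

sizes, train_scores, val_scores = learning_curve(
    model, X, y, train_sizes=np.linspace(0.2, 1.0, 5),
    cv=5, scoring="neg_mean_squared_error",
)
train_err = -train_scores.mean(axis=1)
val_err = -val_scores.mean(axis=1)

for m, tr, va in zip(sizes, train_err, val_err):
    print(f"m={m:3d}  J_train={tr:.3f}  J_cv={va:.3f}")

# Rough reading of the curves:
#   both errors high and close together  -> high bias (underfitting)
#   J_train low but J_cv much higher     -> high variance (overfitting)
```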

Hi,

These are active research topics, but here are a couple of directions worth looking into:

It sounds like you would be interested in Neural Architecture Search: see “What is neural architecture search?” – O’Reilly.

Or, more generally, Meta-Learning: see “Stanford CS330: Deep Multi-task and Meta Learning | 2020 | Lecture 1 - Intro to Multi-Task Learning” – YouTube.
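
As a very small taste of that direction, here is a toy random search over model configurations that keeps whichever change scores best under cross-validation. It’s only a sketch (scikit-learn, a synthetic dataset, and a made-up search space are my own assumptions here); real NAS and meta-learning systems learn from past searches rather than sampling blindly:

```python
# Toy version of "machine learning improving machine learning":
# randomly sample model configurations and keep the one with the best
# cross-validation score. Purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

best_score, best_cfg = -np.inf, None
for _ in range(10):
    cfg = {
        # sample a small architecture and a regularization strength
        "hidden_layer_sizes": tuple(int(h) for h in
                                    rng.choice([8, 16, 32, 64], size=rng.integers(1, 3))),
        "alpha": float(10 ** rng.uniform(-5, -1)),  # L2 penalty
    }
    model = MLPClassifier(max_iter=500, random_state=0, **cfg)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_cfg = score, cfg

print("best configuration:", best_cfg, "cv accuracy:", round(best_score, 3))
```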
