In the ‘Establish a baseline’ lecture (part of Week 2 of Course 1), Professor Ng uses the term baseline model with two different connotations. Usually we start with a first-cut simple model, call it the baseline model, and improve upon it (for example, in forecasting we may start with a naive, random walk, or seasonal naive model as the baseline and then move on to more sophisticated models). However, if we call Human Level Performance (HLP) the baseline in the case of speech recognition, wouldn’t that be counter-intuitive? Shouldn’t the HLP in this case instead be called a benchmark, so that we start with a baseline model and try to achieve benchmark performance?
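To make the "first-cut simple model" sense of baseline concrete, here is a minimal sketch of the two forecasting baselines mentioned above, using an invented toy series and an assumed seasonal period of 4 (all values are illustrative, not from the course):

```python
# Hypothetical toy series; values and seasonal period are invented for illustration.
series = [10, 12, 14, 11, 13, 15, 12, 14, 16, 13, 15, 17]
season = 4  # assumed seasonal period

train, test = series[:8], series[8:]

# Naive baseline: repeat the last observed training value.
naive_forecast = [train[-1]] * len(test)

# Seasonal-naive baseline: repeat the value from one season earlier.
seasonal_forecast = [train[-season + i] for i in range(len(test))]

def mae(actual, predicted):
    # Mean absolute error between the two sequences.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

print("naive MAE:", mae(test, naive_forecast))
print("seasonal-naive MAE:", mae(test, seasonal_forecast))
```

Whichever of these scores better becomes the baseline performance that any more sophisticated model must beat.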
Quote from the video:
What a baseline system or a baseline level of performance does is it helps to indicate what might be possible.
The way I understand the video, when you’re asked to solve a new machine learning problem, the first thing to do is establish a baseline (for instance, from a literature review on similar problems, HLP for unstructured data, or a simple model, as you mentioned, for structured data). Then you can say you’re confident you can solve the problem with approximately 80% accuracy (baseline performance), for example, and start working to improve further. Conversely, if you find that the best performance reported in the literature is 50% (HLP or not), you avoid over-promising on what you can deliver.
It’s like doing a quick feasibility study before spending a large amount of time developing models and running benchmarks, only to realize at the end that you can’t solve the problem with the required accuracy.
Hope this helps,