A Thought on Why Some Parts of the Course Might Feel Confusing (Notation, Shifting Meanings, and Unclear Variables)

Hey everyone,

As I’ve been working through the course — especially the early lessons on linear regression — I keep running into something that feels off, and I wanted to bring it up in case anyone else is thinking the same thing.

The course teaches both the ‘machine learning logic’ and the ‘graphs/geometry’ used to visualize it, which totally makes sense, but sometimes it feels like those two worlds blend together without explanation. More specifically:

:repeat_button: 1. The same symbols are used in different ways… without warning.

We see:

  • y as the true label
  • ŷ (y-hat) as the prediction
  • f(x) as the function the model uses to make predictions

But then:

  • y is sometimes used to label the entire line in the graph
  • ŷ kind of disappears — even though it’s technically what the model actually outputs
  • And f(x) becomes the visual curve, the model process, or sometimes just a placeholder for ŷ

So depending on the moment, the same symbol means a value, a shape, or a process — and the course doesn’t always make it clear when that switch happens.
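To keep myself sane, here’s a minimal sketch of how I’m currently separating the three, assuming the course’s single-feature linear model f(x) = wx + b (the variable names and numbers are mine, not the course’s):

```python
# Assumed model from the course: f_{w,b}(x) = w*x + b (single feature)
w, b = 200.0, 100.0      # the parameters the model learns

def f(x):
    """f as a *process*: a function mapping a feature x to a prediction."""
    return w * x + b

x_i = 1.2                # the feature of one training example
y_i = 350.0              # y: the true label, which comes from the data
y_hat_i = f(x_i)         # y-hat: a *value*, the model's output for this x

print(y_i, y_hat_i)      # 350.0 340.0 -> the prediction is off by 10 for this example
```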

:puzzle_piece: 2. Variables just… appear.

Another thing I’ve noticed is that new variables are sometimes introduced without being clearly defined up front. For example:

  • w is said to be a “parameter” — but what is it really? A slope? A control dial?
  • b is a “bias” — but that word has different meanings depending on your background (math, stats, AI, etc.)
  • ŷ is introduced in passing, then quietly used as if everyone knows exactly how it fits

It’s like we’re reading a story where characters show up halfway through a scene without an introduction — and we’re supposed to just know who they are and what they do.
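For what it’s worth, in the single-feature case I’ve started reading w as just the slope and b as just the intercept, with “bias” in the ML sense of a constant offset. A tiny sketch, with made-up numbers of my own:

```python
# Assumed single-feature model: f(x) = w*x + b
def f(x, w, b):
    return w * x + b

x = 1.0
print(f(x, w=2.0, b=0.0))   # 2.0 -> w is the slope: it scales how fast predictions grow with x
print(f(x, w=4.0, b=0.0))   # 4.0    (turning that "control dial" up steepens the line)
print(f(x, w=2.0, b=3.0))   # 5.0 -> b is the intercept: it shifts every prediction by a constant offset
```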

:globe_showing_europe_africa: Why this matters

If you’re someone who likes clarity — or comes from a physics, math, or coding background — this can throw you off. It’s not that the ideas are too hard. It’s that the rules of the system keep shifting, and new parts are added without enough structure.

That can make you second-guess yourself, or worse, internalize sloppy definitions without realizing it — which is dangerous when these concepts get more complex down the line.

:light_bulb: What might help

I think it would really help if the course said something like:

“Now we’re shifting from a model-building view (where x, y, ŷ, w, b are part of a computation)
to a graphing view (where those same symbols represent shapes or labels).
And here’s what each variable means — not just technically, but intuitively.”
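Something like the sketch below is what I have in mind: the same f, w, and b, used first in the model-building view to compute one number, then in the graphing view to draw the line. (This is my own illustration, not code from the course.)

```python
import numpy as np
import matplotlib.pyplot as plt

w, b = 200.0, 100.0

def f(x):
    return w * x + b

# Model-building view: x, y, y-hat, w, b are pieces of a computation
x_i, y_i = 1.2, 350.0
y_hat_i = f(x_i)                     # one predicted value for one example

# Graphing view: the *same* f, evaluated over many x values, becomes the line
xs = np.linspace(0.0, 2.0, 100)
plt.plot(xs, f(xs), label="f(x) = wx + b (the model, drawn as a line)")
plt.scatter([x_i], [y_i], label="(x, y): a data point with its true label")
plt.scatter([x_i], [y_hat_i], label="(x, y-hat): the prediction at that x")
plt.legend()
plt.show()
```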

:thinking: Has anyone else felt this?

Have you noticed symbols being used without clear definition? Or terms getting reused without warning?

Would love to hear how you’re navigating it — or what helped make it click.

— Daniel


Hi @DanielSlack,

One example of notation abuse that I’m aware of is in Week 3, where the course uses the symbol ŷ in a slightly misleading way. Specifically, ŷ is shown as outputting 0 or 1, whereas in much of the literature ŷ represents the probability that y = 1, not the final classification itself. Using w for weights and b for bias (model parameters) is pretty standard.
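To make that distinction concrete, here’s a rough sketch of the two things ŷ can refer to in that setting (toy numbers and names of my own, not the course’s code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed logistic-regression setup with a single feature: z = w*x + b
w, b = 1.5, -2.0
x = 2.0

prob = sigmoid(w * x + b)        # what much of the literature calls y-hat:
                                 # the model's estimate of P(y = 1 | x)
label = 1 if prob >= 0.5 else 0  # the thresholded classification (0 or 1),
                                 # which the course also writes as y-hat

print(prob, label)               # ~0.73 and 1
```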

I do not disagree with you.

There’s lots of notation abuse. Machine Learning isn’t an old enough practice to have rigorous and universal nomenclature.

You’ll learn more if you don’t force machine learning to fit within your expectations.

Also note that you’re enrolled in a beginner-level course; it makes no big assumptions about the learner’s academic credentials or experience.

Andrew Ng tends to take this into account by lecturing in a very intuitive manner, without any proofs or derivations.

These are not rigorous math courses.
