What is the difference between variables and parameters?

Hi! I noticed that you use these terms separately. After a bit of googling, I understood that a variable is something that is passed to a function, while a parameter is closer to the term coefficient, which only helps to compute the function?

So, in the first example the variable is “x”.

And in the second example the variables for J are w and b, aren’t they? Even though at the same time they were only parameters for f(x)?

I am a Python developer and don’t quite understand your math syntax :smiley:

Are f(x) and y from the second screen something like global functions in Python with w and b bound to them as context? But at the same time we pass these w and b manually to J?

Hello @someone555777!

For example, in f(x), x is a variable (it is the input to the function f, also called an argument). But in f(x) = wx + b, x represents the input data (a single value), and f(x) represents the output (the predicted value), also denoted by \hat{y}. In this equation, w and b are parameters (in math you could call them the coefficient and the intercept). Furthermore, J_{(w,b)} is the cost function evaluated at the given values of w and b. You may call them variables or any other name, but they are not input arguments. One point to note here is that it is J_{(w,b)}, NOT J(w,b). The second form can mislead someone into thinking that w and b are input arguments to a function J; the correct notation is J_{(w,b)} (subscript), and it means the value of the cost function at the given values of w and b.

So, in Figure 1, x is the input data, and w and b are parameters. And in Figure 2, J_{(w,b)} is the cost function evaluated at the given values of w and b.
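Since you mentioned you are a Python developer, here is a minimal sketch of the same idea in Python (the data, the variable names, and the use of the mean-squared-error cost are my own illustrative assumptions, not code from the course): x is the argument of f, while w and b live outside f as parameters, and J is evaluated at a given w and b.

```python
import numpy as np

# Parameters: fixed while evaluating f, but adjusted during training.
w = 2.0
b = 1.0

def f(x):
    """Model f(x) = w*x + b: x is the variable (the input argument);
    w and b are parameters taken from the enclosing scope."""
    return w * x + b

def J(w, b, x_train, y_train):
    """Mean squared error cost, evaluated at the given w and b.
    Here w and b *are* passed in, because training has to try
    many different values of them."""
    predictions = w * x_train + b
    return np.mean((predictions - y_train) ** 2) / 2

x_train = np.array([1.0, 2.0, 3.0])  # hypothetical training data
y_train = np.array([3.0, 5.0, 7.0])  # targets generated from y = 2x + 1

print(f(4.0))                         # 9.0, a prediction for a new x
print(J(2.0, 1.0, x_train, y_train))  # 0.0, the cost at w=2, b=1
```

So both readings are right in a sense: inside f, w and b behave like values bound from the surrounding context, while J treats them as knobs that we evaluate the cost at.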

Is it clear now?

Best,
Saif.


In addition to @saifkhanengr’s great reply:

One more point, @someone555777: you will encounter many times in your future ML journey that a model is fitted (or a function parametrised): that means that the parameters of the model are determined via an optimization process (e.g. with gradient descent). Sometimes this process is also called "calibration".

Anyhow: fitting a model is something fundamental that you will do very often in machine learning and in data science.
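To make this concrete, here is a minimal gradient-descent sketch that fits w and b on toy data (the learning rate, step count, and data are my own illustrative choices; this is one simple way to fit a model, not the only one):

```python
import numpy as np

def fit(x_train, y_train, lr=0.1, steps=2000):
    """Fit w and b by gradient descent on the mean squared error cost."""
    w, b = 0.0, 0.0
    n = len(x_train)
    for _ in range(steps):
        error = (w * x_train + b) - y_train      # f(x) - y for each example
        w -= lr * np.dot(error, x_train) / n     # dJ/dw
        b -= lr * np.sum(error) / n              # dJ/db
    return w, b

x_train = np.array([1.0, 2.0, 3.0])
y_train = np.array([3.0, 5.0, 7.0])   # generated from y = 2x + 1

w, b = fit(x_train, y_train)
print(w, b)  # converges close to w=2.0, b=1.0
```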

Side note: in deep neural networks with modern transformer architectures, we can have billions of parameters (175 billion in GPT-3) or even around a trillion (as in current state-of-the-art models such as GPT-4).

Best regards
Christian
