Cost function convexity: why gradient descent?

Can you explain why the second one has a global minimum, and why we don't just take the derivative instead of using gradient descent? Also, what is the first one, and what would an example of a function (in the first one) be? Both functions have w and b and a J(w, b). I hope you understand my confusion :sweat_smile:


The first example is for a cost function that is not convex.

The second example uses a convex cost function.

Whenever possible, we prefer to use convex cost functions.
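For a concrete example of the convex case (assuming the second plot shows the squared-error cost for linear regression from the lectures):

$$J(w,b) = \frac{1}{2m}\sum_{i=1}^{m}\left(f_{w,b}(x^{(i)}) - y^{(i)}\right)^2, \qquad f_{w,b}(x) = wx + b$$

This cost is bowl-shaped in w and b, so any local minimum is also the global minimum.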

Hello @Hamz,

Welcome to the community!

We learn gradient descent mainly for complex problems like the first one! Using gradient descent on a simple problem like the second one lets us easily validate the results against existing methods such as the normal equation, which is briefly mentioned in Course 1 Week 2.

So, we want to practice gradient descent.
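Here is a minimal sketch of that idea (not course code; the data, learning rate, and iteration count are made up for illustration). It fits a line with gradient descent and then checks the answer against the normal equation:

```python
import numpy as np

# Toy data: y = 3x + 2 plus noise (values chosen arbitrarily)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=100)
m = len(x)

# Gradient descent on the squared-error cost J(w, b)
w, b, alpha = 0.0, 0.0, 0.01
for _ in range(10_000):
    err = w * x + b - y
    w -= alpha * (err @ x) / m   # dJ/dw
    b -= alpha * err.mean()      # dJ/db

# Normal equation: solve (X^T X) theta = X^T y in closed form
X = np.column_stack([x, np.ones(m)])
w_ne, b_ne = np.linalg.solve(X.T @ X, X.T @ y)

print(f"gradient descent: w={w:.4f}, b={b:.4f}")
print(f"normal equation:  w={w_ne:.4f}, b={b_ne:.4f}")
```

Both methods should agree to several decimal places, which is exactly the kind of validation mentioned above.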

Cheers,
Raymond

Hello @Hamz,

I don't know the exact form of the function used to generate that particular graph.

One way to get a non-linear relationship between the cost and a pair of w and b is to construct a multi-layer neural network with non-linear activations between the layers. Then pick a w and a b from a neuron in the first layer, plot the cost against the picked w and b, and you should get a non-linear cost surface with multiple minima, as in the sketch below.
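If you want a preview, here is a hedged sketch of that construction. Everything in it is made up for illustration: the one-neuron architecture, the toy targets, and the frozen output weight. It assumes NumPy and Matplotlib are available.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy inputs and targets (arbitrary choice)
x = np.linspace(-2, 2, 50)
y = np.sin(2 * x)

def cost(w1, b1):
    # One first-layer neuron with a tanh activation; the output
    # layer weight is frozen at 1.0 so only (w1, b1) vary.
    a1 = np.tanh(w1 * x + b1)
    return np.mean((a1 - y) ** 2)

# Evaluate the cost on a grid of (w, b) values
W, B = np.meshgrid(np.linspace(-6, 6, 200), np.linspace(-6, 6, 200))
J = np.vectorize(cost)(W, B)

plt.contourf(W, B, J, levels=30)
plt.xlabel("w (first-layer weight)")
plt.ylabel("b (first-layer bias)")
plt.title("Cost surface of a tiny non-linear network")
plt.colorbar(label="J(w, b)")
plt.show()
```

Because of the tanh non-linearity, the resulting surface is no longer a single bowl; depending on the data it can have flat regions and more than one minimum.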

However, multi-layer neural networks aren't covered yet in Course 1 of the MLS, so please hold on to this idea for the time being and verify it once you learn about deeper neural networks.

Cheers,
Raymond

Hi, thanks a lot @rmwkwok!

You are welcome, @Hamz