Help with Newton's Method for two variables video

Around 1:15 in “Newton’s Method for two variables”, it says that Newton’s method, given the formula, generalises to any number n of variables, with the following dimensions:
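(The slide itself is a screenshot, so here is my transcription of the formula and its dimension labels; the notation may not match the video exactly:)

```latex
\[
\mathbf{x}_{k+1}
  = \underbrace{\mathbf{x}_k}_{n \times 1}
  - \underbrace{H^{-1}}_{m \times n}\,
    \underbrace{\nabla f(\mathbf{x}_k)}_{n \times 1}
\]
```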

But how can we do vector subtraction between an n × 1 vector and an m × 1 vector?


The best way to understand it is by coding it. Please go to Lab 3, where you will find the implementation in this line:


Then you will be able to verify the dimension analysis by printing the shape of each array or by debugging in Python.
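If it helps, here is a minimal sketch of one Newton step with the shapes printed out. This is not the exact line from the lab; the function f(x, y) = x² + 2xy + 3y² is just a made-up example:

```python
import numpy as np

# Hypothetical example function (not the one from the lab):
# f(x, y) = x^2 + 2xy + 3y^2

def gradient(v):
    """Gradient of f as an (n, 1) column vector."""
    x, y = v[0, 0], v[1, 0]
    return np.array([[2 * x + 2 * y],
                     [2 * x + 6 * y]])

def hessian(v):
    """Hessian of f: always a square (n, n) matrix."""
    return np.array([[2.0, 2.0],
                     [2.0, 6.0]])

v = np.array([[1.0], [1.0]])               # current iterate, shape (2, 1)
step = np.linalg.inv(hessian(v)) @ gradient(v)

print(v.shape, gradient(v).shape, hessian(v).shape, step.shape)
# -> (2, 1) (2, 1) (2, 2) (2, 1): the subtraction below is well defined
v_next = v - step
print(v_next)                               # [[0.], [0.]], the minimum of f
```

Since f is quadratic here, a single Newton step lands exactly on the minimum.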


OK, I will do that, thank you.


It is my pleasure, brother!


Hi @Wenxin_Liu

To be more graphic:
In the 1D case, the tangent line given by the gradient (a vector representing just the slope) is used to find the next iterate, with the (scalar) Hessian as the second derivative.
But in the 2D (or n-dimensional) case, the tangent plane (or an n-dimensional hyperplane) spanned by vectors determines the next iterate together with the Hessian matrix, so that you can get to the optimum in an efficient way.
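In formulas (an illustration of the analogy, not a slide from the video), dividing by the scalar second derivative in 1D becomes multiplying by the inverse of the Hessian matrix in n dimensions:

```latex
\[
x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}
\qquad \longrightarrow \qquad
\mathbf{x}_{k+1} = \mathbf{x}_k - H^{-1}(\mathbf{x}_k)\,\nabla f(\mathbf{x}_k)
\]
```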

Hope that helps!

Best regards
Christian

In your specific screenshot:

  • I understand that the dimensions of H^{-1} are labelled m and n.
  • You have a 2D optimization case.

My take: we are talking about the subtraction of 2D vectors, which match because the Hessian matrix is always square, and so is its inverse (if it exists). So, to stay with the convention of the screenshot: n = m. This applies not only in 2D but in general.
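You can check this directly in NumPy. The numbers below are hypothetical, just to make the dimension analysis concrete:

```python
import numpy as np

# Hypothetical 2D values, purely for the shape check:
H = np.array([[2.0, 2.0],
              [2.0, 6.0]])            # Hessian: square, so m = n = 2
grad = np.array([[4.0], [8.0]])       # gradient: (2, 1)

update = np.linalg.solve(H, grad)     # solves H u = grad; avoids forming H^{-1}
print(H.shape, grad.shape, update.shape)   # (2, 2) (2, 1) (2, 1)
```

Using `np.linalg.solve` rather than explicitly inverting H is the usual choice in practice, since it is cheaper and numerically more stable; the resulting update has the same (n, 1) shape as the iterate, so the subtraction is well defined.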

You can also find a good summary here covering both the 1D and the higher-dimensional cases, @Wenxin_Liu.

Please let us know if anything is still open.

Best regards
Christian

Thank you so much @Christian_Simonis , I appreciate your help!


Sure! My pleasure, @Wenxin_Liu.

Happy learning!

Best regards
Christian