Hello there!
I am a beginner in Python, and I have a very basic question about the gradient descent function code. I would like to know:

Why are J_history.append(cost_function(x, y, w, b)) and p_history.append([w, b]) written inside the for loop of the gradient descent function? What would happen if they were outside the loop?
Thank you for your help in advance!

If you are talking about the optional lab for gradient descent, then:

In Iteration {i:4}, the :4 styles your output: it reserves a minimum field width of 4 characters, padding with spaces when the number is shorter. Try removing the :4 and rerunning your code; you will see the difference in the output style.
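A quick illustrative sketch of that minimum-width spec (the loop values here are made up, not the lab's):

```python
# The :4 spec right-aligns the value in a field at least 4 characters wide.
for i in [5, 42, 1000]:
    print(f"Iteration {i:4}")
# Iteration    5
# Iteration   42
# Iteration 1000
```

Notice the columns stay aligned across iterations, which is the whole point of the spec.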
Meanwhile, J_history[-1]:0.2e displays the last element of the J_history list to two decimal places in scientific (e) notation.

dj_dw: {dj_dw:0.3e} displays dj_dw to three decimal places in scientific notation.
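A small demo of the e-notation specs (the numeric values below are invented for illustration):

```python
cost = 0.0123456
grad = 654.321

# :0.2e -> mantissa with 2 decimal places, power-of-ten exponent
print(f"Cost {cost:0.2e}")    # Cost 1.23e-02

# :0.3e -> same idea with 3 decimal places
print(f"dj_dw: {grad:0.3e}")  # dj_dw: 6.543e+02
```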

{w_final:8.4f} prints w_final in a field at least 8 characters wide (again for styling, as in 1; if you understood 1, you can understand this easily) with 4 decimal places.
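To make that concrete, here is a tiny sketch with a made-up value for w_final:

```python
w_final = 3.14159265

# 8.4f -> fixed-point, 4 decimal places, padded to a total width of 8
print(f"w_final: {w_final:8.4f}")  # w_final:   3.1416
```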

J_history and p_history are appended inside the loop so that the cost (error) and the weights computed at every iteration of gradient descent are recorded. If you only store the final values of the cost and weights, you cannot visualise how the cost function changed with the weights or with the number of iterations.
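Here is a minimal sketch of the idea (not the lab's exact code; the cost function, data, and learning rate are assumed for illustration). Appending inside the loop records the whole trajectory; outside the loop you would keep only the final point:

```python
import numpy as np

def cost_function(x, y, w, b):
    # Mean squared error cost for the model f(x) = w*x + b
    return np.mean((w * x + b - y) ** 2) / 2

def gradient_descent(x, y, w, b, alpha, num_iters):
    J_history, p_history = [], []
    m = len(x)
    for i in range(num_iters):
        err = w * x + b - y
        dj_dw = np.dot(err, x) / m   # gradient w.r.t. w
        dj_db = np.sum(err) / m      # gradient w.r.t. b
        w -= alpha * dj_dw
        b -= alpha * dj_db
        # INSIDE the loop: one cost/parameter snapshot per iteration,
        # so the learning curve can be plotted afterwards.
        J_history.append(cost_function(x, y, w, b))
        p_history.append([w, b])
    return w, b, J_history, p_history

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w, b, J_hist, p_hist = gradient_descent(x, y, 0.0, 0.0, 0.1, 100)
print(len(J_hist))  # 100 -> one cost value per iteration
```

Plotting J_hist against range(100) then shows the cost falling over iterations, which is exactly what you lose if the appends are moved after the loop.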