Hi @SAM1988, as far as I know, the parameters are initialized randomly, and we then update them gradually so that the loss function gets as close to its minimum value as possible.
Hello Sam @SAM1988, they didn't explain that, but they could have run gradient descent behind the scenes and then simply shown weights that are partway to the optimum.
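To make the idea concrete, here is a tiny sketch of gradient descent (not the course's actual code, just a toy example with made-up data): parameters start at arbitrary values and each step nudges them downhill on the mean-squared-error cost, so any intermediate iteration gives "midway" weights like the ones discussed above.

```python
import numpy as np

# Toy dataset (hypothetical): y = 2*x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

w, b = 0.0, 0.0   # arbitrary initialization
alpha = 0.05      # learning rate

for _ in range(5000):
    err = (w * x + b) - y           # prediction error on each example
    w -= alpha * (err * x).mean()   # dJ/dw for mean-squared-error cost
    b -= alpha * err.mean()         # dJ/db

print(round(w, 2), round(b, 2))     # converges near w = 2.0, b = 1.0
```

Stopping the loop early (fewer iterations) is exactly how you would get weights that are partway between the random start and the optimum.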
Yes, every week starting from Course 1 Week 2 has a practice lab at the end of the week. Course 1 Week 1 is a warm-up.