Model selection with train/cross-val/test

When speaking about model selection with train / cross-validation / test sets, Andrew says the following in this slide: "To help you decide how many layers the neural network should have and how many hidden units per layer you should have, you can then train all three of these models and end up with parameters w1, b1 for the first model, w2, b2 for the second model, and w3, b3 for the third model. You can then evaluate each neural network's performance by computing J_cv on your cross-validation set."

My question is: when he says "end up with parameters w1, b1 for the first model, w2, b2 for the second model", am I to use regularization when calculating w1, b1 (and similarly w2, b2) from the training set? Also, does this mean that if I have to find w1 through w10, I'll have to train on the training set 10 times to get the parameters for 10 different models?

With w and b he refers to the matrices of weights and biases for each model, irrespective of whether you use regularization or not. If you want to build and compare 10 different models, you will have to train each one to get its own weights and biases.
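
To make that loop concrete, here is a minimal sketch, assuming TensorFlow/Keras, synthetic toy data, and illustrative choices of layer sizes and L2 strength (none of these come from the lecture). Each candidate architecture is trained once on the training set, and its parameters w, b are simply whatever that training run produces; J_cv is then computed on the cross-validation set, by convention without the regularization term.

```python
# Sketch of model selection over several candidate architectures.
# Assumed: TensorFlow/Keras, toy synthetic data, illustrative layer sizes and lambda.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2)).astype("float32")
y = (X[:, 0] * X[:, 1] > 0).astype("float32")        # toy binary labels

# 60/20/20 split into train / cross-validation / test
X_train, y_train = X[:360], y[:360]
X_cv,    y_cv    = X[360:480], y[360:480]
X_test,  y_test  = X[480:],    y[480:]

candidate_hidden_layers = [[8], [16, 8], [32, 16, 8]]  # three candidate models
lam = 0.01                                             # L2 strength (assumed value)

results = []
for hidden_units in candidate_hidden_layers:
    # Build one candidate model; L2 regularization is applied during training only.
    layers = [tf.keras.layers.Dense(u, activation="relu",
                                    kernel_regularizer=tf.keras.regularizers.l2(lam))
              for u in hidden_units]
    layers.append(tf.keras.layers.Dense(1, activation="sigmoid"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss=tf.keras.losses.BinaryCrossentropy())

    # One training run per candidate: this is where its w, b are fit.
    model.fit(X_train, y_train, epochs=50, verbose=0)

    # J_cv: plain cross-entropy on the CV set, without the regularization term.
    bce = tf.keras.losses.BinaryCrossentropy()
    j_cv = float(bce(y_cv, model.predict(X_cv, verbose=0).ravel()))
    results.append((hidden_units, j_cv))
    print(f"hidden layers {hidden_units}: J_cv = {j_cv:.4f}")

best_architecture, best_j_cv = min(results, key=lambda r: r[1])
print("selected architecture:", best_architecture)
# Only the selected model is then evaluated once on X_test / y_test.
```

So ten candidate models would mean ten separate training runs, one per model, each producing its own w and b; the cross-validation set is only used afterwards to compare them.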