Forward Prop in a single layer (confused)

In the video "Forward prop in a single layer" (from the section "Neural network implementation in Python"), around 3:39, I do not understand why:

w1_1 = np.array([1, 2]): what do the 1 and 2 represent? Are they arbitrary? Shouldn't it be a scalar, since it gets its input from the first layer (x)?

b1_1 = np.array([-1]): why -1?

w1_2 = np.array([-3, 4]): arbitrary? Please explain.
b1_2 = np.array([1]): why 1 and not -1?


w2_1 = np.array([-7, 8, 9]): yes, it has 3 elements, but are those numbers random? Arbitrary?

The numbers given in the video are arbitrary examples. They were chosen at random, just to demonstrate how the math works.

For a real model, these numbers (weights and biases) are not arbitrary. They are adjusted during training, with the goal of minimizing the loss/cost function.
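To make "adjusted during training" concrete, here is a toy illustration of gradient descent on a single linear neuron with squared-error loss. This is not the course's code; the data and learning rate are made up for the demo. The point is that w and b start at arbitrary values and are nudged, step by step, toward values that minimize the loss:

```python
import numpy as np

# Made-up training data: targets follow y = 2*x, so training
# should drive w toward 2 and b toward 0.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

w, b = 0.0, 0.0      # arbitrary starting values
alpha = 0.05         # learning rate (also arbitrary)

for _ in range(1000):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to w and b
    w -= alpha * np.mean(err * x)
    b -= alpha * np.mean(err)

# After training, w is close to 2 and b is close to 0:
# the values were learned, not chosen by hand.
```

The same idea scales up to the weight vectors and biases in the video; the course just computes the gradients for many parameters at once.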

The weights (w1_1, w1_2, etc.) are normally not scalars: each neuron has a weight vector with one element per input it receives, and when you vectorize, the weight vectors of a whole layer are stacked into a matrix.

Every layer outputs a vector of activations. Each neuron in the next layer takes the dot product of its weight vector with the previous layer's activation vector, adds its bias, and applies the activation function. That is why w1_1 has 2 elements (x has 2 features) while w2_1 has 3 elements (layer 1 has 3 neurons, so it outputs 3 activations).
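Here is a minimal sketch of that forward pass, using the numbers quoted in the question. The input x, the third layer-1 unit (w1_3, b1_3), the layer-2 bias b2_1, and the sigmoid activation are my own placeholder assumptions, since the question does not quote them:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Weights/biases quoted in the question (layer 1, units 1 and 2)
w1_1 = np.array([1, 2]);  b1_1 = np.array([-1])
w1_2 = np.array([-3, 4]); b1_2 = np.array([1])
# Hypothetical third unit, so layer 1 outputs 3 activations
# to match the 3-element w2_1 below
w1_3 = np.array([5, -6]); b1_3 = np.array([2])

x = np.array([0.5, -0.3])  # arbitrary example input, 2 features

# Each neuron: dot product of its 2-element weight vector with
# the 2-element input, plus bias, through the activation function
a1_1 = sigmoid(np.dot(w1_1, x) + b1_1)
a1_2 = sigmoid(np.dot(w1_2, x) + b1_2)
a1_3 = sigmoid(np.dot(w1_3, x) + b1_3)
a1 = np.array([a1_1[0], a1_2[0], a1_3[0]])  # layer 1 output: 3 activations

# Layer 2's neuron needs a 3-element weight vector because it
# receives the 3 activations from layer 1
w2_1 = np.array([-7, 8, 9]); b2_1 = np.array([3])  # b2_1 made up
a2_1 = sigmoid(np.dot(w2_1, a1) + b2_1)
```

Notice the shapes: each layer-1 weight vector matches the 2 input features, while w2_1 matches the 3 layer-1 activations.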

Your explanation is crystal clear. No more questions on the matter.