Hi Raymond,

In your reply above, you say: "if a[2] has 15 units, it means you have 15 units in layer 2, which means you have w2_1, w2_2, w2_3…**w2_25**, and every one of them is a vector of 25 weights" (*I have used the notation w2_1 to represent w with superscript [2] and subscript 1*).

Should this not run to **w2_15** rather than w2_25? Please could you explain if I have misunderstood?
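To make the indexing concrete, here is a small NumPy sketch of my understanding (my own illustration, not from the course; I'm assuming layer 1 has 25 units and layer 2 has 15 units, with each unit's weight vector stored as a column of W2):

```python
import numpy as np

# Hypothetical setup: layer 1 has 25 units, layer 2 has 15 units.
# Then layer 2 should have 15 weight vectors, w2_1 ... w2_15,
# each of length 25 (one weight per layer-1 output).
W2 = np.random.randn(25, 15)  # one column per layer-2 unit

w2_1 = W2[:, 0]    # weight vector of the first unit in layer 2
w2_15 = W2[:, 14]  # weight vector of the last (15th) unit

print(w2_1.shape)   # (25,) - each unit's weight vector has 25 entries
print(W2.shape[1])  # 15    - and there are 15 such vectors, not 25
```

If this sketch is right, the list of weight vectors ends at w2_15, which is the source of my question.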

Prof Ng’s discussion in Neural Network Implementation in Python (MLS Course 2, Week 1) is very helpful, but because it uses a very simple example in which X contains only a single input example with n=2 features, I am not certain how this generalises.

Can generalisation be summarised as follows:

if **X** is a 2D array/matrix of *m* input examples each of *n* features,

i. for each neuron/unit in the first layer, there will be a weight vector/1D array **w** of *n* values (one per input feature), plus a single bias value; the layer's biases together form a 1D vector **b**,

ii. for the layer as a whole, these per-neuron **w** vectors are stacked/concatenated into a matrix **W**, which has dimensions *n* x the number of neurons/units,

iii. the output of each neuron in a layer will be a single value between 0 and 1 (after the sigmoid activation),

iv. for each layer *l*, the output (**a[l]**) will be a vector/1D array with dimensions 1 x the number of neurons in the layer,

v. for layer 2 onwards, each neuron in the layer will receive the previous layer's activation vector, of dimensions 1 x (no. of neurons in the previous layer). So in these layers, each neuron will also have a weight vector/1D array **w**, whose length equals the number of neurons in the previous layer, plus a single bias value (again, the layer's biases form the vector **b**).
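Points i–v above can be checked numerically. Here is a sketch of what I mean, with hypothetical sizes (m = 4 examples, n = 2 features, a first layer of 3 units, and a second layer of 1 unit):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense(a_in, W, b):
    # W has shape (inputs, units); output has one activation per unit
    return sigmoid(a_in @ W + b)

rng = np.random.default_rng(0)
m, n = 4, 2            # m input examples, each with n features
units1, units2 = 3, 1  # layer sizes (my assumption)

X = rng.normal(size=(m, n))
W1, b1 = rng.normal(size=(n, units1)), np.zeros(units1)       # point ii
W2, b2 = rng.normal(size=(units1, units2)), np.zeros(units2)  # point v

a1 = dense(X, W1, b1)   # shape (m, units1) - point iv, per example
a2 = dense(a1, W2, b2)  # shape (m, units2), values in (0, 1) - point iii
print(a1.shape, a2.shape)
```

Is this the right picture of how the simple example generalises?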

P.S. I’d be grateful if you could direct me to where to find out how to write with superscript/subscript text and scientific notation in this forum.

Jem