Notation for training examples and features

Hi there!
As I remember, back in the deep learning course Andrew told us it's better to use i for the features and j for the training examples, but in this course he changed the notation and swapped them, using i for the training examples and j for the features.
Is there any reason for this?
thanks for your lovely support :slight_smile:

Hi Arash!
I believe the reason behind this choice is to make the forward and backward propagation implementation in neural networks easier to understand by using a non-fully-vectorized implementation (looping over training examples). On the other hand, when we want to optimize execution time with a fully vectorized implementation, the convention adopted in the DLS is easier to work with.

Hi Arash,

I believe there is no rule for which variable is used to iterate over samples versus features. The best approach is always to go back to the for-statement and check the meaning of the iterating variable, instead of presuming or assuming what, for example, i or j means.

Although this also isn't a rule, I sometimes see people use i in the outermost loop, then j in the next inner loop, and then k.


PS: Rather than a rule of thumb for i and j, it is more common to use m for the number of samples and n for the number of features, so that for i in range(m) iterates over all the samples.
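To make that convention concrete, here is a minimal sketch with made-up data, where m counts samples and n counts features, and i/j index them in nested loops:

```python
# Hypothetical 2x3 data set: m = 2 training examples, n = 3 features.
X = [
    [1.0, 2.0, 3.0],  # example 0
    [4.0, 5.0, 6.0],  # example 1
]
m = len(X)     # number of samples
n = len(X[0])  # number of features

# By convention: i iterates over samples, j over features.
total = 0.0
for i in range(m):
    for j in range(n):
        total += X[i][j]

print(total)  # sum of all entries: 21.0
```

The data values here are just placeholders; the point is only the pairing of i with range(m) and j with range(n).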


Hi Arash,

I don’t recall seeing this in the old course. Are you talking about this one: Machine Learning? Looking at the lecture notes, I see the index j corresponding to features. This comes from the typical notation in linear algebra, where a matrix A has entries A_{i,j} with the first index corresponding to rows and the second to columns, and it is usually pretty consistent throughout the different books and courses on applied linear algebra or machine learning.
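That row/column convention maps directly onto how a design matrix is usually indexed in code. A small sketch with hypothetical numbers, where rows are training examples (index i) and columns are features (index j):

```python
import numpy as np

# Hypothetical 2x3 design matrix: rows = training examples (i),
# columns = features (j), matching the A_{i,j} convention.
A = np.array([[10, 20, 30],
              [40, 50, 60]])

i, j = 1, 2        # second training example, third feature
print(A[i, j])     # single entry A_{1,2}: 60
print(A[i, :])     # all features of example i: [40 50 60]
print(A[:, j])     # feature j across all examples: [30 60]
```

So A[i, :] is one training example and A[:, j] is one feature column, which is why the second index naturally ends up indexing features.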


Hi again
By "course" I meant the Neural Networks and Deep Learning course in the Deep Learning Specialization, not the old version of the Machine Learning course.
Thanks for your response.
