# What does n_a refer to? (Building your Recurrent Neural Network)

Simple question: the parameter Wax is described as: "Weight matrix multiplying the input, numpy array of shape (n_a, n_x)."

It seems that n_x is the number of training examples, i.e. sentences. But what is n_a? The number of neurons?

I know this was addressed in a previous course, but I’m not sure where.

n_a is the number of activation units (hidden units) in the RNN cell.
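A minimal NumPy sketch may make the shapes concrete. The sizes below are made-up illustrative values, not the assignment's; note that the number of examples is a separate dimension m, distinct from n_x:

```python
import numpy as np

# Illustrative sizes (assumed for this sketch, not from the assignment)
n_x = 5   # size of one input vector x<t> (e.g. one-hot vocabulary size)
n_a = 4   # number of activation (hidden) units in the RNN cell
m = 3     # number of training examples in the batch

Wax = np.random.randn(n_a, n_x)    # weight matrix multiplying the input
xt = np.random.randn(n_x, m)       # inputs at one time step, one column per example
a_prev = np.random.randn(n_a, m)   # previous hidden state, one column per example

# Wax @ xt has shape (n_a, m): one length-n_a activation vector per example
print((Wax @ xt).shape)  # → (4, 3)
```

So n_a is simply how many rows the hidden-state vector has, independent of both the input size n_x and the batch size m.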

It’s been a while since I took the earlier courses, do you happen to remember which lesson explains the concept of activation units? Like what does that mean in a practical sense?

Activation units are fundamental to any NN. It’s probably early in Course 1.

I don’t want to sound ungrateful, but this answer isn’t very helpful. Nothing called an "activation unit" is mentioned anywhere in the first course, and I’m pretty sure it isn’t in the second either. Perhaps the fourth? Googling the term only brings up content about activation functions. Does it perhaps refer to the number of classes?

I may be wrong, but I think Andrew mentions it in C5, week 1, video “Different Types of RNNs”. At approx 2:05 he says: “I was drawing a bunch of circles here to denote neurons”. He alludes to the fact that, at a single time step t, you actually have more than one activation unit (neuron), and you compute an activation value for each of them.

The activations from those units (neurons) get multiplied as Wya * a_next, so that every neuron’s activation contributes to the “probability” the softmax function computes for each word.
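Here is a hedged sketch of that forward step for a single RNN cell. All sizes are assumed example values, and the softmax helper is written inline rather than taken from the assignment code:

```python
import numpy as np

def softmax(z):
    # Column-wise softmax, shifted for numerical stability
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

n_x, n_a, n_y, m = 5, 4, 5, 3  # assumed sizes: input, hidden, output, batch
rng = np.random.default_rng(0)
Wax = rng.standard_normal((n_a, n_x))  # multiplies the input
Waa = rng.standard_normal((n_a, n_a))  # multiplies the previous hidden state
Wya = rng.standard_normal((n_y, n_a))  # maps hidden state to output scores
ba = np.zeros((n_a, 1))
by = np.zeros((n_y, 1))

xt = rng.standard_normal((n_x, m))
a_prev = rng.standard_normal((n_a, m))

a_next = np.tanh(Waa @ a_prev + Wax @ xt + ba)  # shape (n_a, m)
yt_pred = softmax(Wya @ a_next + by)            # shape (n_y, m)
print(yt_pred.sum(axis=0))  # each column sums to 1
```

Each column of yt_pred is one example’s probability distribution over the n_y output words, built from all n_a activation units via Wya @ a_next.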

I hope this helps. But please double-check me; I may be wrong, as I am only starting this course.