Lab exercise

In this lab's exercise, why do we normalize the user vector again after the user features have already gone through the neural network? I thought we normalized the user features before feeding them into the neural network. Why do we need to normalize again after the network?

input_user = tf.keras.layers.Input(shape=(num_user_features,))  # shape must be a tuple
vu = user_NN(input_user)                                        # user embedding from the network
vu = tf.linalg.l2_normalize(vu, axis=1)                         # scale each embedding to unit length

Hello, @flyunicorn,

That normalization serves a different purpose. We use it to remove the magnitudes of the user embedding and item embedding produced by the neural networks, so that when the normalized embeddings are dotted together, we get a number that describes only how different their "orientations" are. Note the following formula for the dot product:

a · b = ‖a‖ ‖b‖ cos(θ)

After L2 normalization, ‖a‖ = ‖b‖ = 1, so the dot product reduces to cos(θ), which depends only on the angle between the two embeddings.

In short, we only want to know the difference in orientation; the magnitude part would distort that comparison, so we remove it.
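A small numeric sketch of this point, using NumPy instead of TensorFlow for brevity (the vectors here are made-up examples, not values from the lab): two embeddings with the same orientation but different magnitudes give an inflated raw dot product, while their normalized versions give exactly cos(θ).

```python
import numpy as np

# Hypothetical embeddings pointing in the SAME direction,
# but with different magnitudes.
vu = np.array([3.0, 4.0])   # magnitude 5
vm = np.array([0.6, 0.8])   # magnitude 1, same orientation

# Raw dot product mixes orientation AND magnitude.
print(vu @ vm)              # 5.0 -- inflated by the magnitude of vu

# L2-normalize both vectors to unit length.
vu_n = vu / np.linalg.norm(vu)
vm_n = vm / np.linalg.norm(vm)

# Now the dot product equals cos(theta) = 1.0 (identical orientation),
# regardless of the original magnitudes.
print(vu_n @ vm_n)          # 1.0
```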

Cheers,
Raymond


When you say "are dotted together", does that mean the dot product?

Yes, the dot product, and it is the same dot product you will find in exercise 1. Look for the line that uses tf.keras.layers.Dot.
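For context, here is a minimal sketch of how the two normalized embeddings feed into tf.keras.layers.Dot. The tower definitions and feature/embedding sizes below are placeholders, not the lab's actual values:

```python
import tensorflow as tf

# Placeholder sizes, not the lab's actual values.
num_user_features = 14
num_item_features = 16
embedding_dim = 32

# Stand-ins for the lab's user_NN / item_NN towers.
user_NN = tf.keras.Sequential([tf.keras.layers.Dense(embedding_dim)])
item_NN = tf.keras.Sequential([tf.keras.layers.Dense(embedding_dim)])

input_user = tf.keras.layers.Input(shape=(num_user_features,))
vu = user_NN(input_user)
vu = tf.linalg.l2_normalize(vu, axis=1)   # unit-length user embedding

input_item = tf.keras.layers.Input(shape=(num_item_features,))
vm = item_NN(input_item)
vm = tf.linalg.l2_normalize(vm, axis=1)   # unit-length item embedding

# Dot product of two unit vectors = cosine similarity, always in [-1, 1].
output = tf.keras.layers.Dot(axes=1)([vu, vm])

model = tf.keras.Model([input_user, input_item], output)
```

Because both embeddings are unit length, the model's output is guaranteed to lie in [-1, 1], which is the cosine of the angle between the user and item embeddings.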

Cheers,
Raymond