In this lab exercise, why do we normalize the user vector again after passing the user features through the neural network? I thought we normalize the user features before feeding them into the neural network. Why do we need to normalize again after the neural network?
input_user = tf.keras.layers.Input(shape=(num_user_features,))  # shape must be a tuple
vu = user_NN(input_user)
vu = tf.linalg.l2_normalize(vu, axis=1)
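As a sanity check, here is a minimal NumPy sketch (the embedding values are hypothetical, and `l2_normalize` is a stand-in mirroring `tf.linalg.l2_normalize`) showing that dotting L2-normalized vectors ignores magnitude and measures only direction:

```python
import numpy as np

def l2_normalize(v, axis=1):
    # Divide each row by its Euclidean norm, mirroring tf.linalg.l2_normalize
    return v / np.linalg.norm(v, axis=axis, keepdims=True)

# Hypothetical user and item embeddings (batch of 2, embedding dim 3)
vu = np.array([[3.0, 4.0, 0.0], [1.0, 2.0, 2.0]])
vm = np.array([[6.0, 8.0, 0.0], [0.0, 0.0, 5.0]])

vu_n = l2_normalize(vu)
vm_n = l2_normalize(vm)

# Row-wise dot products of the normalized embeddings = cosine similarity
scores = np.sum(vu_n * vm_n, axis=1)
print(scores)  # first pair points in the same direction, so its score is 1.0
```

The first pair of vectors is parallel but has different magnitudes; after normalization their dot product is exactly 1, which is the point of the extra normalization step.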
That normalization serves a different purpose. It removes the magnitudes of the user embedding and item embedding produced by the neural networks, so that when the normalized embeddings are dotted together, the result describes only how similar their "orientations" are. Note the following formula for the dot product:
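The standard identity (writing $\theta$ for the angle between the user embedding $\mathbf{v}_u$ and the item embedding $\mathbf{v}_m$) is:

```latex
\mathbf{v}_u \cdot \mathbf{v}_m = \|\mathbf{v}_u\| \, \|\mathbf{v}_m\| \cos\theta
```

After L2 normalization, $\|\mathbf{v}_u\| = \|\mathbf{v}_m\| = 1$, so the dot product reduces to $\cos\theta$: a pure measure of directional similarity, unaffected by how large either embedding happens to be.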