I finished this assignment without any problems. Then I wanted to run some experiments on my own computer, but I ran into issues running the code. I think the error comes from using a different Keras version: I have Keras 3, while Coursera uses Keras 2. A screenshot of the issue is here:
Since the Coursera course uses Keras 2, the easiest solution is to downgrade Keras to version 2: pip install keras==2.11.0. Make sure to restart your kernel after downgrading.
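You can confirm the downgrade took effect after the restart with a quick check like this (the exact 2.x version you pin may differ):
import keras
print(keras.__version__)  # should now report a 2.x release such as 2.11.0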
If you prefer to keep Keras 3, try wrapping vu in a Lambda layer instead of using tf.linalg.l2_normalize directly. For example:
import tensorflow as tf
from tensorflow.keras.layers import Lambda
vu = Lambda(lambda x: tf.linalg.l2_normalize(x, axis=1))(vu)
For those who want to keep Keras 3 like me, I’d like to point out that you also need to replace all layer.output.shape.as_list() calls with list(layer.output.shape) in the public_tests.py file.
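For illustration, the change looks roughly like this (the surrounding variable name is mine, not the exact code in public_tests.py):
# Keras 2: symbolic tensor shapes expose .as_list()
# expected_shape = layer.output.shape.as_list()
# Keras 3: output shapes behave like plain tuples, so convert explicitly
expected_shape = list(layer.output.shape)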
I tested the same code on my local computer with Keras version 3.8.0 and had to make the following changes:
class L2Normalization(tf.keras.layers.Layer):
    # Custom layer that applies tf.linalg.l2_normalize inside call(),
    # so the raw TF op never touches a Keras 3 symbolic tensor directly
    def call(self, x):
        return tf.linalg.l2_normalize(x, axis=1)
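One caveat I’d add that isn’t part of the assignment: because L2Normalization is a custom layer, loading a saved model later needs the class passed back in. A minimal sketch, with the file name being a made-up example:
model.save("content_based_model.keras")  # hypothetical file name
loaded = tf.keras.models.load_model(
    "content_based_model.keras",
    custom_objects={"L2Normalization": L2Normalization},
)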
Added the line below to import Lambda:
from tensorflow.keras.layers import Input, Dense, Dot, Lambda
# create the user input and point to the base network
input_user = tf.keras.layers.Input(shape=(num_user_features,))
vu = user_NN(input_user)
vu = L2Normalization()(vu) # Use the custom L2Normalization layer
# create the item input and point to the base network
input_item = tf.keras.layers.Input(shape=(num_item_features,))
vm = item_NN(input_item)
vm = L2Normalization()(vm) # Use the custom L2Normalization layer
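For completeness, here is roughly how the two branches are combined and compiled; the dot product matches the assignment’s structure, but the optimizer and loss settings below are placeholders, so use the values from the lab:
# combine the two normalized vectors with a dot product to get the prediction
output = tf.keras.layers.Dot(axes=1)([vu, vm])
model = tf.keras.Model([input_user, input_item], output)
# illustrative settings only; keep the assignment's actual optimizer and loss
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss=tf.keras.losses.MeanSquaredError())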
Upon compiling and running the model, the output looks like this:
The run on my local computer shows a test-set loss similar to the training-set loss, and the predictions are similar to those in the Coursera assignment.
My conclusion is that with newer versions of Keras some changes to the layers have to be made (as above), but the model trains and behaves as expected.
That’s a great analysis! Your modifications effectively adapt the Coursera code for Keras 3.8.0, ensuring compatibility while preserving the functionality of the original model. The custom L2Normalization layer is a clever workaround for handling tf.linalg.l2_normalize, avoiding direct TensorFlow operations on Keras tensors.
Your conclusion is spot on: Keras 3 introduced changes that require these modifications, but the model performs the same once properly adjusted. The fact that the loss and predictions from your local run match the Coursera version suggests that these updates don’t affect the model’s performance.