C3_W3_Exercise 04_Incorrect accuracy and confusion matrix

Hello.

I have been stuck on exercise 4 of the week 3 question duplicates assignment.

The issue I am facing is that my classify function outputs the incorrect accuracy and confusion matrix. All of the unit tests pass for exercises 1-3, and I have gone through my code looking for the source of the issue, but I haven't found anything.

20/20 [==============================] - 0s 7ms/step
Accuracy 0.3623046875
Confusion matrix:
[[ 585  733]
 [5797 3125]]

I suspect there is a simple error in my classify function, but even after looking through the community I am unable to find what the issue may be. Here’s a walkthrough of each step in my solution:

  1. Split the model’s output into v1 and v2.
  2. Compute d by taking the similarity between v2 and v1 with tf.matmul, then applying tf.math.reduce_sum.
  3. Compute y_pred as where d is greater than threshold and cast to float.
  4. Calculate correct predictions with y_pred == y_test, cast the result to float, then compute the accuracy using tf.math.reduce_mean.
  5. Create the confusion matrix using y_pred and y_test.
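In code, the steps above look roughly like this toy sketch (names, shapes, the threshold, and the labels are placeholders, not my exact notebook code):

```python
import tensorflow as tf

tf.random.set_seed(1)
out = tf.random.normal((8, 256))           # stand-in for the model's output
v1, v2 = out[:, :128], out[:, 128:]        # step 1: split into v1 and v2

# step 2: similarity between v2 and v1 via tf.matmul, then tf.math.reduce_sum
d = tf.math.reduce_sum(tf.linalg.matmul(v1, v2, transpose_b=True), axis=1)

# step 3: compare against the threshold and cast to float
threshold = 0.7
y_pred = tf.cast(d > threshold, tf.float64)

# step 4: correct predictions -> cast to float -> mean accuracy
y_test = tf.constant([1, 0, 1, 0, 1, 0, 1, 0], dtype=tf.float64)
accuracy = tf.math.reduce_mean(tf.cast(y_pred == y_test, tf.float64))

# step 5: confusion matrix from y_test and y_pred
cm = tf.math.confusion_matrix(y_test, y_pred)
print(accuracy.numpy(), cm.numpy())
```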

Any help would be appreciated!

Check if this helps you

Also, there are many similar threads on your query; use the search option to find them.

Thank you for the recommendation.

I already implemented this solution. I use n_feat to reshape v1 and v2 to (10240, 128).

I do suspect the issue is with the classify function. The loss values output by train_model match what others in the community report:

Epoch 1/2
349/349 [==============================] - 33s 83ms/step - loss: 127.0940 - val_loss: 126.8325
Epoch 2/2
349/349 [==============================] - 9s 24ms/step - loss: 126.6022 - val_loss: 126.3758

The output of model.summary() also matches the expected output in the notebook.

Could there be other problems that I could be missing?

You can send a screenshot of your classify function graded-cell code by personal DM.

Extremely sorry, @reyaes, I missed your DM reply among other notifications.

Corrections required

  1. In GRADED FUNCTION Siamese
    Add the normalizing layer using the Lambda function; you also need to set the axis argument so normalization is applied along the correct dimension.
    In the same graded cell, specify the shape for input1 and input2 as the tuple (1,).

  2. In GRADED FUNCTION TripletLossFn
    Your code is mostly right, but when calculating triplet_loss1 and triplet_loss2 you are missing the zero argument for tf.maximum. The instructions say to subtract positive from margin and add closest_negative, so it should be written as (0.0, (margin - positive) + closest_negative); the same applies to triplet_loss2.

  3. In GRADED FUNCTION train_model, you are not supposed to use the len function in the model argument. text_vectorizer is your function key, and vocabulary_size is the size of your vocabulary for vocab_size in the model code. You don't need get_vocabulary().

When you make these corrections, first clear the kernel output, then restart and re-run each cell as you go for a successful resolution of your issue. Let me know if you are still encountering any issues.
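A rough sketch of points 1 and 2 with toy values (placeholder names and numbers, not the notebook's exact code):

```python
import tensorflow as tf

# 1. Normalizing layer via Lambda, with an explicit axis so each embedding
#    vector is normalized along its feature dimension:
normalize = tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=-1))
v = normalize(tf.constant([[3.0, 4.0]]))    # rows become unit-length

# 2. Triplet-loss hinge: "subtract positive from margin and add
#    closest_negative", with 0 as the other tf.maximum argument:
margin = 0.25
positive = tf.constant([0.9, 0.4])          # toy per-example similarities
closest_negative = tf.constant([0.3, 0.5])
triplet_loss1 = tf.maximum(0.0, (margin - positive) + closest_negative)
print(v.numpy(), triplet_loss1.numpy())
```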

No worries about the late reply. Thank you for the guidance.

  1. I set the axis argument of the normalizing function as you recommended. I tried axis=0, axis=1, and axis=-1.
  2. I also changed the input shapes to (1,) using tuple.
  3. I added the tuples as you recommended for the triplet loss calculation.
  4. I made the change to the train_model function to not use the len function.

However, my classify function still outputs the wrong result.

I also tried deleting the notebook and fetching the latest version to start over, and rebooting the server did not help either.

Interestingly, the loss values from calling model.fit now seem correct.

Epoch 1/2
349/349 [==============================] - 33s 83ms/step - loss: 30.0774 - val_loss: 12.0250
Epoch 2/2
349/349 [==============================] - 8s 24ms/step - loss: 8.5000 - val_loss: 9.2201

Could there be issues with my notebook instance that Coursera is running? I could contact Coursera support, but I am not sure what else could be the issue.

keep this (1)

Also, point 3 only mentions the len function issue; change get_vocabulary() to text_vectorizer.vocabulary_size.
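For example (a toy illustration, not the notebook's code), vocabulary_size() already returns an integer, so neither len() nor get_vocabulary() is needed:

```python
import tensorflow as tf

text_vectorizer = tf.keras.layers.TextVectorization()
text_vectorizer.adapt(["is this a duplicate", "are these duplicates"])

# Includes the padding and OOV tokens in the count
vocab_size = text_vectorizer.vocabulary_size()
print(vocab_size)
```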

I found the source of the error was indeed in the classify function. I was using tf.linalg.matmul instead of element-wise multiplication.

I was confused by this because we use tf.linalg.matmul to calculate similarity in the loss function, so I thought I had to do the same in the classify function. The instructions do say “Multiply v1 and v2 element-wise,” which I overlooked.

I hope others with the same issue can find this solution helpful.
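For anyone comparing the two operations, here is a minimal toy illustration (not the assignment's exact code) of why the element-wise version is what classify needs:

```python
import tensorflow as tf

tf.random.set_seed(0)
# Pretend v1 and v2 are L2-normalized embeddings of shape (batch, n_feat)
v1 = tf.math.l2_normalize(tf.random.normal((4, 128)), axis=1)
v2 = tf.math.l2_normalize(tf.random.normal((4, 128)), axis=1)

# Element-wise multiply + reduce_sum: one similarity per question pair -> (4,)
d = tf.math.reduce_sum(v1 * v2, axis=1)

# matmul instead computes ALL pairwise similarities -> (4, 4);
# only its diagonal matches d, which is why the accuracy came out wrong
sim_matrix = tf.linalg.matmul(v1, v2, transpose_b=True)
print(d.shape, sim_matrix.shape)
```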