I got the same output for UNQ_C7 as the expected output, but the grader output says mine is incorrect. Here is my code:
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: viterbi_backward
def viterbi_backward(best_probs, best_paths, corpus, states):
    '''
    This function returns the best path.
    '''
    # Get the number of words in the corpus
    # which is also the number of columns in best_probs, best_paths
    m = best_paths.shape[1]

    # Initialize array z, same length as the corpus
    z = [None] * m

    # Get the number of unique POS tags
    num_tags = best_probs.shape[0]

    # Initialize the best probability for the last word
    best_prob_for_last_word = float('-inf')

    # Initialize pred array, same length as corpus
    pred = [None] * m
    ### START CODE HERE (Replace instances of 'None' with your code) ###
    ## Step 1 ##

    # Go through each POS tag for the last word (last column of best_probs)
    # in order to find the row (POS tag integer ID)
    # with highest probability for the last word
    for k in range(num_tags): # complete this line

        # If the probability of POS tag at row k
        # is better than the previously best probability for the last word:
        if best_probs[k, m - 1] >= best_prob_for_last_word: # complete this line

            # Store the new best probability for the last word
            best_prob_for_last_word = best_probs[k, m - 1]

            # Store the unique integer ID of the POS tag
            # which is also the row number in best_probs
            z[m - 1] = k

    # Convert the last word's predicted POS tag
    # from its unique integer ID into the string representation
    # using the 'states' list
    # store this in the 'pred' array for the last word
    pred[m - 1] = states[z[m - 1]]
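    # Optional sanity check (assumes numpy is imported as np): unless there are
    # ties in the last column, the loop above should give the same index as
    #   z[m - 1] == int(np.argmax(best_probs[:, m - 1]))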
    ## Step 2 ##
    # Find the best POS tags by walking backward through the best_paths
    # From the last word in the corpus to the 0th word in the corpus
    for i in range(m - 1, 0, -1): # complete this line

        # Retrieve the unique integer ID of
        # the POS tag for the word at position 'i' in the corpus
        pos_tag_for_word_i = best_paths[z[i], i]

        # In best_paths, go to the row representing the POS tag of word i
        # and the column representing the word's position in the corpus
        # to retrieve the predicted POS for the word at position i-1 in the corpus
        z[i - 1] = best_paths[pos_tag_for_word_i, i]

        # Get the previous word's POS tag in string form
        # Use the 'states' list,
        # where the key is the unique integer ID of the POS tag,
        # and the value is the string representation of that POS tag
        pred[i - 1] = states[z[i - 1]]

    ### END CODE HERE ###
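For comparison, here is a minimal, self-contained sketch of how I read the intent of the backward pass. It drops the unused 'corpus' argument, and the toy 'best_probs' / 'best_paths' / 'states' values are invented purely so the snippet runs. The spot worth double-checking in the code above is Step 2: once z[i] holds the POS tag ID for word i, best_paths only needs to be indexed once per step to get z[i - 1]; the version above looks it up twice (effectively best_paths[best_paths[z[i], i], i]), which can yield a different path than the grader's test expects even when the notebook's example output happens to match.

import numpy as np

def viterbi_backward_sketch(best_probs, best_paths, states):
    # Number of words in the corpus = number of columns
    m = best_paths.shape[1]
    z = [None] * m
    pred = [None] * m

    # Step 1: the tag for the last word is the row with the highest
    # probability in the last column of best_probs
    z[m - 1] = int(np.argmax(best_probs[:, m - 1]))
    pred[m - 1] = states[z[m - 1]]

    # Step 2: walk backward; z[i] is already the tag ID for word i,
    # so best_paths is indexed only once per step
    for i in range(m - 1, 0, -1):
        z[i - 1] = best_paths[z[i], i]
        pred[i - 1] = states[z[i - 1]]

    return pred

# Toy example (numbers are invented, only to show the indexing):
states_toy = ['NN', 'VB', 'DT']
best_probs_toy = np.log(np.array([[0.3, 0.1, 0.2],
                                  [0.1, 0.6, 0.1],
                                  [0.6, 0.3, 0.7]]))
best_paths_toy = np.array([[0, 2, 1],
                           [0, 0, 2],
                           [0, 1, 1]])
print(viterbi_backward_sketch(best_probs_toy, best_paths_toy, states_toy))
# -> ['NN', 'VB', 'DT']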