Week 3 - Neural Machine Translation exercise - attention map is not interpretable

This has been a great programming exercise. I am very interested in the plot of attention weights. However, the plot produced by plot_attention_map() is not as interpretable as the one in Figure 8. In particular, the input characters "Tuesday" are assigned high attention weights for the output month "10". Is this a normal observation, or did I do something wrong?

Figure 8 is a depiction of the predicted outputs: for each output step it shows only the label with the highest attention value.
The plot in the exercise, by contrast, shows the values for all of the outputs, so smaller weights on other input characters are visible too.
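To illustrate the difference, here is a minimal sketch (with a made-up 3x4 attention matrix, not values from the actual exercise). The "Figure 8 style" view keeps only the largest weight per output step, while the exercise-style view keeps the full distribution:

```python
import numpy as np

# Hypothetical attention weights: rows = output steps, cols = input characters.
# Each row sums to 1 (a softmax distribution over the input).
attn = np.array([
    [0.70, 0.20, 0.05, 0.05],
    [0.10, 0.60, 0.20, 0.10],
    [0.05, 0.15, 0.50, 0.30],
])

# Exercise-style view (plot_attention_map): show every weight as-is.
full_view = attn

# Figure-8-style view: zero out everything except the per-row maximum,
# so only the single highest-attention input character remains visible.
row_max = attn.max(axis=1, keepdims=True)
argmax_view = np.where(attn == row_max, attn, 0.0)

print(argmax_view)
```

With the full view, an output like "10" can still show noticeable weight on inputs like "Tuesday", even though a different input carries the maximum; that is why the exercise plot looks noisier than Figure 8.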