[Week 3] Loading pre-trained model for machine translation gives very bad results

In the machine translation assignment I passed all the tests and everything seemed to be working well. But when I load the pre-trained model, this is the output I get:

I was able to submit the assignment successfully and complete it. It seems that either I implemented something wrong that isn't checked by the test cases, or the pre-trained weights it loads are not good.

So it seems that after training the model myself for 100 epochs I get better results. They are still bad, but at least they are far better than the ones I get when loading the pre-trained weights, which probably indicates there is a problem with the pre-trained weights.
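Roughly, instead of calling model.load_weights(), I train with something like this. This is only a sketch: variable names such as Xoh, Yoh and n_s follow the notebook's usual conventions and are assumptions, not something quoted from this thread:

```python
import numpy as np
from tensorflow.keras.optimizers import Adam

# Assumed to exist from the notebook: the attention model `model`,
# one-hot inputs Xoh (m, Tx, human_vocab), targets Yoh (m, Ty, machine_vocab),
# and the post-attention LSTM state size n_s.
m = Xoh.shape[0]
s0 = np.zeros((m, n_s))             # initial hidden state of the post-attention LSTM
c0 = np.zeros((m, n_s))             # initial cell state of the post-attention LSTM
outputs = list(Yoh.swapaxes(0, 1))  # one (m, machine_vocab) target per output time step

model.compile(optimizer=Adam(learning_rate=0.005),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit([Xoh, s0, c0], outputs, epochs=100, batch_size=100)
```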
Results I got after 100 epochs:

source: 3 May 1979
output: 1999-05-05 

source: 5 April 09
output: 1994-09-09 

source: 21th of August 2016
output: 2006-08-20 

source: Tue 10 Jul 2007
output: 2007-07-20 

source: Saturday May 9 2018
output: 2009-09-19 

source: March 3 2001
output: 2003-03-13 

source: March 3rd 2001
output: 2013-03-13 

source: 1 March 2001
output: 2013-00-10 

I think the pre-trained weights work fine. Here are the results I got:

Then I guess there must be a problem with my code. But all the test cases pass, and the model improves when I train it myself, so I don't know how to debug the problem.

This may come a bit late, but it may be useful for others running into this problem (like myself). The problem is in the for loop in modelf(). Notice that your output is always a static number (just ones or just twos), so look at why you might be predicting the same number on every iteration of that loop.
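For concreteness, here is a minimal sketch of what that for loop usually looks like, assuming the assignment's helper names (one_step_attention, post_activation_LSTM_cell, output_layer, a, s0, c0, Ty), which are not spelled out in this thread. The frequent mistake is feeding the attention step and the LSTM cell the initial states s0/c0 on every iteration instead of the updated s/c, so the context vector never changes and the model predicts the same character at every output step:

```python
# Decoding loop inside modelf() -- a sketch, not the graded solution.
s = s0          # post-attention LSTM hidden state (updated every iteration)
c = c0          # post-attention LSTM cell state (updated every iteration)
outputs = []
for t in range(Ty):
    # Attention must see the *current* hidden state s, not the initial s0;
    # otherwise every time step gets an identical context vector.
    context = one_step_attention(a, s)
    # Carry the returned states into the next iteration instead of
    # re-passing [s0, c0] here.
    s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])
    out = output_layer(s)
    outputs.append(out)
```

If your own loop reuses s0/c0 (or never reassigns s and c from the LSTM cell's return values), every pass through the loop computes the same thing, which matches the "static number" output described above.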