C5W4 - A1 - Having an unusual "'Transformer' object has no attribute 'target_vocab_size'" error

Whew !

So I'm almost finished with this specialization, but I'm getting a strange error from the final grader, and I'm not sure what (or where, for that matter) the problem could be.

I pass all the unit tests fine, but the final grader throws this message when grading the Transformer model (Q8).

In full, this says:

Code Cell UNQ_C8: Unexpected error (AttributeError("'Transformer' object has no attribute 'target_vocab_size'")) occurred during function check. We expected function `Transformer` to return type <class 'tuple'>. Please check that this function is defined properly. 
If you see many functions being marked as incorrect, try to trace back your steps & identify if there is an incorrect function that is being used in other steps.
This dependency may be the cause of the errors.

I already tried searching and could not seem to find anyone with this specific problem…

I also checked the shape of dec_output and I think that is okay.

I’m really not sure where to start/what could be wrong…

Oh, I just wanted to ask: when we complete the specialization, do we lose access to the ungraded assignments as well? Or just the graded ones? Factoring this in before I take the last quiz.

It is safe to assume you lose access to all of the notebooks.

When you call self.final_layer(…), you only need to specify dec_output, not the vocab size.
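For context, here is a minimal sketch of what that last step of the Transformer's call method typically looks like (the variable names are the usual ones from the assignment; the commented-out line is just one hypothetical pattern that would produce exactly that AttributeError, not a claim about your code):

```python
# dec_output comes from the decoder with shape (batch_size, tar_seq_len, embedding_dim)
final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)

# Something like the line below would raise
# AttributeError: 'Transformer' object has no attribute 'target_vocab_size',
# because target_vocab_size is only an __init__ argument, never stored on self:
# final_output = self.final_layer(dec_output, self.target_vocab_size)
```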

@TMosh… yes, that is what I am doing, which is why I'm finding this confusing. We don't even have access to that variable in the final layer.

Please check your personal messages for instructions.

I think you have until your next payment period expires, but you should assume you lose access to all assignments and quizzes. If you want to save any of the assignments, better do it now. This is discussed on the DLS FAQ Thread.


Congrats on completing this specialization. Big achievement :tada: :tada:



So @TMosh, honestly I’m still not exactly sure…

I took your advice and made sure to use tf.math rather than np functions in my Encoder function (to be honest, since I'd never worked with TF before this specialization [and I realize it was not intended to be a thorough introduction to TF/Keras], I still feel a little confused about how sometimes you can/do use NumPy functions and other times it is a big 'no, no'), then ran it through the grader again…

100%! I thought, 'Ah-ha! That must have been it!'

But just to be 100% sure, I reverted to my NumPy function and ran it through the grader again and…

Still 100%!?! (and no error as stated here).

I'm not sure if the grader always defaults to your highest grade and just ignores any other result, or if something was going funky on the backend at the time.

Don't know, but I'll make sure to stick with tf.math functions from now on.

The grader gives you a score on whatever code you currently submit, but for the purposes of the overall grade (shown at the top of the My Submissions page) it is smart enough to remember your best score overall. But there are several ways this can produce a confusing result:

  1. You typed new code but didn't press "Shift-Enter" on the function cell, so the tests in the notebook are still running the previous code, and so is the grader.
  2. You didn't "Save" before you submitted, so the grader is still seeing an earlier version of the code. Some of the earlier notebooks do an autosave, but I think some of the C4 and C5 ones do not. The best idea is to play it safe and always click "Save" before submitting to the grader. Not Checkpoint, but an actual "Save".

It is possible to mix numpy and TF operations, but you have to be careful. The really critical case in which you can’t is if the function in question is part of the compute graph for which you need to compute gradients. In that graph, every step must be a TF function because the gradients are computed by TF. If you put a numpy function in the chain, then you break the graph and get no gradients. But you can use numpy or straight python for scalar computations and the like.
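A tiny generic illustration of the difference (not assignment code, just a sketch):

```python
import numpy as np
import tensorflow as tf

x = tf.Variable(2.0)

# All-TF chain: the tape can differentiate through every op.
with tf.GradientTape() as tape:
    y = tf.math.sqrt(x) * 3.0
print(tape.gradient(y, x))  # d/dx 3*sqrt(x) = 3/(2*sqrt(x)) ≈ 1.06

# Route the value through NumPy and the chain is broken:
with tf.GradientTape() as tape:
    y = tf.constant(np.sqrt(x.numpy()) * 3.0)  # the value left the TF graph
print(tape.gradient(y, x))  # None -- no gradient path from y back to x

# Plain Python / NumPy is fine for constants you never differentiate through,
# e.g. a fixed scaling factor computed once.
scale = np.sqrt(64.0)
```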


Thanks for the suggestion, Paul; I was not aware a save/run was required. If anything, I presumed initiating a submission would save/access the current state on its own.

In any case… I tried it again this morning and still couldn't replicate the error I was getting yesterday, either with the original or the very slightly modified function call (NumPy vs. TF).

:man_shrugging: However, it works now.

Thanks again @Tmosher for peeking at my code.
