C4W1 Exercise 3: tf.nn.log_softmax leads to TypeSpec error, but test cases pass!

C4W1 Exercise 3:
I am getting the error below when using tf.nn.log_softmax in the Decoder. The weird part is that all the test cases pass, yet the implementation check fails.

TypeError: Could not build a TypeSpec for KerasTensor(type_spec=TensorSpec(shape=(64, 15, 12000), dtype=tf.float32, name=None), name='dense/LogSoftmax:0', description="created by layer 'dense'") of unsupported type <class 'keras.src.engine.keras_tensor.KerasTensor'>.

Could someone please help me resolving the error?

@Amitabh_G Hmm… Well, it is coming out with the right shape, but not the right type.

I am a little curious about how/why you are using tf.nn.log_softmax?

Keep in mind that the layers are defined/outlined in the first half of the class definition.

Not to say ‘too much’, but you should be able to do something simple like

activation='log_softmax'

if you can figure out the right place to put it (an exercise up to you).
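As a rough sketch of the suggestion above (not the assignment's actual code; VOCAB_SIZE and the feature size are assumed values loosely matching the (64, 15, 12000) shape in the error message), the log-softmax can be applied by the final Dense layer itself rather than by wrapping its output in tf.nn.log_softmax afterwards:

```python
import tensorflow as tf

# Assumed sizes, not the assignment's exact values.
VOCAB_SIZE = 12000

# Bake the log-softmax into the Dense layer as its activation. Passing the
# callable works on any TF 2.x version; on recent Keras versions the string
# form activation='log_softmax' is also accepted.
output_layer = tf.keras.layers.Dense(VOCAB_SIZE, activation=tf.nn.log_softmax)

# A dummy batch of decoder states: (batch, sequence, features).
x = tf.random.normal((64, 15, 256))
log_probs = output_layer(x)
print(log_probs.shape)  # (64, 15, 12000)
```

Because the layer outputs log-probabilities, exponentiating and summing over the last axis should give values close to 1.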


Thanks for the tip @Nevermnd, but I had already tried what you suggested, and I tried it again just now. I am still getting the same error. By the way, I did some googling and found this relevant link:
https://stackoveflow.com/questions/65383964/typeerror-could-not-build-a-typespec-with-type-kerastensor

Going through it, it seems like a backend issue, but I can't say for sure. I need to dig deeper. Thanks again!

@Amitabh_G Okay, by way of advice: as memory serves, this last course is suddenly heavily object-oriented, much more so than any of the earlier ones.

So for example you want to get used to doing:

self.output_layer-style calls rather than trying to implement things with the Sequential API.

Unfortunately I have not set up TensorFlow to work with my local Jupyter notebooks yet (and actually have to figure out how to do that), or I'd run a type test on it for you and at least tell you what it should be.

Personally, I am leaning more towards you having a small error in your code rather than a backend issue. Perhaps a mentor who has these notebooks active can give you more input.
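To make the subclassing style concrete, here is a minimal sketch (the class name, layer choices, and sizes are hypothetical, not the assignment's actual architecture): layers are created in __init__ and then invoked via self.&lt;layer&gt; inside call(), rather than chained through a Sequential model.

```python
import tensorflow as tf

class TinyDecoder(tf.keras.layers.Layer):
    def __init__(self, vocab_size, units):
        super().__init__()
        # Layers are *defined* here, in the first half of the class...
        self.embedding = tf.keras.layers.Embedding(vocab_size, units)
        self.rnn = tf.keras.layers.LSTM(units, return_sequences=True)
        self.output_layer = tf.keras.layers.Dense(
            vocab_size, activation=tf.nn.log_softmax)

    def call(self, target):
        # ...and *used* here via self.<layer> calls, passing the constructor
        # arguments through rather than any global variables.
        x = self.embedding(target)
        x = self.rnn(x)
        return self.output_layer(x)

decoder = TinyDecoder(vocab_size=100, units=8)
log_probs = decoder(tf.zeros((2, 5), dtype=tf.int32))
print(log_probs.shape)  # (2, 5, 100)
```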

Hi @Amitabh_G

You are supposed to use tf.nn.log_softmax, that is correct.

The issue is not with that. You are probably not passing the right argument somewhere in your code, so either share a complete screenshot of your error or refer to the link below to find the solution.

Also, remember to provide a complete screenshot of the error; if it is lengthy, take two separate screenshots, as that gives information on why you got the error.

Please make sure not to post code here.

Regards
DP

Yes, the assignment is written in an object-oriented fashion now, which was not the case before. The course uses the functional API of TensorFlow now and doesn't use the Sequential API. As you suggested, I have taken care to implement the classes properly. I will surely recheck my code; the best-case scenario is that I find the error in the code and resolve it. Thanks @Nevermnd for your help!


Sure, @Deepti_Prasad here is the full traceback:

@Amitabh_G Ha! Hmm… That traceback gives one basically nothing to go on; it fails almost immediately.


Your code is incorrect. Read the instructions carefully; also remember that each graded cell is assigned arguments, and you are supposed to use those arguments.

Another mistake I noticed: for calling the decoder from the previous DecoderLayer, you are not supposed to use DECODER; that is incorrect. Remember, those graded cells come with __init__ and call functions.

So, in this cell, when you are using the decoder, it should be called as self.decoder.

But I suspect more mistakes, as I can see you are incorrectly referencing variables (global vs. local): the capitalized VOCAB_SIZE and UNITS are global variables here.

@Deepti_Prasad he is not the one calling Decoder; what you see is in the given code for the test function.

@Amitabh_G though DP is correct, make sure you are using the right value for ‘units’ in your dense layer.

I know that, @Nevermnd. I know what he is using and that he didn't write that DECODER.

But I am letting him find out for himself whether those are the mistakes he has made.

@Deepti_Prasad thanks for your inputs. I have not modified or changed the arguments in the graded cell. The cell in question is the implementation check cell, so it is frozen. I have noticed the capitalized VOCAB_SIZE and UNITS, and those are not used as arguments in the definitions.
@Nevermnd Sure, the units in the dense layer is the vocab size.

@Amitabh_G you are correct there, then. Just for the sake of it, try restarting the kernel and running everything again.

Otherwise, DM your code to @Deepti_Prasad if that does not work.

I am not talking about the test cell arguments, @Amitabh_G.

When the test cell showed that DECODER was used to call the decoder, the error was thrown. So his call arguments need to be checked.

You can send a screenshot of your code by personal DM.

Click on my name and then Message.

Ok, I have sent you a DM with the Decoder class implementation. Please have a look. Thanks!

Hi @Amitabh_G

For the graded Decoder class, in the call section, at the code line "Get the embedding for the input": you are using the incorrect input. Remember, as mentioned before, you are supposed to write your code using the assigned arguments. So if you check the call section, your input should be target, not the Keras input shape you used when calling the decoder code.

Regards
DP

Thanks @Deepti_Prasad, that works! I realize the mistake now. @Nevermnd, I was passing the shape to the Embedding layer instead of the actual input.
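For anyone hitting the same thing, here is a small illustration of the mistake (the names are hypothetical, not the assignment's code): an Embedding layer must be called on the token-id tensor itself, not on its shape.

```python
import tensorflow as tf

embedding = tf.keras.layers.Embedding(input_dim=100, output_dim=8)
target = tf.zeros((2, 5), dtype=tf.int32)  # a batch of token ids

# Correct: embed the tokens themselves.
embedded = embedding(target)
print(embedded.shape)  # (2, 5, 8)

# Passing target.shape (or a Keras Input's shape) instead would embed the
# shape values themselves, or fail with a type error inside a Keras model.
```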
