When I try to compute the accuracy with the Siamese model, I get an error that I can't solve (see the trace below).
I suspect there is a discrepancy between the model I build and the shape of the weights in the pretrained model.
I tried reloading the notebook several times, to no avail.
How can I investigate this error and find the problem?
I didn't find much useful information on the web.
LayerError: Exception passing through layer Parallel (in pure_fn):
layer created in file [...]/<ipython-input-20-54e6716dd7ce>, line 29
layer input shapes: ShapeDtype{shape:(512, 64), dtype:int64}
File [...]/trax/layers/base.py, line 707, in __setattr__
super().__setattr__(attr, value)
File [...]/trax/layers/base.py, line 454, in weights
f'Number of weight elements ({len(weights)}) does not equal the
ValueError: Number of weight elements (512) does not equal the number of sublayers (2) in: Parallel_in2_out2[
Serial[
Embedding_41699_128
LSTM_128
Mean
Normalize
]
Serial[
Embedding_41699_128
LSTM_128
Mean
Normalize
]
]
I have run into the exact same issue. I think I have correctly followed the hint: "# use batch size chunks of questions as Q1 & Q2 arguments of the data generator, e.g. x[i:i + batch_size]".
Here is the data generator from my code:
q1, q2 = next(data_generator(test_Q1[i:i + batch_size], test_Q2[i:i + batch_size], pad=vocab[''], batch_size=batch_size, shuffle=False))
Then when I call model(q1, q2), the same issue happens again.
Hint: use vocab['<PAD>'] for the pad argument of the data generator. That was the hint given for this question. You are using pad = vocab['']; I think applying the hint will let you solve it.
Thanks for the reply, but I was already using vocab['<PAD>']; it just didn't come over correctly when I copied and pasted.
The screenshot below is a test I made to replicate the problem, and it hits the exact same error.
I have this issue where I call next() on the generator to get q1 and q2 inputs, but when I call the model with q1 and q2 it fails.
It's a shame there isn't a single example of how to run Trax's Parallel model for inference. But @anchyzas gave the right answer: pass both question batches as a single tuple (or list), since Parallel takes all of its inputs as the first argument.
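To make the tuple point concrete: in Trax, a layer's call signature is roughly `__call__(self, x, weights=None, ...)`, so in `model(q1, q2)` the second positional argument `q2` lands in the `weights` slot. That is exactly why the trace complains that the number of weight elements (512, the batch dimension of `q2`) doesn't equal the number of sublayers (2). Below is a minimal sketch in plain Python (no Trax required); `FakeParallel` is a hypothetical stand-in that mimics this call convention, not real Trax code.

```python
class FakeParallel:
    """Toy two-branch layer mimicking Trax's __call__(x, weights=None) convention."""

    def __init__(self, n_branches=2):
        self.n_branches = n_branches

    def __call__(self, x, weights=None):
        if weights is not None:
            # Trax distributes `weights` over the sublayers and raises
            # when their count doesn't match the number of sublayers.
            if len(weights) != self.n_branches:
                raise ValueError(
                    f'Number of weight elements ({len(weights)}) does not '
                    f'equal the number of sublayers ({self.n_branches})')
        # Correct usage: x is a tuple/list holding one input per branch.
        q1, q2 = x
        return q1, q2  # identity branches, just to show the routing


model = FakeParallel()
q1 = [[1, 2]] * 512  # stand-in for a (512, 64) batch of question 1 ids
q2 = [[3, 4]] * 512  # stand-in for the matching batch of question 2 ids

# Wrong: q2 is interpreted as `weights` -> 512 "weight elements" vs 2 sublayers.
try:
    model(q1, q2)
except ValueError as e:
    print(e)

# Right: pass both batches together as the first (and only) input argument.
out1, out2 = model((q1, q2))
```

The same shape of fix applies to the real model: call `model((q1, q2))` rather than `model(q1, q2)`.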