Transposed Convolution: Keras vs manual computation


I doubt I'm allowed to post any code, so I'll simply explain, without code or numbers, what I did and what result I got.

In the week 3 quiz, we're asked to compute a transposed convolution by hand.
I've already passed the quiz, but I wondered whether I could reproduce the computation in Keras using a Conv2DTranspose layer.

I managed to get working code, with one Input layer and the Conv2DTranspose layer.
The summary() output indicates that my configuration for filters, stride, and padding is correct.
I also used the Constant initializer and passed it a matrix of weights containing the filter's values.

I built the filter matrix by putting all the values (row by row) into a single flat array, then reshaping it:

np.array([v11, v12, v13, v21, v22..., vnn]).reshape(f, f)

The same goes for the input, but with a different shape (following exactly the question in the quiz).
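The setup described above can be sketched as follows. All sizes and values here are illustrative assumptions, not the actual quiz numbers, and I use set_weights rather than the Constant initializer, which has the same effect of fixing the kernel:

```python
import numpy as np
from tensorflow import keras

f, stride = 2, 2  # illustrative filter size and stride
# Flat array in row-major order (rows first, then columns), then reshape
kernel = np.array([1., 2., 3., 4.]).reshape(f, f)

# No bias, so predict() returns the pure transposed convolution
layer = keras.layers.Conv2DTranspose(
    filters=1, kernel_size=f, strides=stride,
    padding="valid", use_bias=False)
model = keras.Sequential([keras.Input(shape=(2, 2, 1)), layer])

# Keras stores the Conv2DTranspose kernel as
# (height, width, out_channels, in_channels) -- hence (f, f, 1, 1)
layer.set_weights([kernel.reshape(f, f, 1, 1)])

# The input is also 4-D: (batch, height, width, channels)
x = np.array([1., 2., 3., 4.]).reshape(1, 2, 2, 1)
y = model.predict(x, verbose=0)  # shape (1, 4, 4, 1)
```

The (f, f, 1, 1) weight shape is expected: with a single input channel and a single filter, the last two axes are both 1.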

However, when I ask the “network” to predict(), the results are not what the quiz expects: the shape is correct, but the values don't match at all.

I noticed that the shape of the weights is (f, f, 1, 1), which looks unexpected to me.

Can anyone offer any insight into this?


Hi Leryan,

Did you resolve this issue? If not, please send me your code through a personal message so I can have a look.

Found out why: the course and Keras don't use the same implementation. By observing the Keras result and working through it by hand, you can get a rough idea of how “same” padding is implemented and why it produces a different result from what the quiz expects.
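For reference, the by-hand procedure from the quiz can be sketched in NumPy as a scatter-add. This is a sketch under my reading of the course's definition: each input value scales a copy of the filter, the copies are placed `stride` apart, overlapping contributions are summed, and `pad` border rows/columns are trimmed afterwards:

```python
import numpy as np

def conv2d_transpose_manual(x, k, stride=1, pad=0):
    """Transposed convolution computed the way the quiz does it by hand:
    scatter a scaled copy of the kernel for every input value, sum the
    overlaps, then trim `pad` rows/columns from every border."""
    h, w = x.shape
    f = k.shape[0]
    out = np.zeros((stride * (h - 1) + f, stride * (w - 1) + f))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + f,
                j * stride:j * stride + f] += x[i, j] * k
    if pad:
        out = out[pad:-pad, pad:-pad]
    return out

# Illustrative example: 2x2 input, 2x2 identity kernel, stride 2
x = np.array([[1., 2.], [3., 4.]])
k = np.eye(2)
out = conv2d_transpose_manual(x, k, stride=2)  # 4x4 output
```

With this in hand, comparing against the Keras output makes the discrepancy visible: Keras's “same” padding fixes the output size at stride × input size and crops the full scatter-add result to fit, which need not line up with the symmetric-padding convention the quiz assumes.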