Strange-looking model plot for the Siamese network

The plot for the Siamese network model shows that the two input layers merge into the same base_network, whereas I would have expected to see two base_networks running in parallel and then getting merged at the Lambda function. After all, the latter does take two separate inputs, and yet the plot makes it seem as if it’s taking a single (?, 128) vector.

What am I misunderstanding about the way the model architecture is rendered and/or the Siamese network?


@Amine.L, one point about Siamese networks that is explained in the first Siamese network video is that the base network has the same structure and the same weights in both branches:

In theory, each branch of the Siamese network could have its own separate copy of the base network as you suggest, but in practice, the simplest way to ensure that the weights are the same is to share that part of the model. You can see this more clearly in the second Siamese network video that discusses implementation:
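Here is a minimal sketch of what that sharing looks like in Keras; the layer sizes and names below are illustrative assumptions, not the exact course code:

```python
from tensorflow.keras import layers, Model

def build_base_network(input_shape=(28, 28)):
    """One base network; the layer sizes here are illustrative."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Flatten()(inputs)
    x = layers.Dense(128, activation='relu')(x)
    outputs = layers.Dense(128, activation='relu')(x)
    return Model(inputs, outputs, name='base_network')

base_network = build_base_network()

input_a = layers.Input(shape=(28, 28), name='input_a')
input_b = layers.Input(shape=(28, 28), name='input_b')

# Calling the SAME Model instance on both inputs reuses the same weights.
# This is why plot_model draws both input layers feeding one base_network node.
vector_a = base_network(input_a)
vector_b = base_network(input_b)
```

Because there is only one base_network object, plot_model has a single node to draw, even though that object is called twice during the forward pass.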

Hi
I have the same query. When we say that the base model is shared, does that mean that the inputs are concatenated before being fed to the network? How would the first hidden layer accept two inputs otherwise?

Hi Amine.L,

They make a base network and then use that same base network to create two separate vector outputs, which you can see at the end of the model; those two outputs are then checked for similarity or dissimilarity based on the Euclidean distance. See the images below; they should make this clearer.
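To make this concrete, here is a hedged sketch of the distance head, reusing the vector_a, vector_b, input_a, and input_b names from the snippet earlier in this thread (illustrative names, not necessarily the exact course implementation):

```python
from tensorflow.keras import backend as K
from tensorflow.keras import layers, Model

def euclidean_distance(vects):
    """Euclidean distance between the two branch outputs."""
    x, y = vects
    sum_square = K.sum(K.square(x - y), axis=1, keepdims=True)
    return K.sqrt(K.maximum(sum_square, K.epsilon()))

# The Lambda layer takes a LIST of two tensors, one from each call to the
# shared base network, and outputs a single distance value per input pair.
distance = layers.Lambda(euclidean_distance,
                         name='euclidean_distance')([vector_a, vector_b])
siamese = Model(inputs=[input_a, input_b], outputs=distance)
```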

I hope this clarifies your doubt!!!

Regards
DP


Hi Deepti
Thank you for the reply
Does it mean that there are really two branches instead of just one? It confuses me because I do not understand how the weights for the base network would get updated with two inputs unless they are concatenated or there are two separate branches.

They can get updated because both inputs pass through the same structure with the same weights.

Do not confuse concatenating the inputs with having two separate branches. A basic point to understand about a Siamese network is that the two inputs share similar characteristics. The pictures posted by @Wendy show two branches that have the same structure and weights; that is why a common base network is built, and then that same base network is applied separately to each input to get the two outputs that are compared for similarity.
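One quick way to confirm there is only a single set of weights, assuming the base_network and siamese models from the earlier snippets in this thread:

```python
# The Lambda layer has no weights, so the full Siamese model has exactly
# the same trainable variables as the base network -- not double.
print(len(base_network.trainable_weights))  # e.g. 4 for two Dense layers
print(len(siamese.trainable_weights))       # same count: the weights are shared

# During training, the gradients flowing back from both branches are added
# together, so one optimiser step updates the shared weights once.
```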

You may be confusing this with the case of multiple inputs giving multiple outputs. That is a totally different concept, where the branches have different weights: a weight matrix is computed first, the computation is vectorised, and then the model is trained. I am sharing a post link for this as well: Backpropagation in multi-input/multi-output neural networks - #4 by Deepti_Prasad

Reading that post will probably clarify the difference between a Siamese network and a multi-input network.

Regards
DP

Hi Deepti
Sorry for the late reply
The confusion I have is that I do not see how forward prop and back prop would occur with the given architecture without either concatenation or separate branching.
I’ll explain what I mean

Let's take one forward prop. According to my understanding of the given architecture, when the inputs are fed in, they would be concatenated. Before the Euclidean distance block, since the output vector would contain the vectors for both images, it would be split in half and then the Euclidean distance would be calculated.

The other way would be that one image is fed in and its vector is calculated and stored in memory; the second image is then fed in and its vector is also stored; then we apply the Euclidean distance.

My question is whether it is one of these two, or something different entirely.

Hello ainewbie,

I understand your confusion: you are mixing up the Siamese network with your question about multiple-input, multiple-output model training. Please do not confuse the question from your other post with the question in this post; they are two totally different scenarios.

In a Siamese network, a base network is used (this network is shared by the two inputs, so they have the same weights and structure), and it is applied to each input to get the two vector outputs.

Now for the multiple-input, multiple-output case: the multiple inputs are first vectorised and passed through the hidden layers, and the multiple outputs are then combined. I explained this part in your post where you asked about multiple inputs and outputs. You would understand this if you have taken the DLS specialisation, where it is explained that multiple inputs are vectorised into a matrix to compute the multiple outputs. In back propagation, however, the vectorisation differs: as the gradient is passed back through a hidden layer, the transpose of that layer's input is used to compute the weight gradients.

Please have a look at the images posted there, which show how forward and backward propagation work with multiple inputs.
Notice that in the backward propagation, g1 prime is the derivative of whichever activation function you use for the hidden layer.
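For the multiple-input case described above, here is a rough NumPy sketch of one forward and backward step through a single hidden layer; the shapes and the choice of ReLU are assumptions for illustration only:

```python
import numpy as np

# Forward pass: n_x input features, m examples, n_h hidden units.
n_x, n_h, m = 3, 4, 5
X  = np.random.randn(n_x, m)       # the multiple inputs, stacked as columns
W1 = np.random.randn(n_h, n_x)
b1 = np.zeros((n_h, 1))

Z1 = W1 @ X + b1                   # vectorised over all inputs at once
A1 = np.maximum(0, Z1)             # g1 = ReLU (illustrative choice)

# Backward pass: dA1 is the gradient arriving from the layers above.
dA1 = np.random.randn(n_h, m)      # placeholder upstream gradient
dZ1 = dA1 * (Z1 > 0)               # g1 prime: derivative of the hidden activation
dW1 = (dZ1 @ X.T) / m              # the transpose of the layer's input appears here
db1 = np.sum(dZ1, axis=1, keepdims=True) / m
```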

Regards
DP

Hello ainewbie,

Read this thread for a detailed explanation of multiple inputs and multiple outputs in a neural network.

If you still have doubts, feel free to ask.

Regards
DP