it can get updated because both inputs share the same weights and the same structure.
Do not confuse concatenating the inputs with having two separate branches. A basic concept one needs to understand about a Siamese network is that the two inputs share similar characteristics. The picture posted by @Wendy shows two neural architectures with the same structure and the same weights; that is why a common base network is created, and that single base network is then applied separately to each input to produce the two outputs that are compared for similarity.
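To make the weight-sharing idea concrete, here is a minimal NumPy sketch (not from the course materials; all names like `base_network`, `embed_a`, `embed_b` are illustrative). One set of weights defines the base network, and the same function is applied to both inputs before comparing the embeddings:

```python
import numpy as np

# Hypothetical sketch: a "base network" is just one dense layer here,
# with a SINGLE copy of the weights W, b shared by both inputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # shared weights: only one copy exists
b = np.zeros(3)

def base_network(x):
    """The same weights process every input -- this is the Siamese idea."""
    return np.tanh(x @ W + b)

x1 = rng.standard_normal(4)       # first input
x2 = rng.standard_normal(4)       # second input

embed_a = base_network(x1)        # same function, same W and b
embed_b = base_network(x2)

# Similarity is computed on the two embeddings, e.g. Euclidean distance.
distance = np.linalg.norm(embed_a - embed_b)
```

Because there is only one `W` and one `b`, any gradient update changes how *both* inputs are embedded, which is exactly why the two branches in the picture stay identical during training.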
You may be confusing the Siamese network with a model that takes multiple inputs and produces multiple outputs. That is a totally different concept, where the branches have different weights: a weight matrix is computed first, then vectorised, and the model is trained on that. I am sharing a post link for this as well: Backpropagation in multi-input/multi-output neural networks - #4 by Deepti_Prasad
Reading that post should clarify the difference between a Siamese network and a multi-input network.
Regards
DP