Question about the contrastive loss function?

from tensorflow.keras import backend as K

def contrastive_loss_with_margin(margin):
    def contrastive_loss(y_true, y_pred):
        '''Contrastive loss from Hadsell-et-al.'06
        http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
        '''
        # Penalize distance for similar pairs, margin shortfall for dissimilar ones
        square_pred = K.square(y_pred)
        margin_square = K.square(K.maximum(margin - y_pred, 0))
        return K.mean(y_true * square_pred + (1 - y_true) * margin_square)
    return contrastive_loss

I want to know: when y_true and y_pred are passed to the loss function, does y_true refer to one sample or to one whole batch? I ask because in the contrastive loss function the teacher uses K.mean() to produce the final loss. In my mind, y_true has shape (1,), i.e. the similarity between the two input pictures, so the mean() confuses me. Is y_true's shape actually (None, 1), where None is the batch size?
Thanks!

y_true in the loss function is the batch ground-truth label vector. It does not contain information about the predicted label(s). It's not clear to me what "two pictures" means to you here.

So y_true has the batch size in its shape? When we get to the loss function, y_true and y_pred carry the whole batch's information rather than a single sample's?

Correct. See for example this excerpt…

Standalone usage of losses

A loss is a callable with arguments loss_fn(y_true, y_pred, sample_weight=None):

  • y_true : Ground truth values, of shape (batch_size, d0, ... dN) . For sparse loss functions, such as sparse categorical crossentropy, the shape should be (batch_size, d0, ... dN-1)

from this Keras doc page. Keras API reference / Losses
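To make the batch behavior concrete, here is a minimal sketch (assuming TensorFlow 2.x and its bundled Keras backend) that calls the contrastive loss standalone on a batch of 4 pairs. Both y_true and y_pred have shape (batch_size, 1), and K.mean() reduces everything to a single scalar for the batch. The example values are made up for illustration.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def contrastive_loss_with_margin(margin):
    def contrastive_loss(y_true, y_pred):
        # y_true / y_pred arrive as whole-batch tensors, not single samples
        square_pred = K.square(y_pred)
        margin_square = K.square(K.maximum(margin - y_pred, 0))
        # K.mean averages the per-pair terms over the entire batch
        return K.mean(y_true * square_pred + (1 - y_true) * margin_square)
    return contrastive_loss

loss_fn = contrastive_loss_with_margin(margin=1.0)

# Batch of 4 pairs: shape (batch_size, 1) for both tensors
y_true = tf.constant([[1.0], [0.0], [1.0], [0.0]])  # 1 = similar pair
y_pred = tf.constant([[0.2], [0.9], [0.1], [1.5]])  # predicted distances

loss = loss_fn(y_true, y_pred)
print(loss.shape)          # () -- one scalar for the whole batch
print(float(loss))         # mean of the four per-pair terms
```

Per pair the terms are 0.2² = 0.04, max(1 − 0.9, 0)² = 0.01, 0.1² = 0.01, and max(1 − 1.5, 0)² = 0, so the loss is their mean, 0.015. This is why the teacher's code needs K.mean(): without it the loss would be a per-pair vector, not the scalar Keras expects.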