Hello,

When computing the cost in TensorFlow we use `tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true = …, y_pred = …, from_logits=True))`, and it is mentioned that this is equivalent to

`reduce_mean(max(logits, 0) - logits * labels + log(1 + exp(-abs(logits))), axis=-1)`

But in the notes we learned that binary cross-entropy is

`-sum(y_actual * log(y_pred) + (1 - y_actual) * log(1 - y_pred))`

Why is this different form of cross-entropy used, and what is the significance of this cost function?
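For context, the two forms are the same loss algebraically: substituting `y_pred = sigmoid(logits)` into the textbook formula and simplifying gives the `max(logits, 0) - logits * labels + log(1 + exp(-abs(logits)))` expression, which avoids overflow for large logits. A minimal NumPy sketch (the function names `bce_naive` and `bce_stable` are my own, and TensorFlow itself is not required) comparing the two:

```python
import numpy as np

def bce_naive(y, z):
    # Textbook form: squash logits z through sigmoid, then apply
    # -[y*log(p) + (1-y)*log(1-p)]. Overflows for large |z|.
    p = 1.0 / (1.0 + np.exp(-z))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def bce_stable(y, z):
    # Numerically stable form working directly on logits,
    # as used by from_logits=True.
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

y = np.array([0.0, 1.0, 1.0, 0.0])
z = np.array([-2.0, 3.0, -0.5, 1.5])
print(np.allclose(bce_naive(y, z), bce_stable(y, z)))  # True

# For an extreme logit the naive form blows up (exp overflows),
# while the stable form returns the correct finite loss.
with np.errstate(over="ignore", divide="ignore"):
    print(bce_naive(1.0, -800.0))   # inf
print(bce_stable(1.0, -800.0))      # 800.0
```

So the stable form is preferred purely for floating-point reasons; it is not a different loss.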