AssertionError                            Traceback (most recent call last)
in
     17     print("\033[92mAll test passed")
     18
---> 19 compute_cost_test(compute_cost, new_y_train)
in compute_cost_test(target, Y)
     13     print(result)
     14     assert(type(result) == EagerTensor), "Use the TensorFlow API"
---> 15     assert (np.abs(result - (0.25361037 + 0.5566767) / 2.0) < 1e-7), "Test does not match. Did you get the mean of your cost functions?"
     16
     17     print("\033[92mAll test passed")
AssertionError: Test does not match. Did you get the mean of your cost functions?
Thanks! Finally, it's solved.
But I don't understand this part of the documentation: "Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution."
What is a "logits tensor", and what does "probability distribution" mean here?
The Dense layer has a linear activation when no activation is specified, i.e., its output is wx + b. To get the output of a dense unit as a probability, the activation should be sigmoid.
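A minimal NumPy sketch of the point above, with hypothetical weights: without an activation, a dense unit returns the raw linear value wx + b (any real number); applying a sigmoid turns that logit into a probability in (0, 1).

```python
import numpy as np

# Hypothetical weights, bias, and input for a single dense unit.
w = np.array([0.5, -1.0])
b = 0.2
x = np.array([1.0, 2.0])

logit = np.dot(w, x) + b          # linear activation: wx + b, unbounded
prob = 1 / (1 + np.exp(-logit))   # sigmoid squashes the logit into (0, 1)
print(logit, prob)
```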
Let L = logit(p), i.e., the predicted outcome, and let p = probability that the output = 1. Then:

L = \ln\left(\frac{p}{1-p}\right)
\implies e^L = \frac{p}{1-p} (after exponentiating both sides)
\implies (1 - p)\, e^L = p
\implies e^L - p\, e^L = p
\implies e^L = (e^L + 1)\, p
\implies p = \frac{e^L}{e^L + 1}
\implies p = \frac{1}{1 + e^{-L}} (after dividing numerator and denominator by e^L)
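A quick numerical check of the derivation above: taking the logit of a probability and then applying the final expression p = 1/(1 + e^{-L}) (the sigmoid) recovers the original probability.

```python
import numpy as np

# A few sample probabilities to round-trip through logit -> sigmoid.
p = np.array([0.1, 0.5, 0.9])
L = np.log(p / (1 - p))            # L = logit(p) = ln(p / (1 - p))
p_back = 1 / (1 + np.exp(-L))      # p = 1 / (1 + e^{-L})
print(np.allclose(p, p_back))
```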
y_pred (predicted value): This is the model's prediction, i.e., a single floating-point value which either represents a logit (i.e., a value in [-inf, inf] when from_logits=True) or a probability (i.e., a value in [0., 1.] when from_logits=False).
The network can output either a value in the range [-inf, inf] or in [0., 1.] (depending on the activation used, as @balaji.ambresh correctly shows above). The loss function needs to know which it is in order to properly interpret the forward-prop outputs. from_logits is used to keep the network and the loss function in sync.
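To make this concrete, here is a NumPy sketch (a stand-in for tf.keras.losses.BinaryCrossentropy, with made-up labels and logits) showing that computing the loss directly from logits is equivalent to applying the sigmoid first and computing it from probabilities; from_logits just tells the loss which of the two it is receiving.

```python
import numpy as np

def bce_from_probs(y_true, p):
    # Binary cross-entropy when y_pred is already a probability in (0, 1),
    # i.e., the from_logits=False interpretation.
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def bce_from_logits(y_true, L):
    # Same loss computed directly on logits, i.e., the from_logits=True
    # interpretation, in a numerically stable form:
    # log(1 + e^L) - y*L = max(L, 0) - y*L + log(1 + e^-|L|)
    return np.mean(np.maximum(L, 0) - y_true * L + np.log1p(np.exp(-np.abs(L))))

y_true = np.array([0., 1., 1.])
logits = np.array([-1.2, 0.3, 2.5])   # raw wx + b outputs in [-inf, inf]
probs = 1 / (1 + np.exp(-logits))     # sigmoid maps them into [0., 1.]

print(np.isclose(bce_from_logits(y_true, logits),
                 bce_from_probs(y_true, probs)))
```

Both paths give the same loss; the from_logits variant is preferred in practice because it avoids computing log of a sigmoid, which can underflow.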