Hi,

Some values are greater than 1 after the broadcasting:

box_scores = box_confidence*box_class_probs

How can that be? Shouldn't all the values be smaller than 1?
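For reference, here is a minimal sketch of the broadcasting involved, with NumPy standing in for TensorFlow (the shapes are the ones from the YOLO exercise):

```python
import numpy as np

# Confidence has a trailing axis of size 1, class probs a trailing axis of 80.
box_confidence = np.random.rand(19, 19, 5, 1)
box_class_probs = np.random.rand(19, 19, 5, 80)

# Broadcasting stretches the size-1 axis to 80, giving one score per class.
box_scores = box_confidence * box_class_probs
print(box_scores.shape)  # (19, 19, 5, 80)

# With genuine probabilities in [0, 1], every product also stays in [0, 1].
print(box_scores.max() <= 1.0)  # True
```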

At that part of the exercise, are you working with real predictions (confidence) or randomly generated numbers?

Randomly generated numbers.

Take a close look at the distribution parameters. That answers the literal question about the values: they are simply the output of that call to the generator. It doesn't explain why the course developers chose those parameters, though, because your observation is correct: for real data the scores should be probabilities, so 0 <= score <= 1. The most you can take away from the randomly generated values is that the shapes are correct.
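To make the point concrete, here is a small sketch of what those distribution parameters produce, using NumPy in place of `tf.random.normal` with the same mean and stddev as the test cell:

```python
import numpy as np

rng = np.random.default_rng(10)

# Mirror the test cell's distribution: Gaussian with mean 1 and stddev 4.
samples = rng.normal(loc=1, scale=4, size=(19, 19, 5, 1))

# Most draws fall outside [0, 1] -- these are clearly not probabilities.
outside = np.mean((samples < 0) | (samples > 1))
print(f"fraction outside [0, 1]: {outside:.2f}")
```

With mean 1 and stddev 4, only about 10% of the probability mass lies in [0, 1], so values greater than 1 are entirely expected from this generator.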

Interesting. Here’s the code in the test cell that generates the input values:

```
tf.random.set_seed(10)
box_confidence = tf.random.normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random.normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random.normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
```

It’s just a test case, and nothing in your logic should depend on the confidence and class probabilities being between 0 and 1, but it is kind of zany that they chose a Gaussian distribution with \mu = 1 and \sigma = 4 for those inputs. One can only conjecture that they weren’t thinking too hard about the meaning of those variables at the time… And since they set an explicit seed on each of those *tf.random.normal* calls, it doesn’t seem like the initial *tf.random.set_seed(10)* adds anything. Hmmmm.

*Zany* is one word for it. Maybe not exactly the one I was using privately, but OK.

The test cases are an area of continuous improvement.