# Week 2 Quiz - Activation Function

Hi, could someone give me some insight into the correct answer for this quiz question? My understanding is that this is a multi-task learning scenario: the output would be a vector containing a series of 0s and 1s indicating whether a road sign or a traffic light is present in the picture. What is the logic for reasoning about which activation function is the most appropriate? Thanks!

Additional background (though this and your other questions are probably covered in the lectures):

• sigmoid() gives you values between 0 and 1. It is often used for binary classification, since the values map easily to 0 (False) and 1 (True).
• softmax() generalizes sigmoid to multiple outputs and scales the results so that the sum of all of the outputs is 1. It is often used when there are multiple labels.
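To make the distinction concrete, here is a small numpy sketch (my own illustration, not from the lectures) applying both functions to the same hypothetical logits:

```python
import numpy as np

def sigmoid(z):
    # Squashes each value independently into (0, 1); outputs need not sum to 1.
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Exponentiates and normalizes so the outputs sum to exactly 1.
    e = np.exp(z - np.max(z))  # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, -1.0])  # hypothetical raw scores for 3 labels

print(sigmoid(logits))        # each value is independent
print(softmax(logits))        # values compete for probability mass
print(softmax(logits).sum())  # always 1.0
```

Note that the sigmoid outputs here sum to well over 1, while the softmax outputs are forced to sum to exactly 1.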

Hi TMosh, thank you so much for your reply. I understand that in this case we have multiple labels, but my doubt is: why does the sum of the probabilities of those labels have to be 1? My understanding is that it’s possible for multiple road signs and traffic signals to appear in the same image. For example, if an image contains both a stop sign and a traffic signal, the probabilistic outcome for each label should ideally be more than 50%.
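That intuition can be checked numerically. In this hypothetical two-label example (scores are made up for illustration), independent sigmoid outputs let both a stop sign and a traffic light score above 0.5, while softmax over the same logits forces the two probabilities to compete:

```python
import numpy as np

logits = np.array([2.0, 1.5])  # hypothetical scores: [stop sign, traffic light]

# Independent per-label sigmoid probabilities.
sig = 1.0 / (1.0 + np.exp(-logits))

# Softmax over the same scores: forced to sum to 1.
e = np.exp(logits - logits.max())
soft = e / e.sum()

print(sig)   # both entries > 0.5: both objects can be "detected"
print(soft)  # the traffic light drops below 0.5 despite being present
```

This is why multi-label problems typically use one sigmoid per output rather than a single softmax over all outputs.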

Things get complicated if you are detecting multiple objects. Softmax isn’t really necessary if you’re only trying to detect the output with the highest value (greatest probability).
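One way to see why softmax adds nothing in that case: softmax is a monotonic transformation, so the label with the highest raw score is also the label with the highest softmax probability. A quick sketch (again with made-up scores):

```python
import numpy as np

logits = np.array([0.3, 2.1, -0.7, 1.4])  # hypothetical raw scores

# Softmax preserves the ordering of the inputs.
e = np.exp(logits - logits.max())
soft = e / e.sum()

# The argmax is identical with or without applying softmax.
print(np.argmax(logits), np.argmax(soft))
```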

I see, seems like I misinterpreted this question. Thanks!