Getting the perfect inputs for the highest activation of the output layer

Hello everyone,

Let’s assume we built a neural network for figuring out whether a coffee roast will be good or not based on the temperature and duration of the roasting process, like Prof. Ng did in this video: https://coursera.org/share/f4d7985a5e577fdcc14e0331a7516c9b. In that video, the activation function is the sigmoid (logistic) function, so the output value we get is between 0 and 1.

In the original video, we apply a binary threshold to the output value to decide whether the coffee roast is good or not. However, I am curious about what approach we should take if we want to find the temperature and duration that give the perfect coffee roast according to our neural network. In other words, how do we get the input values that result in an output value of 1? Do we need another round of gradient descent, this time over the inputs, using the network we already trained? A rough sketch of what I mean is below.
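To make the question concrete, here is a minimal sketch of the idea, assuming a small 2-3-1 network like the coffee roasting example and normalized inputs. The weights are random placeholders, not the real trained parameters, and the gradient is just a finite-difference approximation:

```python
import numpy as np

# Placeholder weights for a 2-3-1 network (2 inputs -> 3 hidden units -> 1 output);
# in practice, load the parameters learned when the network was trained.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), rng.normal(size=3)
W2, b2 = rng.normal(size=(3, 1)), rng.normal(size=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Forward pass for one (temperature, duration) input x of shape (2,)."""
    a1 = sigmoid(x @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)
    return a2[0]

def input_gradient(x, eps=1e-4):
    """Finite-difference gradient of the output with respect to the inputs."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (forward(xp) - forward(xm)) / (2 * eps)
    return g

# Gradient *ascent* on the inputs: the weights stay fixed, and
# (temperature, duration) is nudged in the direction that raises the output.
x = np.array([0.0, 0.0])          # starting guess, assuming normalized inputs
for _ in range(200):
    x += 0.5 * input_gradient(x)

print(f"candidate inputs: {x}, predicted quality: {forward(x):.4f}")
```

Is this the right way to think about it, or is there a simpler approach?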

What you’re describing doesn’t require an NN. You just need to do a statistical analysis on all of the examples where the output is True (1).
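For example, something along these lines would already show you where the good roasts sit. The numbers here are toy stand-ins for your actual training set:

```python
import numpy as np

# Toy stand-in for the training set: X holds (temperature, duration) pairs,
# Y holds the good/bad labels. Use your real data here.
X = np.array([[200.0, 13.9], [284.9, 12.0], [175.3, 15.7], [230.5, 12.5]])
Y = np.array([1, 0, 0, 1])

good = X[Y == 1]                              # only the good-roast examples
print("mean (temp, duration):", good.mean(axis=0))
print("range:", good.min(axis=0), "to", good.max(axis=0))
```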


Hello, @Nil_Zeren_Dogan, you might sample many points and find the ones that give you predictions equal to or very close to 1. Since those points are going to concentrate in a region, you might then focus the sampling there.
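A rough sketch of that idea is below. The `predict` function is only a stand-in for your trained network's forward pass (its sweet spot is made up), so swap in your own model:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(points):
    """Stand-in for the trained network's predictions; replace with your model.
    This fake version just peaks around a made-up sweet spot of (230, 13.5)."""
    t, d = points[:, 0], points[:, 1]
    return np.exp(-((t - 230) / 15) ** 2 - ((d - 13.5) / 1.5) ** 2)

# Round 1: sample coarsely over the whole plausible input range.
points = np.column_stack([rng.uniform(150, 300, 5000), rng.uniform(11, 16, 5000)])
scores = predict(points)

# Keep the points the model is most confident about and see where they sit.
best = points[scores > 0.9]
lo, hi = best.min(axis=0), best.max(axis=0)
print("high-score region (temp, duration):", lo, "to", hi)

# Round 2: resample densely inside that region and pick the best candidate.
fine = rng.uniform(lo, hi, size=(5000, 2))
top = fine[np.argmax(predict(fine))]
print("best candidate (temp, duration):", top)
```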


I understand. Your approach and @TMosh's make a lot more sense than what I was trying to do. I guess I got too focused on neural networks. Thanks for the help.
