Passing intermediate layers output or a dynamic threshold to the loss function?

Hello,
I have just started course one and finished week two on custom loss functions.
Can I get some insight about the implementations below?

I want to have a threshold in the custom loss function that is dynamic. For example, let's say I want this threshold to increase or decrease with the number of epochs. Is it possible?

The other thing I would like to do is pass an intermediate layer's output to the loss function. Is that possible?

Thanks so much for sharing your thoughts and ideas.
Sara

Hey @Sara_Shamekh,
Welcome to the community.

I am a little unsure whether you are referring to Week 2 or Week 3, since in Week 2 we covered up to linear regression, and there is no such threshold used in the computation of cost there. In Week 3, we discussed the logistic regression model, and I am assuming you are referring to the 0.5 threshold applied to the sigmoid output. Nonetheless, a dynamic threshold can easily be arranged: pass an additional parameter indicating the threshold to the compute_cost function, and adjust it based on the number of epochs inside the gradient_descent function. I hope this resolves your first query.
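In case you are working with a Keras-style training loop, here is a minimal sketch of the same idea: keep the threshold in a tf.Variable that both the loss and an epoch callback can see. The Huber-style loss and the linear schedule below are purely illustrative choices, not something from the course.

```python
import tensorflow as tf

# Shared, non-trainable threshold that the loss reads and a callback updates.
threshold = tf.Variable(1.0, trainable=False, dtype=tf.float32)

def huber_like_loss(y_true, y_pred):
    # Standard Huber-style loss whose "delta" is the shared threshold variable.
    error = y_true - y_pred
    is_small = tf.abs(error) <= threshold
    small_loss = 0.5 * tf.square(error)
    big_loss = threshold * (tf.abs(error) - 0.5 * threshold)
    return tf.where(is_small, small_loss, big_loss)

class ThresholdScheduler(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # Example schedule: shrink the threshold a little each epoch (floor at 0.1).
        threshold.assign(tf.maximum(0.1, 1.0 - 0.05 * epoch))

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer='adam', loss=huber_like_loss)
# model.fit(x, y, epochs=20, callbacks=[ThresholdScheduler()])  # x, y: your training data
```

You could of course replace the schedule with anything you like (increasing, step-wise, etc.); the key point is just that the loss reads a variable which is updated per epoch.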

This can be easily arranged as well. Here, I am assuming you are referring to neural networks. When you train a neural network using PyTorch or TensorFlow, both frameworks allow you to extract the outputs of intermediate layers, which you can then pass to the loss function and compute the loss from. In fact, this concept is used in one of the very popular GANs, whose name I can't recall at the moment. I hope this resolves your second query.
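As a rough sketch of how this can look in TensorFlow/Keras (the layer sizes, the extra penalty, and its weight below are just placeholders I made up): wrap the intermediate activations in a small pass-through layer that calls self.add_loss, so the extra term is added to the training loss alongside the usual loss on the final output.

```python
import tensorflow as tf

class FeaturePenalty(tf.keras.layers.Layer):
    """Pass-through layer that adds an extra loss term based on intermediate activations."""
    def call(self, features):
        # Illustrative extra term: a small L2 penalty on the hidden activations.
        self.add_loss(1e-3 * tf.reduce_mean(tf.square(features)))
        return features

inputs = tf.keras.Input(shape=(32,))
hidden = tf.keras.layers.Dense(16, activation='relu')(inputs)
hidden = FeaturePenalty()(hidden)        # extra loss term computed on this layer's output
outputs = tf.keras.layers.Dense(1)(hidden)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')  # main loss is still on the final output
# model.fit(x, y, epochs=10)  # x, y: your training data
```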

Cheers,
Elemento

Hello @Sara_Shamekh

If you pass an intermediate layer's output instead of the final layer's output to calculate the loss, what would be the target value that you pass to the loss function to calculate the error?

Hey @shanup,
In GANs, the target is usually the representation of the corresponding real images from the same layer. In other words, the representations of the fake images and the representations of the real images taken from some common layer are pushed to have the same distribution.
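As a toy illustration of that idea (this is my own sketch, not the exact loss of any particular GAN), a "feature matching" style term could simply penalise the distance between the average intermediate features of real and fake images:

```python
import tensorflow as tf

def feature_matching_loss(real_features, fake_features):
    # real_features / fake_features: activations from the same intermediate layer,
    # each of shape (batch, feature_dim).
    real_mean = tf.reduce_mean(real_features, axis=0)
    fake_mean = tf.reduce_mean(fake_features, axis=0)
    return tf.reduce_mean(tf.square(real_mean - fake_mean))
```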

If we think about a standard neural network solving a standard classification or regression task, then I guess the same concept could be borrowed from GANs, though it's just a hunch. For instance, pushing the feature representations from certain layers towards, say, a Gaussian distribution could help in some applications, though I am not sure which. But of course, this is more of an open research area, so let's see if work along these lines appears in the future.

Cheers,
Elemento

Hey @Elemento

I wasn't really sure that this could be done for a neural network solving a standard classification or regression task, hence the question. But thanks for the example on GANs.