Questions - Deep Learning

  1. Suppose we have two neural networks: the weights of one (Neural Network 1) have already converged, while the other (Neural Network 2) is untrained. We now use both networks together, and the final output of the model is the average of the two networks' outputs. The output value lies in the range 0 to 1. (A minimal sketch of this setup follows the two sub-questions below.)
    a) Will there be a change in the weights of Neural Network 1 while training the model with an L1 loss?
    b) Will there be a change in the weights of Neural Network 1 while training the model with an L2 loss?
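  A minimal PyTorch-style sketch of this setup, to make the question concrete. The network architectures, dummy data, and the helper name `ensemble_output` are illustrative assumptions, not part of the original question.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Neural Network 1: stands in for the already-converged network.
net1 = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
# Neural Network 2: untrained (randomly initialized).
net2 = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

x = torch.randn(16, 4)   # dummy input batch
y = torch.rand(16, 1)    # targets in the range [0, 1]

def ensemble_output(x):
    # Final output is the average of the two networks' outputs.
    return 0.5 * (net1(x) + net2(x))

for loss_fn, name in [(nn.L1Loss(), "L1"), (nn.MSELoss(), "L2")]:
    net1.zero_grad()
    net2.zero_grad()
    loss = loss_fn(ensemble_output(x), y)
    loss.backward()
    # Inspect whether any gradient reaches Neural Network 1's weights.
    grad_norm = sum(p.grad.abs().sum() for p in net1.parameters())
    print(f"{name} loss: total |grad| on net1 = {grad_norm.item():.4f}")
```

  Printing the gradient norms on Neural Network 1's parameters under each loss is one way to check parts (a) and (b) empirically before reasoning about them analytically.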

  2. Does adding an extra term such as ReLU(-x) to the loss function help in overcoming the dying ReLU problem?
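  To make the question concrete, here is a minimal PyTorch-style sketch of adding a ReLU(-x) penalty on a hidden layer's pre-activations to the training loss. The model sizes, the penalty coefficient `lam`, and the dummy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

fc1 = nn.Linear(4, 8)
fc2 = nn.Linear(8, 1)
lam = 0.01               # assumed penalty coefficient

x = torch.randn(16, 4)   # dummy input batch
y = torch.rand(16, 1)    # dummy targets

z = fc1(x)               # pre-activations of the hidden layer
h = torch.relu(z)        # ReLU activations (zero wherever z < 0)
out = torch.sigmoid(fc2(h))

# Task loss plus the extra ReLU(-z) term: the penalty is zero where z >= 0
# and grows linearly as z becomes more negative, so it supplies a non-zero
# gradient even for units whose ordinary ReLU output (and gradient) is zero.
task_loss = nn.functional.mse_loss(out, y)
penalty = torch.relu(-z).mean()
loss = task_loss + lam * penalty
loss.backward()

print("gradient norm on fc1.weight:", fc1.weight.grad.norm().item())
```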