Week 1: about the multi-output loss?

```
62/62 [==============================] - 1s 6ms/step - loss: 275.5240 - y1_output_loss: 124.0530 - y2_output_loss: 151.4709 - y1_output_root_mean_squared_error: 11.1379 - y2_output_root_mean_squared_error: 12.3074 - val_loss: 30.0205 - val_y1_output_loss: 16.0726 - val_y2_output_loss: 13.9479 - val_y1_output_root_mean_squared_error: 4.0091 - val_y2_output_root_mean_squared_error: 3.7347
```

I see in the training log that the total loss is the sum of y1_output_loss and y2_output_loss.

  1. What if I want the total loss to put more weight on y1_output_loss? (See the first sketch after this list.)
  2. Also, I want to know how the optimizer works when we have multiple outputs. Does it calculate the gradient based on the summed loss and then update the weights and biases? (See the second sketch after this list.)
  3. Once training is done, are we getting the best model parameters for task 1 (y1 prediction) and task 2 (y2 prediction)? The two tasks share some common parameters in the upper layers, while the y2 prediction task also has parameters of its own to train.

For the first two, I think the post on Gradients of Multi output models probably gives some insight. For the third, probably not: the shared layers receive a compromise gradient from both tasks, so if you train the tasks separately you might get better per-task performance. I think that's logical. A sketch of such a separate baseline follows.