Proposed Naming Change

For the function below, consider using `loss` instead of `cost`, to express that the per-iteration values accumulated in the loop contribute to the loss.

# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: model

def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):

...
    # Optimization loop
    for t in range(num_iterations): # Loop over the number of iterations
        cost = 0 # Is `loss` a better name?

    ...
            # Add the cost using the i'th training label's one hot representation and "A" (the output of the softmax)
            cost += ... # Is `loss` a better name?
            ### END CODE HERE ###
...
    return pred, W, b

@tudor38

In iterative training, the optimizer aims to reduce the overall "cost" by evaluating individual losses and updating parameters: the model computes the loss for each prediction, aggregates those losses into the cost (the overall error), and updates the parameters to minimize that cost. So, as I understand it, the developer's choice of the term cost here is the right one.


Yes, the way Professor Ng uses those terms is that loss means the values of the loss function on the individual inputs in the batch, so loss is typically a vector quantity with one element per input. Then the cost is a scalar value that is the average of the loss values over the batch. Gradient descent minimizes the cost.

But you will see cases in which people may be a bit less precise and use the terms loss and cost essentially interchangeably.
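The distinction above can be sketched numerically. This is an illustrative example (not the course's graded code, and the logits and labels are made up): the per-example loss values form a vector with one element per input, and the cost is the scalar average of that vector over the batch.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with the usual max-subtraction for numerical stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical mini-batch: 4 examples, 3 classes
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3],
                   [0.1, 0.2, 3.0],
                   [1.0, 1.0, 1.0]])
labels = np.array([0, 1, 2, 0])   # true class indices
Y_one_hot = np.eye(3)[labels]     # one-hot targets, shape (4, 3)

A = softmax(logits)                               # softmax output, shape (4, 3)
loss = -np.sum(Y_one_hot * np.log(A), axis=1)     # loss: vector, one value per example
cost = loss.mean()                                # cost: scalar average over the batch

print(loss.shape)  # (4,)
print(cost)
```

Gradient descent then minimizes the scalar `cost`, which is equivalent to minimizing the average of the per-example losses.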


@Deepti_Prasad and @paulinpaloalto, thank you both for the clarification.
