Course 4, Week 2, programming assignment: Transfer Learning with MobileNet

# UNQ_C2
# GRADED FUNCTION
def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    ''' Define a tf.keras model for binary classification built on the MobileNetV2 model
    Arguments:
        image_shape -- Image width and height
        data_augmentation -- data augmentation function
    Returns:
        tf.keras.Model
    '''
    
    
    input_shape = image_shape + (3,)
    
    ### START CODE HERE
    
   # mentor edit: code removed
   
    ### END CODE HERE
    
    model = tf.keras.Model(inputs, outputs)
    
    return model

In that function, why didn't we use the sigmoid activation function for the last dense layer?

Hello @mohsen_12,

Because we use `loss=tf.keras.losses.BinaryCrossentropy(from_logits=True)` three cells after UNQ_C2. With that setting, the sigmoid is applied inside the loss function, so the model's last dense layer should output raw logits.
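Here is a minimal sketch (not part of the assignment; the logits and labels are made up) showing that passing raw logits with `from_logits=True` gives the same loss as applying the sigmoid yourself and using `from_logits=False`:

```python
import tensorflow as tf

# Example raw model outputs (logits) and binary labels -- assumed values
logits = tf.constant([[2.0], [-1.0]])
labels = tf.constant([[1.0], [0.0]])

# Sigmoid applied internally by the loss
loss_from_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)(labels, logits)

# Sigmoid applied explicitly, loss expects probabilities
loss_from_probs = tf.keras.losses.BinaryCrossentropy(from_logits=False)(labels, tf.sigmoid(logits))

print(float(loss_from_logits), float(loss_from_probs))  # the two values match
```

The `from_logits=True` path is also more numerically stable, since TensorFlow can fuse the sigmoid and the log inside the loss.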

If you want to know why we do it this way, Andrew explains it in this video. Although the video covers the softmax case, the rationale is the same for sigmoid.
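One practical consequence worth noting: since the model outputs logits rather than probabilities, you apply the sigmoid yourself at prediction time. A small sketch, using made-up logit values rather than real model outputs:

```python
import tensorflow as tf

# Suppose these are raw outputs of the trained model (logits) -- assumed values
logits = tf.constant([[3.2], [-0.7]])

# Convert logits to probabilities for interpretation
probs = tf.sigmoid(logits)

# Class prediction: logit > 0 is equivalent to probability > 0.5
preds = tf.cast(logits > 0, tf.int32)
```

Thresholding the logit at 0 avoids computing the sigmoid at all when you only need the class label.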

Cheers,
Raymond

Please do not post your code on the forum. That is not allowed by the Code of Conduct.