Course 4, Week 2, Assignment 2 issues

I am confused about Exercise 2 - alpaca_model

Are we supposed to fill in all the lines of code between "START CODE HERE" and "END CODE HERE", or just the steps 1, 2, and 3 mentioned in the instructions in part 3.2? I am not sure how to go about this. Are we supposed to read the Keras documentation and figure it out? Please help.

    ### START CODE HERE

    base_model_path = "imagenet_base_model/without_top_mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_160_no_top.h5"

    base_model = ...

    # freeze the base model by making it non-trainable
    base_model.trainable = ...

    # create the input layer (same as the MobileNetV2 input size)
    inputs = tf.keras.Input(shape=...)

    # apply data augmentation to the inputs
    x = data_augmentation(...)

    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(...)

    # set training to False to avoid keeping track of statistics in the batch norm layers
    x = base_model(...)

    # add the new binary classification layers
    # use global average pooling to summarize the info in each channel
    x = ...()(x)
    # include dropout with probability 0.2 to avoid overfitting
    x = ...(x)

    # use a prediction layer with one neuron (a binary classifier only needs one)
    outputs = ...

    ### END CODE HERE

Generally you don’t need the Keras documentation (and it’s pretty confusing in any case). The lab instructions should be sufficient.

Your task is to write code fragments that replace the placeholders (the `None` values in the notebook, shown as `...` above) between the `### START CODE HERE` and `### END CODE HERE` tags.
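
For example, the freeze step is just a boolean assignment. This one is safe to show because its own comment spells it out, and `trainable` is a standard Keras attribute:

    # freeze the base model by making it non-trainable
    base_model.trainable = False

The other placeholders follow the same idea: each comment tells you which layer or call belongs on that line.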

It’s no different than what you’ve done in this specialization previously.

Tom is right that the instructions should be sufficient, if you have the patience to read them carefully. But it wouldn't hurt to also make sure you are clear on how the Keras Sequential and Functional APIs work for defining layers. The Keras documentation may not be the most time-efficient way to do that, but there's a thread on the forums that gives a very nice explanation and doesn't take all day to read.
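
In case it helps, here is a minimal sketch of the Functional API calling pattern this exercise relies on. The layer choices and input shape below are illustrative only, not the assignment solution:

    import tensorflow as tf

    # Functional API: build an input tensor, then call each layer on the
    # previous tensor. Note the two sets of parentheses: the first constructs
    # the layer object, the second applies it to a tensor.
    inputs = tf.keras.Input(shape=(160, 160, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(inputs)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = tf.keras.layers.Dense(1)(x)  # one logit is enough for a binary classifier
    model = tf.keras.Model(inputs=inputs, outputs=outputs)

The same calling convention applies when the "layer" is a whole pretrained model: you can call `base_model(x, training=False)` on a tensor, exactly as the template's comments describe. A Sequential model, by contrast, is just an ordered list of layers with no explicit input and output tensors, which is why the Functional style is the one you need here.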