W1A2 - happyModel() missing input shape?

I’m getting errors with the test of happyModel(), in particular:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-69-f33284fd82fe> in <module>
      1 happy_model = happyModel()
      2 # Print a summary for each layer
----> 3 for layer in summary(happy_model):
      4     print(layer)
      5 

~/work/release/W1A2/test_utils.py in summary(model)
     34     result = []
     35     for layer in model.layers:
---> 36         descriptors = [layer.__class__.__name__, layer.output_shape, layer.count_params()]
     37         if (type(layer) == Conv2D):
     38             descriptors.append(layer.padding)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in output_shape(self)
   2177     """
   2178     if not self._inbound_nodes:
-> 2179       raise AttributeError('The layer has never been called '
   2180                            'and thus has no defined output shape.')
   2181     all_output_shapes = set(

AttributeError: The layer has never been called and thus has no defined output shape.

Other errors pointed to layers that weren't fully defined, such as the ZeroPadding2D layer lacking an input shape. The TF documentation doesn't seem to state any requirement related to the input shape.

How does one debug an issue with the layers not being properly defined?

The base code for happyModel() is below. Other questions appear to have an input_shape defined prior to the model = line, so is something missing here?

# GRADED FUNCTION: happyModel

def happyModel():
    """
    Implements the forward propagation for the binary classification model:
    ZEROPAD2D -> CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> FLATTEN -> DENSE
    
    Note that for simplicity and grading purposes, you'll hard-code all the values
    such as the stride and kernel (filter) sizes. 
    Normally, functions should take these values as function parameters.
    
    Arguments:
    None

    Returns:
    model -- TF Keras model (object containing the information for the entire training process) 
    """
    model = tf.keras.Sequential([
            ## ZeroPadding2D with padding 3, input shape of 64 x 64 x 3
          
            ## Conv2D with 32 7x7 filters and stride of 1
            
            ## BatchNormalization for axis 3
            
            ## ReLU
            
            ## Max Pooling 2D with default parameters
            
            ## Flatten layer
            
            ## Dense layer with 1 unit for output & 'sigmoid' activation
            
            # YOUR CODE STARTS HERE
            
            
            # YOUR CODE ENDS HERE
        ])
    
    return model

Yeah, input_shape was entirely the issue. I defined input_shape=(64, 64, 3) on the ZeroPadding2D layer and it just started working. I'd suggest highlighting the last paragraph of the instructions that indicates the input shape must be provided, and clarifying that it must be provided to the first layer only (not to each layer in the sequence).


Glad you fixed it.
Thanks for the suggestion.

For any layer other than the first, the input shape is defined by the output shape of the previous layer, right? Note that at the point we define the model, we are not actually feeding it data, just defining it. So we have to tell the first layer what the shape of the input will be. But after that, it is determined by the parameters to the previous layers.
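To illustrate (this is a minimal sketch, not the graded solution): if the first layer of a Sequential model gets an input_shape, every subsequent layer's output shape is inferred before any data is fed in, so layer.output_shape works immediately.

```python
import tensorflow as tf

# Only the FIRST layer needs input_shape; later layers infer their
# input from the previous layer's output shape.
model = tf.keras.Sequential([
    tf.keras.layers.ZeroPadding2D(padding=3, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, (7, 7), strides=(1, 1)),
])

# Because input_shape was given, every layer has a defined output_shape
# even though the model has never been called on data.
for layer in model.layers:
    print(layer.__class__.__name__, layer.output_shape)
# ZeroPadding2D pads 3 on each side: 64+6 = 70 -> (None, 70, 70, 3)
# Conv2D 7x7, stride 1, 'valid':   70-7+1 = 64 -> (None, 64, 64, 32)
```

Without the input_shape argument, iterating over model.layers and reading output_shape raises exactly the AttributeError from the original post.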

I had the same issue: adding input_shape=(64, 64, 3) to the first layer fixed it.