C4W2A2E2: Assertion Error

Hello there,

I’ve got the following AssertionError, yet I can’t figure out why my model is missing a layer.

AssertionError                            Traceback (most recent call last)
<ipython-input-83-0346cb4bf847> in <module>
     10                     ['Dense', (None, 1), 1281, 'linear']] #linear is the default activation
     11 
---> 12 comparator(summary(model2), alpaca_summary)
     13 
     14 for layer in summary(model2):

~/work/W2A2/test_utils.py in comparator(learner, instructor)
     14 def comparator(learner, instructor):
     15     if len(learner) != len(instructor):
---> 16         raise AssertionError(f"The number of layers in the model is incorrect. Expected: {len(instructor)} Found: {len(learner)}")
     17     for a, b in zip(learner, instructor):
     18         if tuple(a) != tuple(b):

AssertionError: The number of layers in the model is incorrect. Expected: 8 Found: 7

I’ve followed the notebook’s instructions (see the sketch after this list):

    # freeze the base model by making it non trainable

    # create the input layer (Same as the imageNetv2 input size)
    
    # apply data augmentation to the inputs

    # data preprocessing using the same weights the model was trained on

    # set training to False to avoid keeping track of statistics in the batch norm layer

    # add the new Binary classification layers
    # use global avg pooling to summarize the info in each channel

    # include dropout with probability of 0.2 to avoid overfitting

    # use a prediction layer with one neuron (as a binary classifier only needs one)
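
In pseudocode, the flow I think those comments describe is roughly this (just a sketch of my understanding, with placeholder names like preprocess_input and base_model standing in for whatever the notebook defines, not my literal code):

    # tfl = tf.keras.layers
    inputs = tf.keras.Input(shape=(160, 160, 3))
    x = data_augmentation(inputs)        # the Sequential augmenter
    x = preprocess_input(x)              # same preprocessing the model was trained with
    x = base_model(x, training=False)    # frozen base model
    x = tfl.GlobalAveragePooling2D()(x)
    x = tfl.Dropout(0.2)(x)
    outputs = tfl.Dense(1)(x)            # linear activation is the default
    model = tf.keras.Model(inputs, outputs)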

Anybody got a hint or an idea about what’s wrong?

Thank you. :)

What did you use for the final prediction layer?

Hi, I have the same problem. In my case I used a Dense layer with a single neuron and the ‘sigmoid’ activation function. :s

Hey TMosh,

thanks for engaging!
I used a Dense layer with a linear activation and one neuron.

Well, they print out what the layers are supposed to look like. There is logic to print out your actual layers, but it comes after the “comparator” call that is throwing in your case. You could add another cell after the test cell that throws, and then run that same code to print your layers:

for layer in summary(model2):
    print(layer)

Once you see that, it should be pretty obvious what the difference is.

Update: Actually that test cell is editable, so you could also just switch the order of the comparator logic with the print logic.
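
For example, something like this (just reordering what is already in the cell):

for layer in summary(model2):
    print(layer)

comparator(summary(model2), alpaca_summary)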

Hey Paul,

thanks for giving great advice, again and again! :)
Thanks to your guidance, I found out that my Sequential layer is missing.

I found a post of yours explaining how the data_augmenter() function works, yet I’m still lacking any real understanding of the topic.

In the step where data_augmenter() is called, it is not passed any arguments and returns a tf.keras.Sequential. If I try passing it the inputs, I get this TypeError:

TypeError: data_augmenter() takes 0 positional arguments but 1 was given

That is correct per the definition, yet as I understand it, the function should take the inputs tensor as an argument?

EDIT: OK, I solved it, yet I really don’t understand why. Paul, I would be glad if you would answer the private message I’ll send you.

Look at the function signature of alpaca_model. The “named parameter” being passed is called data_augmentation. And the default value of that parameter if none is actually passed is the function data_augmenter(). So data_augmentation is a “function reference” or just a function that is local to alpaca_model. You can call it. Your mistake is that you are calling data_augmenter directly: that is a global variable and may or may not be the actual augmentation function that gets passed in to alpaca_model, right?

If you call the right function, you will be able to pass it one tensor as an argument and it will return you one tensor as the answer.
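
Here is a tiny standalone illustration of that default-parameter pattern (toy names, nothing from the notebook):

def make_doubler():
    # a factory: returns a function, the way data_augmenter() returns a Sequential
    return lambda t: t * 2

def my_model(doubler=make_doubler()):
    # 'doubler' is a local function reference; you call it, not make_doubler
    return doubler(10)

print(my_model())   # prints 20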


Thank you for the quick response. I have to admit I was not aware of what a function reference is or how it works.

Actually maybe I should give a little more detail. Take a more careful look at the actual definition of the function data_augmenter(). It is defined to take no arguments, but it constructs and returns you a Keras Sequential object. That is a function that you can then call with an input.

So when they say

data_augmentation = data_augmenter()

in the parameter list, data_augmentation is then the Sequential object that was returned. That is a function which takes one tensor as an argument and returns one tensor as its output.

So data_augmenter is a function that returns you a function. Just like all the Layer functions in TF/Keras (e.g. tfl.Conv2D).
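
To make that concrete, here is a minimal sketch of an augmenter-style function. The specific augmentation layers are just examples (not necessarily what your notebook uses), and in older TF versions they live under tf.keras.layers.experimental.preprocessing:

import tensorflow as tf

def my_augmenter():
    # takes no arguments, returns a callable Sequential
    return tf.keras.Sequential([
        tf.keras.layers.RandomFlip('horizontal'),
        tf.keras.layers.RandomRotation(0.2),
    ])

augment = my_augmenter()            # augment is now itself a function...
batch = tf.zeros((1, 160, 160, 3))
print(augment(batch).shape)         # ...tensor in, tensor out: (1, 160, 160, 3)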


I am also stuck here. I understand I need to create a Sequential layer within the Functional object. However, if the Sequential object takes a tensor and returns a tensor, wouldn’t it just edit the Input tensor without adding any additional layers to the Functional object? If I add the Sequential object like any other layer, it does not appear in the Functional object, so it seems like there is something unique I need to do when adding Sequential objects as layers.

I’m not sure I understand the question. If you are talking about alpaca_model, there you are using the Functional API. You simply invoke the data augmentation function that you are passed as an input parameter. It takes an input tensor and gives you an output tensor.

Or are you asking about how to implement the data_augmenter itself?

My bad, I should have provided more details. Yes, I am looking at alpaca_model. The grader requires a Sequential layer after the input layer and I am unsure how to meet that requirement.

I understand that data_augmentation is set to the Sequential object returned by data_augmenter(); that object can be passed a tensor, and a tensor will be returned. If the input tensor is just being edited in place, then it makes sense that a Sequential layer would not be added to the model. Do you know how I can add the Sequential layer to the model? Is there something I need to do on top of passing the input tensor to the data_augmentation parameter?

For reference this is the required architecture:

[['InputLayer', [(None, 160, 160, 3)], 0],
['Sequential', (None, 160, 160, 3), 0],  # this is what I am missing
['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
['TensorFlowOpLayer', [(None, 160, 160, 3)], 0],
['Functional', (None, None, None, 1280), 2257984],
['GlobalAveragePooling2D', (None, 1280), 0],
['Dropout', (None, 1280), 0, 0.2],
['Dense', (None, 1), 1281, 'linear']]

There is nothing additional you need to do. data_augmentation is a function (which happens to be a Sequential object) that takes an input and provides an output. You just invoke that function with the appropriate input and output tensors. Note that it is a mistake to reference data_augmenter from within the scope of the alpaca_model function.
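
In other words, inside alpaca_model the call pattern is simply (sketch):

x = data_augmentation(inputs)    # call the parameter you were passed
# not data_augmenter(...): that global may not be the function passed in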

Gotcha, I made sure I was referencing data_augmentation and not data_augmenter so the argument can be properly passed.

I am still not sure what is happening. With the Functional API, I believe adding a layer involves passing tensors to layer objects. Once the tensor has flowed through every layer (I couldn’t help myself), keras.Model() is used on the final tensor to create the model. It seems like the tensor keeps track of the layers it has gone through, so when using the summary() method on the model it can display the layers. I did more digging, and it looks like Sequential and Layer objects act the same way: tensor in, tensor out.
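
As a sanity check of that mental model, here is a toy example (unrelated to the assignment) where a Sequential does show up as a single layer of a Functional model:

import tensorflow as tf

seq = tf.keras.Sequential([tf.keras.layers.Dense(4)])

inputs = tf.keras.Input(shape=(8,))
x = seq(inputs)                         # tensor in, tensor out
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.summary()                         # 'sequential' appears as one layer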

Do you know if there is any condition that would break this functionality, which could explain why I am not seeing the Sequential layer appear in the model summary even though I am running the input through data_augmentation?

No, I don’t know what would account for the behavior you are seeing. If the discussion here is not helping, maybe it’s time to use the “In Case of Emergency, Break Glass” method. Please check your DMs.

It turns out to be a simple bug in your code, as described in our DM conversation.

Hi, I am facing the same issue.
I am calling data_augmentation() as suggested.
Still, the layer shows up as: sequential_1 (Sequential) (None, None, 160, None) 0

It isn’t clear which “same issue” you’re referring to.
Please post a screen capture image that shows the output of the layers in your model.