Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning (DeepLearning.AI) - Week 3 Lab 1

The code up to this line is exactly the same as in the notebook:

import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import models

# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Normalize the pixel values
training_images = training_images / 255.0
test_images = test_images / 255.0

# Define the model
model = tf.keras.models.Sequential([

  # Add convolutions and max pooling
  tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2,2),

  # Add the same layers as before
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

# Print the model summary
model.summary()

# Use same settings
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
print(f'\nMODEL TRAINING:')
model.fit(training_images, training_labels, epochs=5)

# Evaluate on the test set
print(f'\nMODEL EVALUATION:')
test_loss = model.evaluate(test_images, test_labels)

print(test_labels[:100])

model(tf.keras.Input((28, 28, 1)))

depth = model.layers[0].output.shape

f, axarr = plt.subplots(3,4)

FIRST_IMAGE = 0
SECOND_IMAGE = 23
THIRD_IMAGE = 28
CONVOLUTION_NUMBER = 1

layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)


Hi @cesarcx

You need to ensure that the model has been called with some input data before you try to access the model’s layers’ outputs.

You can insert a dummy call to the model right after you define and compile it:

model(tf.keras.Input((28, 28, 1)))

Hope this helps! Feel free to ask if you need further assistance.
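As an illustration, here is a minimal sketch (a small hypothetical model, not the course notebook) of why the dummy call helps: calling a Sequential model once on a batch of input data builds it, so its weights and layer outputs are defined before you try to access them.

```python
import numpy as np
import tensorflow as tf

# Small hypothetical model, defined without calling it on any data yet.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(4, (3, 3), activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# One all-zeros image with an explicit batch dimension:
# (batch, height, width, channels)
dummy = np.zeros((1, 28, 28, 1), dtype='float32')
out = model(dummy)  # this call builds the model; weights now exist

print(model.built)   # True
print(out.shape)     # (1, 10)
```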


Dear @Alireza_Saei, thanks for your timely response! I had tried this by uncommenting the line model(tf.keras.Input((28, 28, 1))), but it raises another error.

This error occurs later, while running the for loop. Here is the code:

for x in range(0,4):
  f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(28, 28, 1))
  axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[0,x].grid(False)

  f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(28, 28, 1))
  axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[1,x].grid(False)

  f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(28, 28, 1))
  axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[2,x].grid(False)

The error is raised at line 58 of my file: f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(28, 28, 1))

File “C:\Users\cesar\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\function.py”, line 163, in _run_through_graph
output_tensors.append(tensor_dict[id(x)])

KeyError: ‘Exception encountered when calling Functional.call().\n\n\x1b[1m2452239277584\x1b[0m\n\nArguments received by Functional.call():\n • inputs=tf.Tensor(shape=(28, 28, 1), dtype=float32)\n • training=False\n • mask=None’

I have also tried .reshape(1, 28, 28, 1) when calling activation_model.predict(), but the error persists, unfortunately…

Hey,

The input tensor to the model must have a batch dimension. This means reshaping the image to (1, 28, 28, 1). Also, make sure the prediction loop is properly structured to handle the reshaped input and initialize the activation_model correctly.
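For example, a small numpy-only sketch (with a hypothetical all-zeros image in place of a real Fashion MNIST sample) showing how to add the batch dimension:

```python
import numpy as np

# Hypothetical single image shaped like one Fashion MNIST sample:
# (height, width, channels), with no batch dimension yet.
my_image = np.zeros((28, 28, 1), dtype='float32')

# Keras models expect a leading batch dimension:
# (batch, height, width, channels).
batched = my_image.reshape(1, 28, 28, 1)         # explicit reshape
also_batched = np.expand_dims(my_image, axis=0)  # equivalent, shape-agnostic

print(batched.shape)       # (1, 28, 28, 1)
print(also_batched.shape)  # (1, 28, 28, 1)
```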


Hi @Alireza_Saei! I’ve been working on this for a while without success… My shame.

I have also reshaped the image sets:

# Reshaping the images
training_images = training_images.reshape(60000, 28, 28, 1)
test_images = test_images.reshape(10000, 28, 28, 1)

My model was:

# Define the model
model = tf.keras.models.Sequential([

  # Adding the shape of the input
  tf.keras.Input((1, 28, 28, 1)),

  # Add convolutions and max pooling
  tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2, 2),

But it raises the error:
ValueError: Kernel shape must have the same length as input, but received kernel of shape (3, 3, 1, 32) and input of shape (None, 1, 28, 28, 1).

When I change the line tf.keras.Input((1, 28, 28, 1)) to tf.keras.Input((28, 28, 1)), the model.summary() is:

Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ conv2d (Conv2D) │ (None, 26, 26, 32) │ 320 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ max_pooling2d (MaxPooling2D) │ (None, 13, 13, 32) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_1 (Conv2D) │ (None, 11, 11, 32) │ 9,248 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ max_pooling2d_1 (MaxPooling2D) │ (None, 5, 5, 32) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ flatten (Flatten) │ (None, 800) │ 0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense (Dense) │ (None, 128) │ 102,528 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_1 (Dense) │ (None, 10) │ 1,290 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 113,386 (442.91 KB)
Trainable params: 113,386 (442.91 KB)
Non-trainable params: 0 (0.00 B)

At line 64, as you said, I coded:

model(tf.keras.Input((1, 28, 28, 1)))

But this rises the following error at this line:
ValueError: Exception encountered when calling Sequential.call().

Input 0 of layer “functional” is incompatible with the layer: expected shape=(None, 28, 28, 1), found shape=(None, 1, 28, 28)

Arguments received by Sequential.call():
• args=(‘<KerasTensor shape=(None, 1, 28, 28, 1), dtype=float32, sparse=None, name=keras_tensor_8>’,)
• kwargs={‘mask’: ‘None’}

When I comment out this line, it raises an error at:
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
^^^^^^^^^^^
File “C:\Users\cesar\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py”, line 228, in input
return self._get_node_attribute_at_index(0, “input_tensors”, “input”)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ValueError: The layer sequential has never been called and thus has no defined input.

Ok, next I decided to adjust the call to:

model(tf.keras.Input((28, 28, 1)))

It gets past the activation_model creation. My code follows:

for x in range(0,4):
  my_image = test_images[FIRST_IMAGE]
  print(f'\nmy_image.shape: {my_image.shape}')
  f1 = activation_model.predict(my_image.reshape(1, 28, 28, 1))

At the terminal it outputs:
my_image.shape: (28, 28, 1)

f1 = activation_model.predict(my_image.reshape(1, 28, 28, 1))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
output_tensors.append(tensor_dict[id(x)])
~~~~~~~~~~~^^^^^^^
KeyError: ‘Exception encountered when calling Functional.call().\n\n\x1b[1m2619439529424\x1b[0m\n\nArguments received by Functional.call():\n • inputs=tf.Tensor(shape=(1, 28, 28, 1), dtype=float32)\n • training=False\n • mask=None’

No matter whether I pass my_image.reshape(1, 28, 28, 1) or my_image.reshape(28, 28, 1) as the argument, the error stays the same.

By the way, I’m using Python 3.11 and TensorFlow version 2.16.1.

The complete code is:

import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import models
# import numpy as np


# Printing the versions of python and tensorflow:
import sys
print(f'Python version: {sys.version.split()[0]}')
print(f'Tensorflow version: {tf.__version__}')


# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Reshaping the images
training_images = training_images.reshape(60000, 28, 28, 1)
test_images = test_images.reshape(10000, 28, 28, 1)

# Normalize the pixel values
training_images = training_images / 255.0
test_images = test_images / 255.0

# Training images shape
print(f'Training images shape: {training_images.shape}')


# Define the model
model = tf.keras.models.Sequential([

  # Adding the shape of the input
  tf.keras.Input((28, 28, 1)),

  # Add convolutions and max pooling
  tf.keras.layers.Conv2D(32, (3,3), activation='relu'),#, input_shape=(28, 28, 1)),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2,2),

  # Add the same layers as before
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

# Print the model summary
model.summary()

# Use same settings
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
print(f'\nMODEL TRAINING:')
model.fit(training_images, training_labels, epochs=1)

# Evaluate on the test set
print(f'\nMODEL EVALUATION:')
test_loss = model.evaluate(test_images, test_labels)

print(test_labels[:100])

model(tf.keras.Input((28, 28, 1)))
# depth = model.layers[0].output.shape

f, axarr = plt.subplots(3,4)

FIRST_IMAGE = 0
SECOND_IMAGE = 23
THIRD_IMAGE = 28
CONVOLUTION_NUMBER = 1

layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)

for x in range(0,4):
  my_image=test_images[FIRST_IMAGE]
  print(f'\nmy_image.shape: {my_image.shape}')
  f1 = activation_model.predict(my_image.reshape(28, 28, 1))[x]
  axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[0,x].grid(False)

  f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
  axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[1,x].grid(False)

  f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
  axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[2,x].grid(False)

Hi @cesarcx

It’s completely fine to get stuck in code while trying to figure out what the problem is. Try using InputLayer instead of Input this time:

tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),

InputLayer is the layer you add to a Sequential model to specify the input shape explicitly, whereas Input creates an input tensor for building a model with the functional API.


Also try switching your code back to .reshape(1, 28, 28, 1) in the prediction loop.

Hope this helps! Feel free to ask if you need further assistance.
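To illustrate the distinction, here is a small sketch with a made-up tiny model (the layer sizes are hypothetical, not the lab's):

```python
import numpy as np
import tensorflow as tf

# Sequential API: declare the input shape with an input layer in the list.
seq = tf.keras.models.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),  # InputLayer works here as well
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu'),
])

# Functional API: Input() returns a symbolic tensor you wire through layers.
inputs = tf.keras.Input(shape=(28, 28, 1))
outputs = tf.keras.layers.Conv2D(8, (3, 3), activation='relu')(inputs)
func = tf.keras.Model(inputs=inputs, outputs=outputs)

# Both models accept the same batched input.
x = np.zeros((1, 28, 28, 1), dtype='float32')
print(seq(x).shape)   # (1, 26, 26, 8)
print(func(x).shape)  # (1, 26, 26, 8)
```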


Use this:

f, axarr = plt.subplots(3,4)

FIRST_IMAGE = 0
SECOND_IMAGE = 23
THIRD_IMAGE = 28
CONVOLUTION_NUMBER = 1

layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)

for x in range(0,4):
  my_image=test_images[FIRST_IMAGE]
  f1 = activation_model.predict(my_image.reshape((1, 28, 28, 1)), verbose=0)[x]
  axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[0,x].grid(False)

  f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape((1, 28, 28, 1)), verbose=0)[x]
  axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[1,x].grid(False)

  f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape((1, 28, 28, 1)), verbose=0)[x]
  axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[2,x].grid(False)

Hi @Alireza_Saei! Thanks for your patience! But nope… it did not work…

Follow the error:

my_image.shape: (28, 28, 1)
Traceback (most recent call last):
File “c:\Users\cesar\Documents\Pessoal\Python\Deep Learnig\Pessoal\Coursera\week_3_prg_exec1 copy.py”, line 78, in
f1 = activation_model.predict(my_image.reshape(1, 28, 28, 1))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “C:\Users\cesar\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\utils\traceback_utils.py”, line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File “C:\Users\cesar\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\function.py”, line 163, in _run_through_graph
output_tensors.append(tensor_dict[id(x)])
~~~~~~~~~~~^^^^^^^
KeyError: ‘Exception encountered when calling Functional.call().\n\n\x1b[1m2264831475152\x1b[0m\n\nArguments received by Functional.call():\n • inputs=tf.Tensor(shape=(1, 28, 28, 1), dtype=float32)\n • training=False\n • mask=None’

Here the code with the changes I’ve done:

import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import models
# import numpy as np


# Printing the versions of python and tensorflow:
import sys
print(f'Python version: {sys.version.split()[0]}')
print(f'Tensorflow version: {tf.__version__}')


# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Reshaping the images
training_images = training_images.reshape(60000, 28, 28, 1)
test_images = test_images.reshape(10000, 28, 28, 1)

# Normalize the pixel values
training_images = training_images / 255.0
test_images = test_images / 255.0

# Training images shape
print(f'Training images shape: {training_images.shape}')


# Define the model
model = tf.keras.models.Sequential([

  # Adding the shape of the layers input
  tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),

  # Add convolutions and max pooling
  tf.keras.layers.Conv2D(32, (3,3), activation='relu'),#, input_shape=(28, 28, 1)),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2,2),

  # Add the same layers as before
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

# Print the model summary
model.summary()

# Use same settings
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
print(f'\nMODEL TRAINING:')
model.fit(training_images, training_labels, epochs=1)

# Evaluate on the test set
print(f'\nMODEL EVALUATION:')
test_loss = model.evaluate(test_images, test_labels)

print(test_labels[:100])

model(tf.keras.Input((28, 28, 1)))
# depth = model.layers[0].output.shape

f, axarr = plt.subplots(3,4)

FIRST_IMAGE = 0
SECOND_IMAGE = 23
THIRD_IMAGE = 28
CONVOLUTION_NUMBER = 1

layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)

for x in range(0,4):
  my_image=test_images[FIRST_IMAGE]
  print(f'\nmy_image.shape: {my_image.shape}')
  f1 = activation_model.predict(my_image.reshape(1, 28, 28, 1))[x]
  axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[0,x].grid(False)

  f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
  axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[1,x].grid(False)

  f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
  axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[2,x].grid(False)

Hi @balaji.ambresh !

Thanks for your reply, but nope…

Here the error message:
Traceback (most recent call last):
File “c:\Users\cesar\Documents\Pessoal\Python\Deep Learnig\Pessoal\Coursera\week_3_prg_exec1 copy.py”, line 77, in
f1 = activation_model.predict(my_image.reshape((1, 28, 28, 1)), verbose=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “C:\Users\cesar\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\utils\traceback_utils.py”, line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File “C:\Users\cesar\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\function.py”, line 163, in _run_through_graph
output_tensors.append(tensor_dict[id(x)])
~~~~~~~~~~~^^^^^^^
KeyError: ‘Exception encountered when calling Functional.call().\n\n\x1b[1m2280096688336\x1b[0m\n\nArguments received by Functional.call():\n • inputs=tf.Tensor(shape=(1, 28, 28, 1), dtype=float32)\n • training=False\n • mask=None’

The code:

import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import models
# import numpy as np


# Printing the versions of python and tensorflow:
import sys
print(f'Python version: {sys.version.split()[0]}')
print(f'Tensorflow version: {tf.__version__}')


# Load the Fashion MNIST dataset
fmnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fmnist.load_data()

# Reshaping the images
training_images = training_images.reshape(60000, 28, 28, 1)
test_images = test_images.reshape(10000, 28, 28, 1)

# Normalize the pixel values
training_images = training_images / 255.0
test_images = test_images / 255.0

# Training images shape
print(f'Training images shape: {training_images.shape}')


# Define the model
model = tf.keras.models.Sequential([

  # Adding the shape of the layers input
  tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),

  # Add convolutions and max pooling
  tf.keras.layers.Conv2D(32, (3,3), activation='relu'),#, input_shape=(28, 28, 1)),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2,2),

  # Add the same layers as before
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

# Print the model summary
model.summary()

# Use same settings
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
print(f'\nMODEL TRAINING:')
model.fit(training_images, training_labels, epochs=1)

# Evaluate on the test set
print(f'\nMODEL EVALUATION:')
test_loss = model.evaluate(test_images, test_labels)

print(test_labels[:100])

model(tf.keras.Input((28, 28, 1)))
# depth = model.layers[0].output.shape

f, axarr = plt.subplots(3,4)

FIRST_IMAGE = 0
SECOND_IMAGE = 23
THIRD_IMAGE = 28
CONVOLUTION_NUMBER = 1

layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)

for x in range(0,4):
  my_image=test_images[FIRST_IMAGE]
  f1 = activation_model.predict(my_image.reshape((1, 28, 28, 1)), verbose=0)[x]
  axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[0,x].grid(False)

  f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape((1, 28, 28, 1)), verbose=0)[x]
  axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[1,x].grid(False)

  f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape((1, 28, 28, 1)), verbose=0)[x]
  axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
  axarr[2,x].grid(False)
  

Hi @cesarcx,

Can you share a direct link to your notebook? Make sure you send the link after running the code.

Also, confirm whether you have made any changes to the assignment code, changed the image, or downloaded any other images.

Regards
DP


Hi again,

Now that you have defined the InputLayer in your model, there is no need for the call model(tf.keras.Input((28, 28, 1))) outside of the model definition.


Sorry, I didn’t notice the TensorFlow version you were using. Here’s the full notebook for your reference using TF 2.16.1:

learner.ipynb (49.2 KB)


Dear @Alireza_Saei, I commented out this line. Here is the error:

Traceback (most recent call last):
File “c:\Users\cesar\Documents\Pessoal\Python\Deep Learnig\Pessoal\Coursera\week_3_prg_exec1 copy.py”, line 73, in
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
^^^^^^^^^^^
File “C:\Users\cesar\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py”, line 228, in input
return self._get_node_attribute_at_index(0, “input_tensors”, “input”)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “C:\Users\cesar\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py”, line 259, in _get_node_attribute_at_index
raise ValueError(
ValueError: The layer sequential has never been called and thus has no defined input.

Dear @balaji.ambresh, thanks!!! It finally worked!

There was also a typo in my code, which your notebook helped me spot, at: activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)

It should be: activation_model = tf.keras.models.Model(inputs = model.inputs, outputs = layer_outputs)

Strangely, the interpreter ran that line without complaint; the error only surfaced later, when the activation model was used.

Anyway, thank you very much!!!


Dear @Alireza_Saei , hi!

Indeed I had to keep that line to properly work. The problem was due to a typo in my code at: activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)

It should be: activation_model = tf.keras.models.Model(inputs = model.inputs, outputs = layer_outputs)
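Putting the pieces together, here is a minimal sketch of the pattern that finally worked (a tiny hypothetical model, assuming TF 2.16 / Keras 3): declare the input shape inside the model, call it once so it is fully built, then create the activation model from model.inputs (plural) rather than model.input.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in for the lab's model (layer sizes are hypothetical).
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model(np.zeros((1, 28, 28, 1), dtype='float32'))  # one dummy call builds the graph

# Note model.inputs, not model.input.
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs=model.inputs, outputs=layer_outputs)

# Predict on a single image with an explicit batch dimension.
image = np.zeros((1, 28, 28, 1), dtype='float32')
activations = activation_model.predict(image, verbose=0)

print(len(activations))      # 4: one activation per layer
print(activations[0].shape)  # (1, 26, 26, 8): the first conv layer's feature maps
```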


You’re welcome. TensorFlow 2.16.1 isn’t officially supported by the course. If the notebook asks you to run things on Google Colab, please do so, or set up your local environment using Colab as a reference, as shown below:

[screenshot of the environment-setup command]

For graded assignments, use the above command as a reference on the Coursera Jupyter environment and set up your environment accordingly.


Oh, good to hear that your problem has been solved! Good luck :raised_hands:
