Running Course 4 Week 4 Face Recognition Programming Assignment notebook offline

Greetings!!
I downloaded the Face Recognition programming assignment files so I can work on them on my laptop. When I run the cells, I get the following error. TensorFlow version 2.9.1 is installed on my computer.
What other packages do I need to install to get this notebook working on my machine? Please let me know. Thank you!


ValueError                                Traceback (most recent call last)
/tmp/ipykernel_42061/324912946.py in
      4 loaded_model_json = json_file.read()
      5 json_file.close()
----> 6 model = model_from_json(loaded_model_json)
      7 model.load_weights('keras-facenet-h5/model.h5')

~/anaconda3/lib/python3.9/site-packages/keras/saving/model_config.py in model_from_json(json_string, custom_objects)
    100   """
    101   from keras.layers import deserialize_from_json  # pylint: disable=g-import-not-at-top
--> 102   return deserialize_from_json(json_string, custom_objects=custom_objects)

~/anaconda3/lib/python3.9/site-packages/keras/layers/serialization.py in deserialize_from_json(json_string, custom_objects)
    224       module_objects=LOCAL.ALL_OBJECTS,
    225       custom_objects=custom_objects)
--> 226   return deserialize(config, custom_objects)

~/anaconda3/lib/python3.9/site-packages/keras/layers/serialization.py in deserialize(config, custom_objects)
    203   """
    204   populate_deserializable_objects()
--> 205   return generic_utils.deserialize_keras_object(
    206       config,
    207       module_objects=LOCAL.ALL_OBJECTS,

~/anaconda3/lib/python3.9/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    677
    678   if 'custom_objects' in arg_spec.args:
--> 679     deserialized_obj = cls.from_config(
    680         cls_config,
    681         custom_objects=dict(

~/anaconda3/lib/python3.9/site-packages/keras/engine/training.py in from_config(cls, config, custom_objects)
   2718     ]
   2719     if all(key in config for key in functional_model_keys):
-> 2720       inputs, outputs, layers = functional.reconstruct_from_config(
   2721           config, custom_objects)
   2722       model = cls(inputs=inputs, outputs=outputs, name=config.get('name'))

~/anaconda3/lib/python3.9/site-packages/keras/engine/functional.py in reconstruct_from_config(config, custom_objects, created_layers)
   1298   # First, we create all layers and enqueue nodes to be processed
   1299   for layer_data in config['layers']:
-> 1300     process_layer(layer_data)
   1301   # Then we process nodes in order of layer depth.
   1302   # Nodes that cannot yet be processed (if the inbound node

~/anaconda3/lib/python3.9/site-packages/keras/engine/functional.py in process_layer(layer_data)
   1280   from keras.layers import deserialize as deserialize_layer  # pylint: disable=g-import-not-at-top
   1281
-> 1282   layer = deserialize_layer(layer_data, custom_objects=custom_objects)
   1283   created_layers[layer_name] = layer
   1284

~/anaconda3/lib/python3.9/site-packages/keras/layers/serialization.py in deserialize(config, custom_objects)
    203   """
    204   populate_deserializable_objects()
--> 205   return generic_utils.deserialize_keras_object(
    206       config,
    207       module_objects=LOCAL.ALL_OBJECTS,

~/anaconda3/lib/python3.9/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    677
    678   if 'custom_objects' in arg_spec.args:
--> 679     deserialized_obj = cls.from_config(
    680         cls_config,
    681         custom_objects=dict(

~/anaconda3/lib/python3.9/site-packages/keras/layers/core/lambda_layer.py in from_config(cls, config, custom_objects)
    301   def from_config(cls, config, custom_objects=None):
    302     config = config.copy()
--> 303     function = cls._parse_function_from_config(config, custom_objects,
    304                                                'function', 'module',
    305                                                'function_type')

~/anaconda3/lib/python3.9/site-packages/keras/layers/core/lambda_layer.py in _parse_function_from_config(cls, config, custom_objects, func_attr_name, module_attr_name, func_type_attr_name)
    356     elif function_type == 'lambda':
    357       # Unsafe deserialization from bytecode
--> 358       function = generic_utils.func_load(config[func_attr_name], globs=globs)
    359     elif function_type == 'raw':
    360       function = config[func_attr_name]

~/anaconda3/lib/python3.9/site-packages/keras/utils/generic_utils.py in func_load(code, defaults, closure, globs)
    791     except (UnicodeEncodeError, binascii.Error):
    792       raw_code = code.encode('raw_unicode_escape')
--> 793   code = marshal.loads(raw_code)
    794   if globs is None:
    795     globs = globals()

ValueError: bad marshal data (unknown type code)

Hello @David00! I hope you are doing well.

I have a 2.11.0 version of TensorFlow on my local computer and this assignment is running smoothly.

You can try updating your TF to this version with the following command:

!pip install tensorflow==2.11.0

You can then try running the Face Recognition programming assignment again and see if the error is resolved.

Best,
Saif.

One more caveat!

Coursera uses Python version 3.7.6 and TensorFlow version 2.3.0, but I am using Python version 3.9.13 and TensorFlow version 2.11.0. While I found that the difference in TF version is not a problem, the difference in Python version produces an error when I run the json_file cell (2nd cell):

ValueError: bad marshal data (unknown type code)

I pasted the full error into ChatGPT and it says:
This error usually occurs when the pickled object has become corrupted or was generated by a different version of Python than the one you are using now.

I am not interested in downgrading the Python version, but you can try it and see if the error is resolved.

Best,
Saif.

(reply edited, applied to the wrong course).

I uploaded this notebook to Google Colab to check whether the issue is only with my laptop. Unfortunately, the issue occurs there as well.
So, I searched Stack Overflow and found the following: python - code = marshal.loads(raw_code) ValueError: bad marshal data (unknown type code) - Stack Overflow. I am cutting and pasting the response here for easy viewing:

This typically happens when you save a model in one Python version (e.g., 3.6) and then try to load that model in another Python version (e.g., 3.9), as the binary serialization that Keras uses (marshal) is not upwards/downwards compatible. Try to install an old version of Python with an appropriate version of the Tensorflow / Keras libraries. If the model was not trained by yourself, you may ask the creators to export the trained models in a different format that doesn’t have these problems, like ONNX.
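The version sensitivity described above can be seen with the `marshal` module directly: Keras stores a Lambda layer's function as marshaled bytecode inside model.json, and bytecode marshaled under one Python version generally cannot be loaded by another. A minimal sketch of the round trip (within a single interpreter, which is the only case guaranteed to work):

```python
import marshal
import types

# Keras serializes a Lambda layer's function as marshaled bytecode.
fn = lambda x: x * 2
raw = marshal.dumps(fn.__code__)  # version-specific binary format

# Loading these bytes works only under the same Python version that
# produced them; a mismatched interpreter raises
# "ValueError: bad marshal data (unknown type code)".
code = marshal.loads(raw)
restored = types.FunctionType(code, globals())
print(restored(21))  # -> 42
```

This is why the error appears at `func_load` in the traceback: the model.json shipped with the assignment was produced under an older interpreter than the one loading it.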

I see the following options ahead of me:

  1. See if I can get a version of the model trained using Python 3.9.16 (the version running on my laptop and Google Colab)
  2. See if I can downgrade Python versions both on Colab and my laptop

Any advice will be greatly appreciated.

You can install Python 3.6 in Colab with the following command:

!apt-get install python3.6

Then you need to run the following commands:

!update-alternatives 
!update-alternatives

After that, you have to select 3.6 as shown in the figure below.

Then you need to restart the runtime via the "Runtime" menu by selecting "Restart runtime".

Hope this helps.

Best,
Saif.

Thank you for your prompt reply!
I have updated the default python3 to python3.7.16.

So, that 3.7 is working for you?

Sorry, no. I am having many issues, as pip and distutils are not working now.

Did you try this code? Are you using Colab?
What errors are you getting?

I opened the terminal and ran the commands without the bang (!) at the beginning.
I updated alternatives to make python3.7 the default in both manual and auto modes.
After that, python3-pip and distutils are not working. I think I have to recreate that VM.

Maybe the distutils packages are not compatible with Python 3.7, or the update process caused some errors or conflicts in the system. I suggest you use Colab so it won't hurt your local computer's installation.

Saif.

https://community.deeplearning.ai/search?q=bad%20marshal%20data

Sometimes search can save you time and trouble. This might be one such case.

My experience has been that it is far easier and safer to build virtual environments with conda that match the course environment than either migrating the course assets or modifying the entire environment on my computing platform. I have one virtual env for each specialization.

I am planning to use this FaceNet model in a demo where there will not be an internet connection, hence I am trying to set it up locally. As a first step, I tried to do the same in Colab.
Based on the link you shared, it seems it is not possible to set it up locally. Can I retrain the model using newer versions of Python (3.9) and TF (2.11), etc.?

Using different versions will give you errors like bad marshal. If you can, try to use the same versions as Coursera's environment.
Coursera uses Python version 3.7.6 and TensorFlow version 2.3.0.
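Before loading the model, it may help to check the interpreter version up front so a mismatch is caught with a clear message rather than a marshal error. A small sketch, assuming the Coursera versions quoted above; `check_env` is a hypothetical helper, not part of the assignment:

```python
import sys

# Coursera's environment, as quoted above (Python 3.7.6).
EXPECTED_PYTHON = (3, 7)

def check_env(current=None, expected=EXPECTED_PYTHON):
    """Return True if the interpreter matches the expected major.minor version."""
    current = current if current is not None else sys.version_info[:2]
    return tuple(current) == tuple(expected)

# Example: Python 3.9 does not match, which is exactly the mismatch
# that makes marshal-based deserialization of the model fail.
print(check_env(current=(3, 9)))  # -> False
print(check_env(current=(3, 7)))  # -> True
```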

Best,
Saif.

Thank you, @saifkhanengr! Unfortunately, when I set up those older environments, the newer environments break. Is there a way to retrain the model from scratch using the new versions of Python and TF?

Well, that’s a problem. Maybe @paulinpaloalto and @rmwkwok can help you on this.

Kindly wait, they will answer you here.

Best,
Saif.

Hello @David00,

Though my python environment might be different from yours, I had the same error message loading the assignment’s model. My way out was to (1) load the model on Coursera, (2) save it in another format (a folder that contains a couple of files), (3) zip it, (4) download it, (5) unzip it, and finally (6) load it in my environment. (Code at the end)

Step 6 can run successfully, but I am not sure if there is any issue with the loaded model, so you will need to check it yourself.

Lastly, we do not really provide support for learners working on a lab in other environments. Moreover, the lab was also only tested to work on Coursera, so it is possible that some tests that were passed on Coursera failed in your environment. There is no guarantee on the reproducibility of results across different environments.

Raymond

Step 1 - 3 (on coursera)

from tensorflow.keras.models import model_from_json

json_file = open('keras-facenet-h5/model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)
model.load_weights('keras-facenet-h5/model.h5')

## Added code below. To be removed after use, or it may interfere with the grader at submission
import shutil
model.save("my_model") # save model in another format
shutil.make_archive('my_model', 'zip', 'my_model') # zip the saved model

Step 6 (in your environment)

### commented out the original code for loading the model
# from tensorflow.keras.models import model_from_json

# json_file = open('keras-facenet-h5/model.json', 'r')
# loaded_model_json = json_file.read()
# json_file.close()
# model = model_from_json(loaded_model_json)
# model.load_weights('keras-facenet-h5/model.h5')

# Added code below, for loading the model saved in a different format
import tensorflow as tf
model = tf.keras.models.load_model("my_model") # make sure to finish step 5 first, and then put down the path to the model.
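For steps 4–5 (download and unzip), the archive produced by `shutil.make_archive` in step 3 can be unpacked with the matching stdlib call. A small sketch using a throwaway directory; the `demo_model` names here are placeholders, not the assignment's actual paths:

```python
import os
import shutil

# Stand-in for the model folder saved on Coursera (step 2).
os.makedirs("demo_model", exist_ok=True)
with open("demo_model/saved_model.pb", "w") as f:
    f.write("placeholder")

# Step 3: zip the saved folder (what the Coursera-side code does).
shutil.make_archive("demo_model", "zip", "demo_model")

# Step 5: unzip the downloaded archive in your environment.
shutil.unpack_archive("demo_model.zip", "demo_model_unzipped")

print(os.path.exists("demo_model_unzipped/saved_model.pb"))  # -> True
```

The path passed to `tf.keras.models.load_model` in step 6 should then be the unpacked folder.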

Which is exactly why you create multiple virtual environments, so the two can peacefully coexist. If it is just the saved model giving trouble, @rmwkwok's approach seems promising. But there are often other non-backwards-compatible changes in the various packages and libraries of the ecosystem. I've run every program from DLS, MLS, AI for Medicine, both TensorFlow specializations, and NLP* locally using this approach. (*With the exception of the trax-based NLP exercises, because I couldn't find a trax build for macOS.)

I also don’t understand your conclusion that running locally can’t work or resolve the bad marshall problem. One of those threads, the one with the check mark :white_check_mark: indicating solution states…

I created a new virtual environment with the same specs as yours (Python 3.7.7, TensorFlow 2.5.0, Keras 2.4.3) and it worked perfectly.