How to serve a model using the TensorFlow Serving Docker image on M1 (arm64 architecture)

I am trying to do TensorFlow Serving with Docker on a MacBook M1. I have followed this guide: machine-learning-engineering-for-production-public/C4_W1_Lab_2_TFS_Docker.md at main · https-deeplearning-ai/machine-learning-engineering-for-production-public · GitHub

But when I try to run the TensorFlow Serving image, it fails with an error saying that the image platform `linux/amd64` does not match the host platform `linux/arm64`. I have tried adding `--platform linux/amd64` to the `docker run` command, but it still fails.
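For reference, the command I am running looks roughly like this (the mount path follows the lab; `--platform linux/amd64` is the flag I tried adding while troubleshooting):

```bash
docker run --rm -p 8501:8501 --platform linux/amd64 \
  --mount type=bind,source=/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu,target=/models/half_plus_two \
  -e MODEL_NAME=half_plus_two -t tensorflow/serving &
```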

Need help to complete this lab!

Thank you in advance.


Hi Sara, I have the same problem. I looked into it and it looks like TF Serving is not yet supported by the Mac M1 :frowning: Until then I don’t think anyone working on a Mac M1 will be able to do some of these labs that involve TF Serving. You can follow the issue here for updates: Apple M1 support · Issue #1816 · tensorflow/serving · GitHub


Hello,

Here is a solution that I found to work around the TensorFlow Serving issue on M1. In C4_W1_Lab_2 I was able to complete the lab successfully using this Docker image:

https://hub.docker.com/layers/tensorflow-serving/emacski/tensorflow-serving/latest/images/sha256-917eec76f84791a7cf227dd030a9f7905440f220ade7d3dd4d31a60100fd44fd?context=explore
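If you want to pull it ahead of time, the pull is straightforward (the `latest` tag is the one I used; a pinned version tag should also work):

```bash
# Community-maintained arm64 build of TensorFlow Serving
docker pull emacski/tensorflow-serving:latest
```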

Within the tfserving GitHub repo that you clone for the lab, you need to edit the Dockerfile at:
`/tfserving/serving/tensorflow_serving/tools/docker`

```dockerfile
# Original build image, commented out:
#ARG TF_SERVING_BUILD_IMAGE=tensorflow/serving:${TF_SERVING_VERSION}-devel

# arm64-compatible replacement:
ARG TF_SERVING_BUILD_IMAGE=emacski/tensorflow-serving:${TF_SERVING_VERSION}
```
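If your workflow rebuilds the serving image from this edited Dockerfile (rather than running the prebuilt image directly, as in the command below), the build step would look roughly like this; the `my-tf-serving` tag is just a placeholder of mine:

```bash
# Run from the root of the cloned serving repo
docker build -t my-tf-serving -f tensorflow_serving/tools/docker/Dockerfile .
```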

Then, when you run the Docker command to start the container, edit it to reference the new image name:
```bash
docker run --rm -p 8501:8501 \
  --mount type=bind,source=/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu,target=/models/half_plus_two \
  -e MODEL_NAME=half_plus_two -t emacski/tensorflow-serving:latest &
```
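For reference, this is the standard half_plus_two prediction request from the TF Serving docs, which is what the lab's curl step boils down to:

```bash
# Expected response: { "predictions": [2.5, 3.0, 4.5] }
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/half_plus_two:predict
```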

After making these alterations I was able to run the container and successfully use curl to request a prediction from the model on the server.
I haven't tried it on the later TensorFlow Serving labs, but it may be an adaptable solution for those as well.

Evan
