Running docker images made for linux/amd64 on Apple M1

Hi,

I am going through Week 1's ungraded lab on running TensorFlow Serving with Docker. After executing the suggested `docker run` command (see below), I got errors caused by a platform mismatch.

I was wondering if anyone has tried running TF-related Docker images on a linux/arm64 platform (like a MacBook Pro with Apple M1) and how you resolved this platform mismatch issue. I have not found a simple solution. Thank you.

```bash
docker run --rm -p 8501:8501 --mount type=bind,source=…,target=… \
  -e MODEL_NAME=half_plus_two -t tensorflow/serving &
```

The container then crashes with:

```
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/generated_message_reflection.cc:2345] CHECK failed: file != nullptr:
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what(): CHECK failed: file != nullptr:
qemu: uncaught target signal 6 (Aborted) - core dumped
/usr/bin/tf_serving_entrypoint.sh: line 3:     9 Aborted
```


This is an open issue on the TensorFlow Serving forums. Looks like we just need to wait for the M1 Mac to be supported.

Hello,

Here is a solution that I found to work for fixing the TensorFlow Serving issue on M1. In C4_W1_Lab2 I was able to successfully complete the lab using this Docker image:

https://hub.docker.com/layers/tensorflow-serving/emacski/tensorflow-serving/latest/images/sha256-917eec76f84791a7cf227dd030a9f7905440f220ade7d3dd4d31a60100fd44fd?context=explore

Within the tfserving GitHub repo that you clone for the lab, edit the Dockerfile at:
/tfserving/serving/tensorflow_serving/tools/docker

```dockerfile
# ARG TF_SERVING_BUILD_IMAGE=tensorflow/serving:${TF_SERVING_VERSION}-devel
ARG TF_SERVING_BUILD_IMAGE=emacski/tensorflow-serving:${TF_SERVING_VERSION}
```
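If you'd rather script the edit than open the file by hand, a one-line sed can make the swap. This is a sketch only: it runs against a stand-in file in `/tmp` so you can dry-run it safely; point it at the real Dockerfile path from the lab instead.

```shell
# Create a stand-in Dockerfile with the original build-image line
# (in the lab, skip this and use the real path under
# tensorflow_serving/tools/docker instead of /tmp/Dockerfile.demo).
printf 'ARG TF_SERVING_VERSION=latest\nARG TF_SERVING_BUILD_IMAGE=tensorflow/serving:${TF_SERVING_VERSION}-devel\n' > /tmp/Dockerfile.demo

# Swap the amd64 devel image for the community arm64 build;
# -i.bak keeps a backup and works with both GNU and BSD sed.
sed -i.bak 's|^ARG TF_SERVING_BUILD_IMAGE=.*|ARG TF_SERVING_BUILD_IMAGE=emacski/tensorflow-serving:${TF_SERVING_VERSION}|' /tmp/Dockerfile.demo

cat /tmp/Dockerfile.demo
```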

Then, when you run the container, edit the docker command to match the new image name:
```bash
docker run --rm -p 8501:8501 \
  --mount type=bind,source=/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu,target=/models/half_plus_two \
  -e MODEL_NAME=half_plus_two -t emacski/tensorflow-serving:latest &
```

After making these changes I was able to run the container and successfully use curl to request a prediction from the model on the server.
It also works in the ungraded lab C4W2_UngradedLab_IntrotoKubernetes.
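For anyone unsure what that curl request looks like: TensorFlow Serving exposes a REST endpoint on the port published above, and the stock half_plus_two test model computes y = x/2 + 2 for each input. A request along these lines should work (this assumes the container from the previous command is running):

```shell
# Query the REST endpoint published on port 8501 by the docker run above.
# half_plus_two computes y = x/2 + 2 per instance, so [1.0, 2.0, 5.0]
# should come back as {"predictions": [2.5, 3.0, 4.5]}.
curl -s -X POST http://localhost:8501/v1/models/half_plus_two:predict \
  -d '{"instances": [1.0, 2.0, 5.0]}' || echo '(is the container running?)'
```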

Evan


That was really helpful, thanks a lot!

Thanks Evan. Worked like a charm!