C4W2 - Ungraded Lab: Intro to Kubernetes

After applying the deployment yaml file, I could see that 0/1 pods were READY. I looked at the logs of the Pod that was created and found the following error:

/usr/bin/tf_serving_entrypoint.sh: line 3: 7 Illegal instruction (core dumped) tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} "$@"

Before applying the deployment yaml file, I did apply the configmap yaml.

I am running minikube with the VirtualBox driver. My computer’s OS is Windows 10.

Can you please let me know what the problem is here? Let me know if you need any additional information.

Hi Neeraj! Can you retry and post the output of kubectl describe cm tfserving-configs and kubectl get cm tfserving-configs before you apply the deployment? Also, just to make sure, you did not modify anything in the files? If you encounter the error again, please post a screenshot of the screen instead. Thanks!
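For reference, the inspection commands above look like this (they assume a running minikube cluster and the lab’s `tfserving-configs` ConfigMap name):

```shell
# Show the ConfigMap's metadata, data keys, and events
kubectl describe cm tfserving-configs

# Confirm the ConfigMap exists and check its age
kubectl get cm tfserving-configs
```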

Hi @chris.favila ,

I used Docker instead of virtualbox and successfully completed the exercise.
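Side note on why switching drivers can matter (this is an assumption worth verifying on your own machine, not something confirmed in this thread): “Illegal instruction (core dumped)” usually means the binary uses a CPU instruction that the (possibly virtualized) CPU does not expose, and the stock tensorflow/serving builds use AVX. On Linux or inside WSL2 you can check whether AVX is visible like this:

```shell
# Count how many logical CPUs report the "avx" flag.
# A result of 0 means no AVX is visible to this (guest) OS,
# which would explain the tensorflow_model_server crash.
grep -c avx /proc/cpuinfo
```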


Copy. Thank you for the update!


@chris.favila I’m facing a similar issue.
I am running minikube on a virtual machine… I’ve been trying to solve this by myself, to no avail.
My deployment and configmap yaml files remain unchanged.

hi @neerajkumar I am using Docker instead of VirtualBox for this exercise. After exposing the deployment through a service, I tried to access the deployment as a sanity check but was unable to do so. Do you have a tip to solve this?

I also tried to use the tunnel to reach the server, but that was unsuccessful too… see below…

I found the pods in ‘CrashLoopBackOff’ status. Is this normal? See below.
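CrashLoopBackOff means the container keeps exiting and Kubernetes keeps restarting it with an increasing back-off delay. A sketch of the usual diagnosis steps (substitute your actual pod name, which you can read off the first command’s output):

```shell
# List pods with their status and restart counts
kubectl get pods

# Show recent events for the pod (OOMKilled, failed probes, image pull errors)
kubectl describe pod <pod-name>

# Read the logs of the previous, crashed container instance
kubectl logs <pod-name> --previous
```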

hi @neerajkumar, just an update: I managed to resolve the problem with the help of @Th_o_Vy_Le_Nguy_n. Below is a summary of the workaround, to share with others.

Thanks @Th_o_Vy_Le_Nguy_n

I restarted the whole installation from scratch, without VirtualBox, using Docker instead.
Also, most important, I edited deployment.yaml to increase the memory limit to 6000M (~6G).
I ran “minikube service tf-serving-service” in a separate Ubuntu window to open a tunnel to the service.
I managed to run the curl command successfully this time and finally saw the results returned by the model! I also completed the rest of the Lab 2 exercise successfully.

The two most important things I learned are: (1) how to use ‘kubectl describe all’ to debug why the container (tf-serving) got terminated and recreated multiple times; it turned out to be a memory constraint problem, which prompted me to edit deployment.yaml. (2) Given my local Windows 11 + WSL2 + Ubuntu setup, I must run “minikube service tf-serving-service” so that minikube opens a tunnel, letting me connect to the service and submit the JSON inputs.
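For anyone else hitting the same OOM crash loop, here is a sketch of the relevant part of deployment.yaml. The field names follow the standard Kubernetes Deployment spec; the deployment/container names and image are assumptions based on the lab and may differ in your copy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tf-serving-deployment   # name assumed; use the one in your lab file
spec:
  template:
    spec:
      containers:
        - name: tf-serving
          image: tensorflow/serving
          resources:
            limits:
              memory: 6000M   # raised from the lab default so the model server is not OOMKilled
```

If the limit is too low, `kubectl describe pod <pod-name>` will show the last container state as Terminated with reason OOMKilled, which matches the terminate/recreate cycle described above.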

Warm regards,