C4W2 Assignment Autoscaling TensorFlow model deployments with TF Serving and Kubernetes

Hi everyone! We’ve reported the new bug to our partners so it can be fixed. In the meantime, you can skip the checkpoint for Task 5 after you see that the deployment is READY (i.e. 1/1). You will still be able to complete the lab and get a passing score of 85/100. More importantly, you will still see how to set up autoscaling in your model deployments.

Lab scores are not shown on your public certificates. However, in case you want to get 100/100 without waiting for the official fix, you can follow this workaround:

  1. Complete all tasks (i.e. up to Task 12) to get 85/100.
  2. Return to the Terminal and navigate out of the locust folder: cd ..
  3. You should now be inside the ~/tfserving-gke directory. Here, you can terminate the deployment: kubectl delete -f tf-serving/deployment.yaml
  4. Open the Cloud Editor and navigate to tfserving-gke/tf-serving/deployment.yaml
  5. Edit line 34 from image: "tensorflow/serving:2.8.0" to image: "tensorflow/serving" (i.e. remove the version tag so the default latest image is pulled instead).
  6. Save the file.
  7. Go back to the Terminal and start the deployment: kubectl apply -f tf-serving/deployment.yaml
  8. Wait for about 5 minutes, then click the Task 5 checkpoint. It should now be marked as passed. Note: in my attempt, the checkpoint passed even when the deployment was not yet ready (i.e. shown as 0/1 in kubectl get deployments).
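
For reference, after the edit in step 5 the container section of tf-serving/deployment.yaml should look roughly like this. Only the image line comes from the steps above; the resource names, labels, and port are illustrative assumptions based on a typical TF Serving deployment:

```yaml
# Sketch of the relevant part of tf-serving/deployment.yaml.
# Names, labels, and port are assumptions; only the image line
# reflects the edit described in step 5.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-classifier        # assumed deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: image-classifier     # assumed label
  template:
    metadata:
      labels:
        app: image-classifier
    spec:
      containers:
      - name: tf-serving
        # Step 5: the ":2.8.0" tag is removed, so Kubernetes
        # pulls the default "latest" tag instead of the pinned
        # version.
        image: "tensorflow/serving"
        ports:
        - containerPort: 8501   # TF Serving REST port
```

An untagged image reference is equivalent to tensorflow/serving:latest, which is why the kubectl apply in step 7 pulls a different image than the original pinned 2.8.0 deployment.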

Hope this helps. Temporarily marking this as the solution for visibility. Will update this thread once the bug is fixed. Thank you!