I got the same response as well, and as a result the HPA didn’t work. This means the autograder will fail on the last task (monitoring the load test).
After further investigation, it seems that the latest tf-serving image is the cause.
The latest image tag on Docker Hub was updated on Aug 30.
To fix this, you need to pin the image to a specific version (e.g. 2.8.0) in the deployment manifest.
After that change, your tf-serving/deployment.yaml should look something like this:
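Something along these lines — this is only a sketch of the relevant part, not the exact manifest from the lab (your file also has labels, environment variables, resource requests, etc., and the container name below is just illustrative); the line that matters is the pinned image tag:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-classifier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: image-classifier
  template:
    metadata:
      labels:
        app: image-classifier
    spec:
      containers:
      - name: tf-serving                    # illustrative container name
        image: "tensorflow/serving:2.8.0"   # pin the version instead of relying on :latest
        ports:
        - containerPort: 8501               # REST port used by the curl/locust requests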
Please note that you need to do this after passing the assessment for “Task 5. Creating TensorFlow Serving deployment”, because it seems that the assessment also checks the tf-serving version.
So in short, what you need to do is:
Follow the lab instructions as written until the assessment on Task 5 is completed.
Update the deployment manifest as shown above, and apply it one more time, i.e. kubectl apply -f tf-serving/deployment.yaml
The rest is as instructed in the lab.
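After re-applying the manifest, you can double-check that the pinned image actually rolled out with a couple of kubectl commands (image-classifier is the deployment name the lab creates):

kubectl rollout status deployment/image-classifier
kubectl get deployment image-classifier -o jsonpath='{.spec.template.spec.containers[0].image}'

The second command should print tensorflow/serving:2.8.0.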
Until we get official clarification, I hope this helps.
I have not cleared Task 5 despite doing everything as instructed. Tasks 6 and 7 then cleared, but Task 8 gives an error even though I declared the external IP:
EXTERNAL_IP=34.68.208.107
curl -d @locust/request-body.json -X POST http://${EXTERNAL_IP}:8501/v1/models/image_classifier:predict
Warning: Couldn't read data from file "locust/request-body.json", this makes
Warning: an empty POST.
{
  "error": "JSON Parse error: The document is empty"
}
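For reference, that warning means curl could not read locust/request-body.json relative to the current directory, so it sent an empty POST body, which is exactly what the "document is empty" error is complaining about. The command assumes it is run from the directory that contains the locust folder, i.e. ~/tfserving-gke:

cd ~/tfserving-gke
ls locust/request-body.json    # confirm the file is actually there
curl -d @locust/request-body.json -X POST http://${EXTERNAL_IP}:8501/v1/models/image_classifier:predict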
Completed the entire exercise including the load test.
Yet the grader is not assessing Task 5 despite the deployment being created:
student_01_ed20a4bdd641@cloudshell:~/tfserving-gke (qwiklabs-gcp-01-fcba78b1c319)$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
image-classifier   1/1     1            1           53m
Hi everyone! We’ve reported the new bug to our partners so it can be fixed. In the meantime, you can skip the checkpoint for Task 5 after you see that the deployment is READY (i.e. 1/1). You will still be able to complete the lab and get a passing score of 85/100. More importantly, you will still see how to set up autoscaling in your model deployments.
Lab scores are not shown on your public certificates. However, in case you want to get 100/100 without waiting for the official fix, you can follow this workaround:
Complete all tasks (i.e. up to Task 12) to get 85/100.
Return to the Terminal and navigate out of the locust folder: cd ..
You should now be inside the ~/tfserving-gke directory. Here, you can terminate the deployment: kubectl delete -f tf-serving/deployment.yaml
Open the Cloud Editor and navigate to tfserving-gke/tf-serving/deployment.yaml
Edit line 34 from image: "tensorflow/serving:2.8.0" to image: "tensorflow/serving"
Save the file.
Go back to the Terminal and start the deployment: kubectl apply -f tf-serving/deployment.yaml
Wait for 5 minutes and click the Task 5 checkpoint. It should now be marked as passed. Note: In my attempt, the checkpoint passed even while the deployment was not yet ready (i.e. shown as 0/1 when you do kubectl get deployments).
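If you prefer making that one-line change from the Terminal instead of the Cloud Editor, a sed one-liner along these lines should achieve the same thing (assuming the image line looks exactly as quoted in the steps above):

sed -i 's#tensorflow/serving:2.8.0#tensorflow/serving#' tf-serving/deployment.yaml
kubectl apply -f tf-serving/deployment.yaml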
Hope this helps. Temporarily marking this as the solution for visibility. Will update this thread once the bug is fixed. Thank you!