C4_W2_Lab_2_Intro_to_Kubernetes: cURL too slow

The cURL POST request executes too slowly to put enough load on the pod for autoscaling to kick in.

I’ve substituted the minikube IP directly into the POST command from the request.sh file, just to make sure the command itself executes quickly.

If I time the POST command by itself, it runs very slowly:

time curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST $(minikube ip):30001/v1/models/half_plus_two:predict

{
    "predictions": [2.5, 3.0, 4.5]
}
curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST  0.00s user 0.00s system 1% cpu 0.594 total

At 0.6 seconds per request, this will not flood the pod.

I’m running this on macOS Big Sur 11.6 with curl version 7.64.1.

Any ideas to remedy this?

Just adding on to say that I’m also running into the same issue. I’ve tried various methods to trigger horizontal scaling, but unfortunately I haven’t found a way to get the CPU usage to budge past 0%. Not sure where to go from here either.

I’m having the same issue, running on macOS Monterey 12.2.1, VirtualBox 6.1, and minikube v1.25.1. ‘kubectl top pod’ also gives me zero CPU usage.
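One thing that may be worth checking when ‘kubectl top pod’ shows zero: in minikube, the resource-metrics pipeline comes from the metrics-server addon, and if that addon isn’t running, neither ‘kubectl top’ nor the HorizontalPodAutoscaler receives any CPU data. This is a guess at the cause, not a confirmed fix; the addon name is standard minikube:

```shell
# See whether the metrics-server addon is enabled in this minikube profile
minikube addons list | grep metrics-server

# Enable it if it is listed as disabled
minikube addons enable metrics-server

# Give it a minute or two to start scraping, then check readings again
kubectl top pod
```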

If I adjust the request.sh script (reducing the sleep time, turning off output to the screen, and putting each request into the background by simply appending an ‘&’), I can easily get my machine close to breakdown. Hence, it doesn’t really seem to be a problem of sending too few requests.
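For reference, the tweak described above can be sketched as a small shell function. The function name and the request count in the usage line are illustrative; the URL, port, and payload are the lab’s defaults from request.sh:

```shell
# send_load URL N - fire N POST requests at URL in the background,
# so the pod sees concurrent load instead of one request every 0.6 s.
send_load() {
  url=$1
  n=$2
  i=1
  while [ "$i" -le "$n" ]; do
    # '&' backgrounds the request; -s -o /dev/null silences all output
    curl -s -o /dev/null -d '{"instances": [1.0, 2.0, 5.0]}' -X POST "$url" &
    # Throttle: wait for each batch of 100 so we don't fork unbounded processes
    if [ $((i % 100)) -eq 0 ]; then wait; fi
    i=$((i + 1))
  done
  wait
}

# Usage (assumes the lab's NodePort and model name):
# send_load "http://$(minikube ip):30001/v1/models/half_plus_two:predict" 10000
```

Backgrounding with ‘&’ is what actually overlaps the requests; the batched ‘wait’ just keeps the shell from spawning thousands of processes at once.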

Also, my MacBook gets really slow when I do this. That’s strange, since the virtual machine should only use the resources given to it (2 cores; I also tested with CPU limits). However, checking Activity Monitor, it consumes much more than that. Because of this, I suspect it’s rather a problem with VirtualBox, perhaps in combination with macOS.

I’d be really happy if someone knows a workaround! Thanks!

Short update: the same problem also occurs on my machine when using the Docker driver instead of VirtualBox.

I updated curl to version 7.77.0, which significantly reduced the POST execution time, so I don’t believe a lack of request load is the issue.

I can confirm that with the reported memory usage:
[screenshot: reported memory usage]

I can confirm this with the CLI:

Kubernetes is reporting CPU usage:

I can use the pod shell to see that it has a small CPU load while serving requests:

That load does not seem large enough to trigger autoscaling, but it should at least register in the CPU metric. I would really appreciate some help with this!
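For what it’s worth, you can also ask the autoscaler directly what it is reading, rather than going through ‘kubectl top’. No names are assumed here; ‘describe’ with no argument covers every HPA in the current namespace:

```shell
# List HorizontalPodAutoscalers with their current/target CPU percentages
kubectl get hpa

# Full detail, including the metrics the autoscaler last read and any
# events such as failures to fetch metrics
kubectl describe hpa
```

If ‘kubectl describe hpa’ shows events about being unable to fetch metrics, the problem is in the metrics pipeline rather than in the request load.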

I just ran into this problem as well.