C4W1 - no predictions output from curl

The lab seemed to be working up to this point. (Using a MacBook Air)
I ran
curl -d ‘{“instances”: [1.0, 2.0, 5.0]}’
-X POST http://localhost:8501/v1/models/half_plus_two:predict
and got no output.

The previous command started the server. Here is that command with its output:
docker run --rm -p 8501:8501
--mount type=bind,
-e MODEL_NAME=half_plus_two -t tensorflow/serving &

*******Output Below ************

I tensorflow_serving/model_servers/server.cc:74] Building single TensorFlow model file config: model_name: half_plus_two model_base_path: /models/half_plus_two

2022-11-25 18:48:22.939308: I tensorflow_serving/model_servers/server_core.cc:465] Adding/updating models.

2022-11-25 18:48:22.939481: I tensorflow_serving/model_servers/server_core.cc:594] (Re-)adding model: half_plus_two

2022-11-25 18:48:23.209209: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: half_plus_two version: 123}

2022-11-25 18:48:23.209329: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: half_plus_two version: 123}

2022-11-25 18:48:23.209496: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: half_plus_two version: 123}

2022-11-25 18:48:23.211127: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /models/half_plus_two/00000123

2022-11-25 18:48:23.218762: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }

2022-11-25 18:48:23.219006: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /models/half_plus_two/00000123

2022-11-25 18:48:23.222830: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA

To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

2022-11-25 18:48:23.254769: I external/org_tensorflow/tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:357] MLIR V1 optimization pass is not enabled

2022-11-25 18:48:23.257466: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:229] Restoring SavedModel bundle.

2022-11-25 18:48:23.302257: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:213] Running initialization op on SavedModel bundle at path: /models/half_plus_two/00000123

2022-11-25 18:48:23.309007: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:305] SavedModel load for tags { serve }; Status: success: OK. Took 97851 microseconds.

2022-11-25 18:48:23.310168: I tensorflow_serving/servables/tensorflow/saved_model_warmup_util.cc:62] No warmup data file found at /models/half_plus_two/00000123/assets.extra/tf_serving_warmup_requests

2022-11-25 18:48:23.436147: I tensorflow_serving/core/loader_harness.cc:95] Successfully loaded servable version {name: half_plus_two version: 123}

2022-11-25 18:48:23.436987: I tensorflow_serving/model_servers/server_core.cc:486] Finished adding/updating models

2022-11-25 18:48:23.437218: I tensorflow_serving/model_servers/server.cc:118] Using InsecureServerCredentials

2022-11-25 18:48:23.437465: I tensorflow_serving/model_servers/server.cc:383] Profiler service is enabled

2022-11-25 18:48:23.442315: I tensorflow_serving/model_servers/server.cc:409] Running gRPC ModelServer at …

[warn] getaddrinfo: address family for nodename not supported

2022-11-25 18:48:23.443185: I tensorflow_serving/model_servers/server.cc:430] Exporting HTTP/REST API at:localhost:8501 …

[evhttp_server.cc : 245] NET_LOG: Entering the event loop …

*********** End Of Output ****************
Any help is appreciated.

Does your MacBook Air have an M1, M2, or Intel chip?

I have an older MacBook Air with an Intel i5 processor.


The lab works at my end.

I’m not sure your quoting is correct — the command you pasted uses curly “smart” quotes, which produce invalid JSON. Could you please try this command from the terminal?

curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
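You can check the quoting issue without the server at all. The snippet below (a quick local check, using python3’s built-in json.tool as the validator) shows that a body wrapped in curly quotes fails JSON parsing while the straight-quoted body passes:

```shell
# Curly "smart" quotes (often inserted when a command is pasted through a
# word processor, browser, or chat client) are not valid JSON delimiters.
bad='{“instances”: [1.0, 2.0, 5.0]}'
good='{"instances": [1.0, 2.0, 5.0]}'

# Validate each body the same way a JSON parser on the server would:
echo "$bad"  | python3 -m json.tool >/dev/null 2>&1 && echo "bad: valid"  || echo "bad: invalid"
echo "$good" | python3 -m json.tool >/dev/null 2>&1 && echo "good: valid" || echo "good: invalid"
```

If your terminal prints "bad: invalid" and "good: valid", retype the quotes by hand rather than pasting the command.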

The expected output is:

    { "predictions": [2.5, 3.0, 4.5] }
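As a sanity check on those numbers: the half_plus_two demo model simply computes x / 2 + 2 for each input, which you can reproduce locally with awk:

```shell
# half_plus_two returns x / 2 + 2 for each input value,
# so the inputs 1.0, 2.0, 5.0 map to 2.5, 3.0, 4.5.
for x in 1.0 2.0 5.0; do
  awk -v x="$x" 'BEGIN { printf "%.1f\n", x / 2 + 2 }'
done
```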

Also, please run docker container logs -f <container name> as soon as you start the container and share the logs here.
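For comparison, the serving command in the TensorFlow Serving documentation has the general shape below. Note that --mount needs both a source and a target; the MODEL_DIR path here is only a placeholder, since I don’t know where the lab put the model on your machine:

```shell
# Placeholder: substitute the directory on your machine that actually
# contains the saved_model_half_plus_two_cpu model.
MODEL_DIR="$HOME/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu"

# Bind-mount the model directory into the container and serve it on port 8501.
docker run --rm -p 8501:8501 \
  --mount type=bind,source="$MODEL_DIR",target=/models/half_plus_two \
  -e MODEL_NAME=half_plus_two -t tensorflow/serving &
```

Your pasted command shows --mount type=bind, with nothing after it; if that is what actually ran (and not just a paste truncation), the model directory was never mounted.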