C4_W1 Issue with deployment in Vertex AI Workbench Notebook: Qwik Start

Hello all,

I am facing the following issue in Step 3.3, Deploy your model to support prediction.
I ran into the same problem in the C3_W5 Assignment as well.
Looking for your help!

Using endpoint [https://ml.googleapis.com/]
ERROR: (gcloud.ai-platform.versions.create) FAILED_PRECONDITION: Framework can not be identified from model path.
---------------------------------------------------------------------------
CalledProcessError                        Traceback (most recent call last)
Cell In[30], line 1
----> 1 get_ipython().run_cell_magic('bash', '', '\nOUTPUT_PATH=gs://$BUCKET_NAME/$JOB_ID\nMODEL_BINARIES=$OUTPUT_PATH/keras_export/\ngcloud ai-platform versions create v1 \\\n--model $MODEL_NAME \\\n--origin $MODEL_BINARIES \\\n--runtime-version $TFVERSION \\\n--python-version $PYTHONVERSION \\\n--region=global\n')

File /opt/conda/lib/python3.9/site-packages/IPython/core/interactiveshell.py:2478, in InteractiveShell.run_cell_magic(self, magic_name, line, cell)
   2476 with self.builtin_trap:
   2477     args = (magic_arg_s, cell)
-> 2478     result = fn(*args, **kwargs)
   2480 # The code below prevents the output from being displayed
   2481 # when using magics with decodator @output_can_be_silenced
   2482 # when the last Python token in the expression is a ';'.
   2483 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):

File /opt/conda/lib/python3.9/site-packages/IPython/core/magics/script.py:154, in ScriptMagics._make_script_magic.<locals>.named_script_magic(line, cell)
    152 else:
    153     line = script
--> 154 return self.shebang(line, cell)

File /opt/conda/lib/python3.9/site-packages/IPython/core/magics/script.py:314, in ScriptMagics.shebang(self, line, cell)
    309 if args.raise_error and p.returncode != 0:
    310     # If we get here and p.returncode is still None, we must have
    311     # killed it but not yet seen its return code. We don't wait for it,
    312     # in case it's stuck in uninterruptible sleep. -9 = SIGKILL
    313     rc = p.returncode or -9
--> 314     raise CalledProcessError(rc, cell)

CalledProcessError: Command 'b'\nOUTPUT_PATH=gs://$BUCKET_NAME/$JOB_ID\nMODEL_BINARIES=$OUTPUT_PATH/keras_export/\ngcloud ai-platform versions create v1 \\\n--model $MODEL_NAME \\\n--origin $MODEL_BINARIES \\\n--runtime-version $TFVERSION \\\n--python-version $PYTHONVERSION \\\n--region=global\n'' returned non-zero exit status 1.

The code I ran:

%%bash

OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_ID
MODEL_BINARIES=$OUTPUT_PATH/keras_export/
gcloud ai-platform versions create v1 \
--model $MODEL_NAME \
--origin $MODEL_BINARIES \
--runtime-version $TFVERSION \
--python-version $PYTHONVERSION \
--region=global

Thanks for bringing this up. The staff have been notified about the broken lab.

Since you created the notebook on the latest LTS version of TensorFlow, the local version of the model was created with TensorFlow 2.6.5. For deployment with ai-platform, runtime version 2.6 (the current model runtimes are listed here) is a good choice, since it is the closest available option to 2.6.5. This gets the lab working.

Set os.environ["TFVERSION"] = "2.6" instead of os.environ["TFVERSION"] = "2.1" in the notebook.
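For reference, here is a minimal sketch of what the environment-variable cell could look like after that change. The BUCKET_NAME, MODEL_NAME, and PYTHONVERSION values below are placeholders (they are assumed to be set elsewhere in the lab notebook); only the TFVERSION line actually changes. Variables set with os.environ in a Python cell are inherited by later %%bash cells, so the gcloud command above will pick up the new value:

import os

# Placeholder values — assumptions for illustration; use the ones from your own lab/project.
os.environ["BUCKET_NAME"] = "your-bucket-name"
os.environ["MODEL_NAME"] = "your_model_name"
os.environ["PYTHONVERSION"] = "3.7"   # assumed value used by the lab

# The actual fix: match the AI Platform runtime version to the TF version
# the model was trained with (2.6 is the closest runtime to 2.6.5).
os.environ["TFVERSION"] = "2.6"       # was "2.1" in the original notebook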
