L3 Image generation app: Gradio interface


I am running this lab on my MacBook Pro.

After running this piece of code:
import base64
import io
import gradio as gr
from PIL import Image

# A helper function to convert the base64 string returned by the API
# into a PIL image so it can be displayed
def base64_to_pil(img_base64):
    base64_decoded = base64.b64decode(img_base64)
    byte_stream = io.BytesIO(base64_decoded)
    pil_image = Image.open(byte_stream)
    return pil_image

def generate(prompt):
    output = get_completion(prompt)  # get_completion is defined in an earlier cell
    result_image = base64_to_pil(output)
    return result_image

demo = gr.Interface(fn=generate,
                    inputs=[gr.Textbox(label="Your prompt")],
                    outputs=[gr.Image(label="Result")],
                    title="Image Generation with Stable Diffusion",
                    description="Generate any image with Stable Diffusion",
                    examples=["the spirit of a tamagotchi wandering in the city of Vienna",
                              "a mecha robot in a favela"])


The program runs OK, but two problems occur (tried locally on Safari, remotely on Edge and Chrome):
Problem 1: When I click any of the example sentences, it doesn't appear in the "Your prompt" box; I need to type it manually.
Problem 2: A big red Error message appears in place of the image. I have tested several different prompts in addition to these two examples, with the same result.

All suggestions are welcome!



I am not sure what the issue is here either, but I am on a MacBook Pro and ran into a similar issue. I can't say it is the same one, as you didn't share any error message.

For me, the error comes from the line below, when it tries to decode from UTF-8:

return json.loads(response.content.decode("utf-8"))

I changed it to

return response.content

and also made the change below in base64_to_pil:

def base64_to_pil(img_base64):
    base64_decoded = base64.b64decode(img_base64)
    byte_stream = io.BytesIO(img_base64)  # instead of the decoded value, I passed the input directly
    pil_image = Image.open(byte_stream)
    return pil_image

This line is not required; we can pass the input directly to BytesIO, as above:
base64_decoded = base64.b64decode(img_base64)
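Putting the two changes together, here is a minimal sketch of how the pieces can fit. The endpoint URL, the API key placeholder, and the helper names (get_completion, bytes_to_pil) are assumptions for illustration; the course notebook defines its own versions:

```python
import io

import requests
from PIL import Image

# Hypothetical endpoint and token; the course notebook defines its own values.
API_URL = "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5"
HF_API_KEY = "hf_..."  # placeholder

def get_completion(prompt):
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {HF_API_KEY}"},
        json={"inputs": prompt},
    )
    # If the endpoint returns raw image bytes rather than a JSON body,
    # json.loads(response.content.decode("utf-8")) fails; return the bytes as-is.
    return response.content

def bytes_to_pil(img_bytes):
    # The payload is already binary image data, not base64 text,
    # so no base64.b64decode step is needed.
    return Image.open(io.BytesIO(img_bytes))
```

With this pairing, generate() can feed the output of get_completion straight into bytes_to_pil.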

Besides, your error could also be a model problem; since you have not shared the error message, I am not sure.
If it is something like this:

{"error":"Model runwayml/stable-diffusion-v1-5 is currently loading","estimated_time":20.0}

it is a model issue on the Hugging Face side, I guess, and I assume it is a temporary problem.
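If that is the cause, one workaround is to wait and retry until the model finishes loading. A minimal sketch, where the helper name and the reuse of the estimated_time field are assumptions based on the error payload above:

```python
import time

import requests

def post_with_retry(api_url, headers, payload, max_wait=120.0):
    # Hypothetical helper: retry while the Inference API reports the model
    # is still loading (HTTP 503 with an "estimated_time" hint in the body).
    deadline = time.time() + max_wait
    while True:
        response = requests.post(api_url, headers=headers, json=payload)
        if response.status_code != 503:
            return response
        if time.time() >= deadline:
            raise TimeoutError("model did not finish loading in time")
        # Sleep roughly as long as the service suggests before retrying.
        wait = float(response.json().get("estimated_time", 10.0))
        time.sleep(min(wait, max(deadline - time.time(), 0.0)))
```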

Hi Raja_Sekhar.

First of all, thank you for replying.

Just to update you about this problem: I decided to run the same code in Google Colab and it works fine!

This leads me to think that something is wrong with Gradio in my local environment.

Following is the output of print(os.environ):
environ({'__CFBundleIdentifier': 'com.apple.Terminal', 'TMPDIR': '/var/folders/0b/k2rqhkbd14bf6rxgpm_ltnx40000gn/T/', 'XPC_FLAGS': '0x0', 'LaunchInstanceID': '0F5B1071-56D4-4897-A633-4E90B34E1C17', 'TERM': 'xterm-color', 'SSH_AUTH_SOCK': '/private/tmp/com.apple.launchd.RplAdG6KGN/Listeners', 'SECURITYSESSIONID': '186b3', 'XPC_SERVICE_NAME': '0', 'TERM_PROGRAM': 'Apple_Terminal', 'TERM_PROGRAM_VERSION': '447', 'TERM_SESSION_ID': 'EE1E36B0-8C9D-46F9-8CC6-4DF2AF1D663C', 'SHELL': '/bin/zsh', 'HOME': '/Users/fabiosilva', 'LOGNAME': 'fabiosilva', 'USER': 'fabiosilva', 'PATH': '/Users/fabiosilva/miniforge3/envs/dsa_python/bin:/Users/fabiosilva/miniforge3/condabin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin', 'SHLVL': '1', 'PWD': '/Users/fabiosilva/Downloads/_python/DLAI', 'OLDPWD': '/Users/fabiosilva/Downloads/_python', 'CONDA_EXE': '/Users/fabiosilva/miniforge3/bin/conda', '_CE_M': '', '_CE_CONDA': '', 'CONDA_PYTHON_EXE': '/Users/fabiosilva/miniforge3/bin/python', 'CONDA_SHLVL': '2', 'CONDA_PREFIX': '/Users/fabiosilva/miniforge3/envs/dsa_python', 'CONDA_DEFAULT_ENV': 'dsa_python', 'CONDA_PROMPT_MODIFIER': '(dsa_python) ', 'CONDA_PREFIX_1': '/Users/fabiosilva/miniforge3', 'GSETTINGS_SCHEMA_DIR_CONDA_BACKUP': '', 'GSETTINGS_SCHEMA_DIR': '/Users/fabiosilva/miniforge3/envs/dsa_python/share/glib-2.0/schemas', 'XML_CATALOG_FILES': 'file:///Users/fabiosilva/miniforge3/envs/dsa_python/etc/xml/catalog file:///etc/xml/catalog', 'LC_CTYPE': 'UTF-8', '_': '/Users/fabiosilva/miniforge3/envs/dsa_python/bin/jupyter', 'PYDEVD_USE_FRAME_EVAL': 'NO', 'JPY_SESSION_NAME': '/Users/fabiosilva/Downloads/_python/DLAI/L2_Image_captioning_app.ipynb', 'JPY_PARENT_PID': '2643', 'CLICOLOR': '1', 'FORCE_COLOR': '1', 'CLICOLOR_FORCE': '1', 'PAGER': 'cat', 'GIT_PAGER': 'cat', 'MPLBACKEND': 'module://matplotlib_inline.backend_inline'})

Any idea what I can do to fix that?


You still didn't say which cell you are running or the exact error that you see. There are a couple of errors I posted above; if it is not one of those, then we need to look at the error to understand the issue better.

Raja_Sekhar, I've tried changing return response.content and base64_to_pil as you said, without success.
In the end, what I received is:

Input Payload

"data": [
    ERROR: string, // represents text string of 'Your prompt' Textbox component
]

Response Object

"data": [
    string, // represents image data as base64 string of 'Result' Image component
],
"duration": (float) // number of seconds to run function call

Is there another way to generate and show the error messages?


Sorry, I am not able to help you further. Run the online version of the notebook and your local version side by side, debug the values, and see where yours goes wrong. If there is a specific error message like the ones I posted above, we can try to resolve it; otherwise we can't understand why it is not working.


In the end, I executed pip uninstall gradio and then pip install gradio, and the problem was solved.

Well, thank you Raja_Sekhar for the responses.



I got a 'utf-8' decode error. My problem was solved by following @Raja_Sekhar's suggestion. Thanks!