Problems running BridgeTower locally in Multimodal RAG without PredictionGuard

I need help. I am currently studying the "Multimodal RAG: Chat with Videos" course.

The course uses the bridgetower-large-itm-mlm-itc model through PredictionGuard. I am trying to follow all the examples on my local laptop and am currently on the chapter L4_Multimodal Retrieval from Vector Stores. My problem is that the course calls bridgetower-large-itm-mlm-itc through PredictionGuard, and I do not have an API key for it. I searched Hugging Face and found BridgeTower/bridgetower-large-itm-mlm-itc, but the next problem I ran into is how to write a function that uses that model instead. This is the course's helper:

# Helper function to compute the joint embedding of a prompt and a
# base64-encoded image through PredictionGuard.
def bt_embedding_from_prediction_guard(prompt, base64_image):
    # Get the PredictionGuard client
    client = _getPredictionGuardClient()
    message = {"text": prompt}
    # Attach the image, if one was provided
    if base64_image is not None and base64_image != "":
        if not isBase64(base64_image):
            raise TypeError("image input must be in base64 encoding!")
        message["image"] = base64_image
    # Request the joint embedding from the hosted model
    response = client.embeddings.create(
        model="bridgetower-large-itm-mlm-itc",
        input=[message],
    )
    return response["data"][0]["embedding"]

Can you suggest how I should modify the function to successfully use bridgetower-large-itm-mlm-itc locally?
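Here is a sketch of the direction I am considering, assuming the `transformers` library's `BridgeTowerProcessor` and `BridgeTowerForContrastiveLearning` classes, where the `cross_embeds` output is the fused text-image embedding. The blank placeholder image for text-only input is my own workaround, not something the model card specifies, and the function name `bt_embedding_local` is mine:

```python
import base64
import io

import torch
from PIL import Image
from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning

MODEL_ID = "BridgeTower/bridgetower-large-itm-mlm-itc"

# Load once at module level so repeated calls reuse the weights.
processor = BridgeTowerProcessor.from_pretrained(MODEL_ID)
model = BridgeTowerForContrastiveLearning.from_pretrained(MODEL_ID)
model.eval()


def bt_embedding_local(prompt, base64_image):
    """Compute a joint text-image embedding with a locally loaded
    BridgeTower model, mirroring the PredictionGuard helper's signature."""
    if base64_image is not None and base64_image != "":
        image_bytes = base64.b64decode(base64_image)
        image = Image.open(io.BytesIO(image_bytes)).convert("RGB")
    else:
        # BridgeTower's forward pass needs pixel values, so for text-only
        # input we feed a blank placeholder image (an assumption/workaround).
        image = Image.new("RGB", (224, 224), (128, 128, 128))
    inputs = processor(images=image, text=prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # cross_embeds holds the fused text+image representation.
    return outputs.cross_embeds[0].tolist()
```

Since the signature matches `bt_embedding_from_prediction_guard(prompt, base64_image)`, would swapping this in for the course notebook's calls be the right approach, or is there a better way?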