Thanks for the advice. It's weird because I've tried different browsers (Chrome and Firefox) and restarting my Wi-Fi, but nothing works. I'm definitely connected to the internet, since I can access other sites like this one; for some reason only the deeplearning.ai notebooks are failing, everything else is fine. I even tried downloading the notebook and authenticating with my personal GCP credentials, and I was able to run it!
Does anyone know if there's a way to "refresh" the state of my deeplearning.ai platform? I already tried clearing the browser cache, of course, but no luck.
I have the same issue. We checked using my friend's account on my laptop, and the error does not occur there.
So my guess is that our accounts were somehow blocked; in my case probably because of the experiments Andrew Ng recommended in the Lesson 1 video (I generated 31 embeddings in a 'for' loop to check similarity, and noticed this error right after).
If you ran similar experiments, could you please confirm?
I think I found the cause of our error: the requests are being rejected either by some proxy logic of the deeplearning.ai platform or by the Vertex AI API itself.
The original error is `{"message": "quota exceed for /google/embedding/textembedding-gecko@001"}`, but because of some custom platform logic in the vertexai library, only the key ("message") is displayed to us.
There is also a 429 ("Too many requests") HTTP response, which confirms the assumption.
Still, it seems a strange reason to block us, because even the original code uses a 'for' loop to get embeddings, and I didn't request many more embeddings in my experiments (31 vectors at most).
I could show this to technical support, but I don't have any contact with them; I wrote 2 messages here (Contact - DeepLearning.AI) but I don't know if I will receive any answer.
So in my opinion the temporary workaround is to use another account, and not to experiment too much so as not to be blocked again, until our accounts are unblocked. I can provide more detailed information if someone from technical support contacts me.
Code for getting data on an entire data set
Most API services have rate limits, so we've provided a helper function (in utils.py) that you can use to wait in between API calls.
If the code were not designed to wait in between API calls, you might not receive embeddings for all batches of text.
This particular service can handle 20 calls per minute. In calls per second, that's 20 calls divided by 60 seconds, or 20/60.
from utils import encode_text_to_embedding_batched

so_questions = so_df.input_text.tolist()
question_embeddings = encode_text_to_embedding_batched(
    sentences=so_questions,
    api_calls_per_second=20/60,
    batch_size=5)
In order to handle limits of this classroom environment, we're not going to run this code to embed all of the data. But you can adapt this code for your own projects and datasets.
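The helper in utils.py isn't shown in the notebook excerpt, but the idea is simple: split the inputs into batches and pause between calls so the call rate stays under the limit. Here is a minimal sketch of that logic under my own assumptions; `encode_batched` and `embed_fn` are hypothetical names, with `embed_fn` standing in for the real Vertex AI embedding call:

```python
import time

def encode_batched(sentences, embed_fn, api_calls_per_second=20/60, batch_size=5):
    """Split sentences into batches and pace API calls to respect a rate limit.

    embed_fn takes a list of strings and returns one embedding per string;
    here it stands in for the real Vertex AI call.
    """
    delay = 1.0 / api_calls_per_second  # seconds to wait between calls
    embeddings = []
    for i in range(0, len(sentences), batch_size):
        batch = sentences[i:i + batch_size]
        embeddings.extend(embed_fn(batch))
        if i + batch_size < len(sentences):  # no need to sleep after the last batch
            time.sleep(delay)
    return embeddings

# Demo with a dummy embedder that maps each sentence to its length.
result = encode_batched(["a", "bb", "ccc"],
                        embed_fn=lambda batch: [[len(s)] for s in batch],
                        api_calls_per_second=1000, batch_size=2)
print(result)  # -> [[1], [2], [3]]
```

With `api_calls_per_second=20/60`, the delay works out to 3 seconds between calls, which matches the stated 20-calls-per-minute limit.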
So it would probably be very useful to draw attention to this at the very beginning of the course, before the first lesson, including a warning that otherwise the account will be blocked from getting more embeddings, with no recovery later (my account is still blocked from getting embeddings).
Today I found that my second account has the same issue even though I ran only the notebook code, and also (e.g., in Lesson 4) it is impossible to save the notebook.
Could you please ask your teammates for the contact details of technical support? I've spent a lot of time finding and describing the issue and its causes, but without any feedback.
This course worked fine, but over the last few days I've stumbled over the same issues described here. run_bg_query throws an error, but the data can be loaded from the CSV, so that's OK. However, calls to model.get_embeddings also return only 'message' and no valid embedding vector. Any idea how I can fix this problem? I have made no changes to the notebook besides uncommenting and running the code to load the data from the CSV.