I am having trouble transitioning to the Colaboratory environment. I was using Anaconda Navigator, but at a glance Colaboratory made more sense. However, I am now lost. I have already asked for an extension, not because of the material, but because this environment is so confusing to me.
Fundamentally, I feel like I am flying blind because I cannot see a file manager for the virtual machine. I have one ipynb file for the assignment, and I have juggled it back and forth between running on the Colaboratory platform and a desktop setup without Anaconda Navigator. I also wasted a great deal of time trying to get the run-locally-on-Colab workflow to work as advertised.
I have forged my way past create_train_val_dirs(), and now that I am writing the split_data function, I know the technique well, having done it numerous times in R. What I do not understand is what the template-provided arguments are supposed to mean, specifically SOURCE_DIR, or where they come from.
It seems like the instructions are more about process and less about architecture. What is wrong with me? I do not see other questions about this in the discussion.
I was looking for a little more explanation where I do not have it. For example, my wild guess is that SOURCE_DIR is the same as source_path, the location where the files reside after download. Why doesn't it say this anywhere in the instructions? It seems like a basic piece of information, yet it is left as an assumption. Is this correct?
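To make that guess concrete, here is roughly what I imagine split_data is supposed to do. Only SOURCE_DIR appears in the template I have; every other argument name and every line of logic below is my own assumption, not anything from the instructions:

# My rough mental model of split_data; names other than SOURCE_DIR are guesses.
import os
import random
import shutil

def split_data(SOURCE_DIR, TRAINING_DIR, VALIDATION_DIR, SPLIT_SIZE):
    # Skip zero-length files, which would otherwise break training.
    files = [f for f in os.listdir(SOURCE_DIR)
             if os.path.getsize(os.path.join(SOURCE_DIR, f)) > 0]
    # Shuffle, then copy the first SPLIT_SIZE fraction to training and the rest to validation.
    shuffled = random.sample(files, len(files))
    cutoff = int(len(shuffled) * SPLIT_SIZE)
    for f in shuffled[:cutoff]:
        shutil.copyfile(os.path.join(SOURCE_DIR, f), os.path.join(TRAINING_DIR, f))
    for f in shuffled[cutoff:]:
        shutil.copyfile(os.path.join(SOURCE_DIR, f), os.path.join(VALIDATION_DIR, f))

If SOURCE_DIR really is just the post-download location, the rest is mechanical, but I would like confirmation before committing to it.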
Please understand that I have been working on this for at least a week, and at this point I need some help. I have no confidence that reading one more scrap of documentation will make it all come to light. Since you have such expertise, could you summarize the upshot of your recommended reading?
Though I am still not clear about your particular issue, here is a general idea of how to approach the assignments.
You mostly just need to write the functions; your code goes between these markers:
### START CODE HERE
....
....
### END CODE HERE
You need to read the hints to understand what you need to code.
After every graded function there is a test cell (# Test your XYZ function). It tests your function by fetching inputs from the given directories, so you do not have to think about the directories yourself.
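For example, a test cell usually looks something like the sketch below. The directory paths here are only placeholders to show the shape of the call; your own notebook already defines the real ones, so do not copy these values:

# Illustrative test cell only -- the real one in the notebook defines the actual paths.
import os

CAT_SOURCE_DIR = "/tmp/PetImages/Cat/"                        # placeholder source directory
TRAINING_CATS_DIR = "/tmp/cats-and-dogs/training/cats/"       # placeholder training directory
VALIDATION_CATS_DIR = "/tmp/cats-and-dogs/validation/cats/"   # placeholder validation directory

# Test your split_data function
split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, VALIDATION_CATS_DIR, 0.9)
print(len(os.listdir(TRAINING_CATS_DIR)), "images copied to training")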
I think we are both mistaken, because there is no single answer to the question we are debating. The code clearly assigns both "/tmp/cats-and-dogs" and "/tmp/PetImages/". As far as the logic of the code goes, that amounts to a 'bug'.
Dear Nilosree,
I could not make up my mind, so I downloaded the notebook to my Google Drive and opened the ipynb project in Colab. When I click the hyperlink at /tmp/PetImages/, I get some action in my file manager. Now I feel more like I am dealing with a full deck.
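For the record, listing the contents from a code cell also works for me when the file browser panel is not visible; this is just my own workaround, not something from the instructions:

# My own workaround for "seeing" the virtual machine's files from the notebook.
import os

for name in sorted(os.listdir("/tmp")):
    print(name)  # PetImages shows up here once the download/extract cell has run

# The same thing with a shell command in a Colab cell:
# !ls -R /tmp/PetImages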
I am sorry for being a pain. Thanks for your effort to help.
Hi, I am trying to launch my app in Colab, following the instructions from the Large Language Models with Semantic Search course.
So I started with:
!pip install cohere
!pip install weaviate-client
then:
Then I got a NotFound error when I ran the next code:
" ââimport os
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env fileââ"
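In case it helps: my untested guess is that the Colab VM simply has no .env file, so there is nothing for load_dotenv to find and the API keys never get set. The sketch below is what I plan to try next; the variable names are placeholders for whatever names the course notebook actually reads:

# Untested idea: write the keys into a .env file on the Colab VM before loading it.
# COHERE_API_KEY / WEAVIATE_API_KEY are placeholder names, not confirmed from the course.
import os
from dotenv import load_dotenv

with open("/content/.env", "w") as f:
    f.write("COHERE_API_KEY=your-cohere-key-here\n")
    f.write("WEAVIATE_API_KEY=your-weaviate-key-here\n")

_ = load_dotenv("/content/.env")        # point load_dotenv at the file explicitly
print("COHERE_API_KEY" in os.environ)   # quick check that the key is now visible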
Were you able to get the code for the grader cell working?
If not, please refer to this link. It is a detailed explanation of how to work through the two grader cells: creating your root_path and the split_data section.
If you have any doubts, please ask. I agree that some assignments can be nerve-wracking; whenever that happens, take a break.
Go one grader cell at a time, and take a break once you have the desired output.