There was an issue executing responses_4 = multimodal_model.generate_content(contents_video, stream=True): errors were raised when printing the stream results.
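For context, this is roughly the cell in question, as a sketch rather than the exact notebook code: the project, bucket, file and model names below are placeholders, multimodal_model and contents_video follow the lesson's naming, and the try/except around .text is my own addition to surface what the empty chunks actually contain.

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

# Placeholder setup mirroring the lesson notebook (values are not the real ones).
vertexai.init(project="your-project-id", location="us-central1")
multimodal_model = GenerativeModel("gemini-1.5-flash")  # model name is a guess; use whichever the lesson specifies
contents_video = [
    Part.from_uri("gs://your-bucket/your-video.mp4", mime_type="video/mp4"),
    "Describe what happens in this video.",
]

responses_4 = multimodal_model.generate_content(contents_video, stream=True)
for chunk in responses_4:
    try:
        # Streamed chunks can arrive without any text parts (e.g. empty or
        # safety-filtered candidates); accessing .text on those raises ValueError.
        print(chunk.text, end="")
    except ValueError:
        # Print the raw chunk instead of crashing so the failure mode is visible.
        print(f"\n[non-text chunk: {chunk}]")
```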
It's possibly just worth adding that the only real reason the course was originally designed that way was that there weren't as many multimodal models available; now that, for instance, Gemini 1.5 Flash is multimodal, you may not need to jump through as many hoops.
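To make that concrete, the lighter-weight path I have in mind is the google-generativeai SDK with a free AI Studio API key, rather than the full Vertex AI plus billing setup the notebooks use. This is only a sketch under my own assumptions (the file name and prompt are placeholders), not course code.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # free-tier AI Studio key, no Cloud billing setup
model = genai.GenerativeModel("gemini-1.5-flash")

# Larger media such as video goes through the File API and takes a moment to process.
video_file = genai.upload_file(path="sample_video.mp4")
while video_file.state.name == "PROCESSING":
    time.sleep(5)
    video_file = genai.get_file(video_file.name)

response = model.generate_content([video_file, "Describe what happens in this video."])
print(response.text)
```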
Thanks for writing. I understand, of course, that part of the reason for the course design was induction into that side of the ecosystem; I think it's perfectly fair to say, however, that there would have been a combination of reasons.
After all, the notebooks are optional, and that optionality seemed to indicate that billing was a major reason for it (so the attendant factors would have been relevant to the course design decisions).
I may not have worded my shorthand original message perfectly. I think I simply meant that progress is moving very fast now, so people who have no particular desire to jump through those particular hoops just to learn that particular (more or less larger-scale / commercial) use case may be pleasantly surprised to find that it no longer seems strictly necessary: even after just a few months, it seems perfectly possible to complete the course without billing.
TL;DR: s/the only real reason/one obvious reason given the state of progress/
That said, I need to check whether I'm hitting some kind of context limit in L5 with the very large video; I'm getting strange empty responses.
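If it helps anyone checking the same thing, one quick sanity check I plan to try is counting tokens for the same contents before generating. This assumes the notebook's multimodal_model and contents_video objects from L5; what counts as "near the limit" depends on the specific model's input window, so I'll compare against the model card rather than a hard-coded number.

```python
# Rough sanity check for the context-limit suspicion: ask the service how many
# tokens the video-plus-prompt contents amount to before generating.
token_info = multimodal_model.count_tokens(contents_video)
print("total_tokens:", token_info.total_tokens)

# Inspecting finish_reason on the otherwise-empty streamed chunks can also help
# distinguish truncation or safety filtering from a genuine context problem.
for chunk in multimodal_model.generate_content(contents_video, stream=True):
    for candidate in chunk.candidates:
        print("finish_reason:", candidate.finish_reason)
```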