I think my answer should be correct

Assignment link removed, as it is against community guidelines. Please refer to the FAQ section for a better understanding of how to use the forum.

Screenshot removed, as posting graded quiz answers (correct or incorrect) is against community guidelines.

Right here it says that my answer is incorrect. Now, I understand that larger datasets require more time, but isn't it true that when I plug in a better GPU/CPU, it takes less time than with a slower GPU/CPU?

And what should the other answer be, and why?

And also
Screenshot removed, as posting graded quiz answers (correct or incorrect) is against community guidelines.

Right here it says I didn't select all the correct answers. I know the third choice is marked correct, but isn't it unrelated to the question?
And if it is related, then why?

@seif_sherif sorry, I'm having a little trouble understanding what you are saying about the first question. But as to the second, they are asking about 'what are the advancements of deep learning'.

The sort of 'hidden' part of the question is that they mean 'compared to traditional machine learning'. That is an 'unstated context'. I guess it presumes you have taken MLS (or similar, because I didn't, or at least not here).

*Looking at this again: yes, with a GPU it will be faster, but it is still a matter of scale. A farmer may buy a faster, fancier tractor, but tilling a large field is still going to take more time than tilling a much smaller one (just less time than before).
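To put rough numbers on the tractor analogy, here is a toy sketch. All throughput figures below are made up purely for illustration, not real benchmarks:

```python
# Toy model of training time: time ≈ (samples per epoch) / (samples per second).
# The rates below are hypothetical, chosen only to illustrate the scaling argument.
def epoch_time(num_samples, samples_per_sec):
    return num_samples / samples_per_sec

small_set, large_set = 10_000, 1_000_000      # dataset sizes (samples)
cpu_rate, gpu_rate = 1_000, 20_000            # hypothetical samples/sec

# The GPU is 20x faster on the same dataset...
print(epoch_time(large_set, cpu_rate))        # 1000.0 seconds on CPU
print(epoch_time(large_set, gpu_rate))        # 50.0 seconds on GPU

# ...but on the same hardware, a 100x larger dataset still takes 100x longer.
print(epoch_time(small_set, gpu_rate))        # 0.5 seconds
print(epoch_time(large_set, gpu_rate))        # 50.0 seconds
```

So better hardware shrinks the constant, but the time still grows with dataset size.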


For your first query, it is:

Idea ==> Code ==> Experiment

so your fourth option mentions that faster experimentation helps you iterate on an idea more quickly.

Also, a larger dataset doesn't by itself make iteration faster; a model trained on a larger dataset may or may not iterate faster depending on the architecture you create and the parameters you use.

For your second query, the question wants you to select all the options which play a major role in the better (higher) performance of deep learning algorithms, and the two options you chose are correct. I cannot directly say whether the third option you mention is correct or incorrect, but if you go back through the week's videos again, you will see that one of the major roles in a dataset is played by its features: they help the algorithm you create understand the complexity of the dataset, based on how you have selected the feature-specific parameters in your algorithm.

Hope these hints help you find the correct answer. I would also advise you to go through the videos again where Prof. Ng discusses topics related to algorithm performance.


Not until we get some Quantum variant of all this :rofl:.

Quantum, strictly speaking, is not 'traditionally' faster; rather, it is very much 'everything at once'.

Which, oddly, brought the title of this film back to mind just as I typed this. Not related in content, obviously, but I wondered if that is what they were thinking when they came up with the title:


Thank you for your answer. The question itself asks about the case where we get a better GPU/CPU, and if we do, I presume we can run a larger dataset.
Do you mean to say that what makes the iterations faster is only the model, or did I get something wrong?

Actually, the idea of using quantum computing in AI really interests me now that you've mentioned it.
I've read a bit about it but couldn't dive deeper because I don't know where to look.
If you have any idea where I can start, please point me to it :blush:

If iteration speed depended only on a better GPU/CPU and a larger dataset, then people probably wouldn't need to understand the significance of model architecture in relation to features/classes, data spread, and data splits.

I am not stating that the option is incorrect; more precisely, it is incomplete. For the iteration process to be faster, the requirements would be: first, a good CPU/GPU; then, of course, as large a dataset as possible; and then choosing the precise features or classes for the model you want to create, along with parameters that align so the model's algorithm can capture the complexity of the larger dataset.


@seif_sherif I am not sure anyone has worked out the theory of how this might work yet. For one, it is simply 'not that simple'. Quantum is not just 'faster traditional computing'; it is a completely different paradigm.

One of the biggest obstacles I see (as far as I know) is that all neural nets developed so far are essentially 'linear', i.e. sequential, in their order of operation, be it forward prop or back prop. Meaning, you process the weights for one layer, and from there you move on to the next.

Realistically, to make use of the true advantages of Quantum, you cannot do this; as stated, you have to frame your problem in terms of a sort of 'all at once' solution.
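To make that layer-by-layer dependency concrete, here is a minimal NumPy forward-prop sketch (layer sizes and weights are made up): each layer's activations are computed from the previous layer's output, so the loop is strictly sequential and cannot be done 'all at once'.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer network, sizes chosen only for illustration.
layer_sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    a = x
    for W in weights:          # strictly sequential: layer l needs layer l-1's output
        a = np.tanh(a @ W)
    return a

x = rng.standard_normal((1, 4))   # one input example with 4 features
y = forward(x)
print(y.shape)                    # (1, 2)
```

Each pass through the loop consumes the previous layer's output, which is exactly the ordering constraint that does not map naturally onto a quantum 'everything at once' formulation.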

Thus, from the get-go, the entire problem would have to be theoretically reworked. Plus, with the possible exception of IBM, the problem of noise/errors in the solution is a huge one. For each usable qubit, they are finding you need a great deal of supporting hardware just to reduce the error rate, and in some ways this sounds impractical.

While, up front, Quantum might sound like a 'good' solution for AI, in the end we're going to have to solve both the technical problem of scale and the complete rethinking of how we do deep learning. As we do it now, it is just not going to work. There is no 'easy translation'.

However, if you'd like to learn more generally about how Quantum Computing is 'supposed' to work, with all the illusion of the fluff removed, I'd highly recommend Terry Rudolph's Q is for Quantum. The book is free on his website, but I bought the paper copy and read it during a difficult period in my life.

If you are interested in the subject, I am sure you will enjoy it.


Thanks for the link to Terry Rudolph’s website! I just watched his initial 22-minute lecture on so-called “Pete boxes”. As he was going through it, I thought the point was going to be some form of “entanglement”, but that apparently wasn’t the point. I’m not sure what actual quantum mechanism he is describing there, but I’m now both confused and curious to learn more. :laughing:

@paulinpaloalto it is a really good book, Paul, even though you probably already have the math skills to truly understand quantum mechanics.

In my mind, aside from the 'fluff' in the press, it helps you understand 'what is possible' and 'what is not possible'.

Or, again, if we ever design a machine reliable enough, this is what we can expect.

As memory serves, reading this also reminded me of my junior year in high school, when my father landed a high-paying job in Germany, so we moved to Munich and I attended MIS for part of the year. In one of our classes the teacher pulled out a laser (consider this was circa 1988, and we had nothing like that at the public school I previously attended; they couldn't afford such equipment). Seeing Einstein's classic wave/particle duality right in front of my face was pretty cool, and probably part of why I am so weird.

And he does get to all the complexities of 'entanglement' in later chapters, but those are really only the artifacts the press is impressed with: 'spooky action at a distance', etc.

I promise you too, it really is a good read, and a recognition of why this problem is so hard.

I mean, I responded to our friend @seif_sherif. I think it is interesting too, but perhaps 'not so simple'. We don't even have a single true example of 'quantum supremacy' yet, so expectations ought to be tempered a little.
