@seif_sherif I am not sure anyone has worked out the theory of how this might work yet. For one, it is simply not that simple: quantum computing is not just ‘faster traditional computing’-- it is a completely different paradigm.
One of the biggest obstacles I see (as far as I know) is that all neural nets developed so far are essentially sequential: be it forward prop or back prop, you process the weights for one layer, and only then move on to the next.
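To make that concrete, here is a minimal sketch of a classical forward pass (the network, its sizes, and the `tanh` activation are all illustrative assumptions, not any particular model):

```python
import numpy as np

# Hypothetical toy network: layer sizes and weights are illustrative only.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]

def forward(x, weights):
    # Each layer needs the previous layer's output before it can run,
    # so this loop is inherently sequential -- one layer at a time.
    for W in weights:
        x = np.tanh(x @ W)
    return x

out = forward(rng.standard_normal(4), weights)
print(out.shape)  # (3,)
```

The point is the loop: layer two cannot start until layer one has finished, which is exactly the step-by-step structure that does not map naturally onto a quantum computation.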
Realistically, to exploit the true advantages of quantum, you cannot do this-- as stated, you have to frame your problem in terms of a sort of ‘all at once’ computation.
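A rough picture of what ‘all at once’ means, using a toy statevector simulation (plain numpy standing in for a real device, purely as an illustration):

```python
import numpy as np

# Toy statevector simulator -- an illustration, not real quantum hardware.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 3
state = np.zeros(2**n)
state[0] = 1.0  # start in the |000> basis state

# Apply H to every qubit: just n gate applications put the register into
# an equal superposition over all 2**n basis states simultaneously.
for q in range(n):
    op = np.array([1.0])
    for i in range(n):
        op = np.kron(op, H if i == q else np.eye(2))
    state = op @ state

print(np.allclose(state, np.full(2**n, 1 / np.sqrt(2**n))))  # True
```

Three gates touch all eight basis states at once-- but a problem only benefits if you can phrase it so that interference concentrates amplitude on the answer, which is precisely the reframing a layer-by-layer neural net does not give you.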
Thus, from the get-go, the entire problem would have to be theoretically reworked. Plus, with the possible exception of IBM, the problem of noise and errors in the results is a huge one. For each usable qubit, they are finding you need a great deal of supporting hardware just to keep the error rate down, and in some ways this sounds impractical.
While, up front, quantum might sound like a ‘good’ solution for AI, in the end we are going to have to solve both the technical problem of scale and completely rethink how we do deep learning-- as we do it now, it is just not going to work. There is no easy translation.
However, if you’d like to learn more generally about how quantum computing is ‘supposed’ to work-- with all the fluff stripped away-- I’d highly recommend Terry Rudolph’s Q is for Quantum. The book is free on his website, but I bought the paper copy and read it during a difficult period in my life.
If you are interested in the subject, I am sure you will enjoy it.