The formula for normalizing the input training data to mu=0 and variance=1 (slide 28) is given as x_norm = (x - mu) / sigma^2, but this looks incorrect: the zero-mean input should be divided by sqrt(sigma^2) = sigma, i.e. the standard deviation, not the variance.
In other words, the correct formula for the normalized inputs should be:
X_{norm} = (X - \mu) / \sigma
My own experiments in NumPy seem to confirm this, but I'd appreciate it if someone could verify.
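
For reference, this is the kind of quick check I ran (a minimal sketch with my own variable names and synthetic data, not code from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with mean ~5 and standard deviation ~3
X = rng.normal(loc=5.0, scale=3.0, size=10_000)

mu = X.mean()
sigma = X.std()

# Dividing the centered data by sigma gives variance ~1, as intended
X_norm = (X - mu) / sigma
print(X_norm.mean(), X_norm.var())  # ~0.0 and ~1.0

# Dividing by sigma^2 (as written on the slide) gives variance ~1/sigma^2 instead
X_bad = (X - mu) / sigma**2
print(X_bad.mean(), X_bad.var())  # ~0.0 and ~1/9, not 1
```

The mean comes out to ~0 either way, but only the division by sigma yields unit variance, since Var(aX) = a^2 Var(X).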