I noticed some mistranscriptions in the transcript (subtitles, captions) for the course videos. Below are the errata for Week 2; I'll post the errata for the other weeks in their respective groups.
The format for each erratum is as follows: the approximate time in square brackets, then the mistranscribed word or phrase, a hyphen, and finally the correct word or phrase. Like this:
[time] mistranscribed word or phrase - correct word or phrase
Video 1 - TensorFlow implementation
[2:55] you people are not - you'll be able to not
Video 2 - training details
[1:17] Literacy regression - linear regression
Video 4 - choosing activation function
[4:32] fat - flat
[4:56] flats - flat
Video 6 - multiclass
[0:26] protocols - postal codes
Video 7 - softmax
[11:22] new network - neural network
Video 8 - neural network with softmax output
[2:13] good to one - being one
[2:27] super strip - superscript
[2:52] open layer - output layer
[3:07] sigma, radial - sigmoid, ReLU
[3:19] value - ReLU
[3:34] rarely - ReLU
[7:03] caveal - caveat
Video 9 - improved implementation of softmax
[6:46] actress - accurate
[7:40] for_logist - for logistic
Video 10 - classification with multiple outputs
[2:41] neurals - neurons
[3:42] expressively - explicitly
[3:52] to write to - the right tool
Video 12 - additional layer types
[3:09] John Macoun - Yann LeCun
[4:34] learning tosses - learning classes [?]
Video 14 - computation graph
[0:35] deactivation a - the activation a
[0:58] cause function - cost function
[1:30] same as above
[1:42] same as above
[1:59] same as above
[4:00] same as above
[4:22] same as above
[5:27] 2.01 - 2.001
[5:32] 2.02 - 2.002
[14:39] epsilon ball equivalent - epsilon or equivalently
[18:16] punch and 10,000 - 110,000
[18:21] meeting - needing
Video 15 - larger neural network example
[0:09] computation draft - computation graph
[5:06] the map - the math
[6:41] background - backprop