In section 5.2 of C5W2A1 we are given the following equation:
e_{w1B}^{corrected} = \sqrt{|1 - \|\mu_{\perp}\|^2_2|} * \frac{e_{\text{w1B}} - \mu_B} {\|(e_{w1} - \mu_{\perp}) - \mu_B\|_2} \tag{9}
I was unable to understand where this equation came from, so I went to the source paper:
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
Near the bottom of page 6 I found the following equation:
\overrightarrow{\omega} := \nu + \sqrt{1 - \|\nu\|^2_2} * \frac{\overrightarrow{\omega}_B - \mu_B} {\|\overrightarrow{\omega}_B - \mu_B\|_2} \tag{a}
Substituting in the variable names used in the assignment this would become:
e_{w1B}^{corrected} = \sqrt{1 - \|\mu_{\perp}\|^2_2} * \frac{e_{\text{w1B}} - \mu_B} {\|e_{\text{w1B}} - \mu_B\|_2} \tag{b}
As you can see, there are two major differences between equation (9) and equation (b).

In equation (b) the denominator is clearly the norm of the numerator, which is the standard way in linear algebra to rescale a vector to a given length. But e_{\text{w1B}} \neq e_{w1} - \mu_{\perp}, so where does the denominator of equation (9) come from?
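To make the discrepancy concrete, here is a small NumPy sketch using the assignment's definitions (\mu = (e_{w1} + e_{w2})/2, \mu_B and e_{w1B} as projections onto the bias axis, \mu_{\perp} = \mu - \mu_B). The vectors and the 5-dimensional setting are made up for illustration; the point is only that e_{w1B} and e_{w1} - \mu_{\perp} are generally different vectors, so the two denominators differ:

```python
import numpy as np

rng = np.random.default_rng(0)
bias = rng.normal(size=5)   # hypothetical bias axis
e_w1 = rng.normal(size=5)   # hypothetical word vectors
e_w2 = rng.normal(size=5)

def proj(v, u):
    """Orthogonal projection of v onto the direction of u."""
    return (v @ u) / (u @ u) * u

mu = (e_w1 + e_w2) / 2
mu_B = proj(mu, bias)        # bias component of mu
mu_perp = mu - mu_B          # component of mu orthogonal to the bias axis
e_w1B = proj(e_w1, bias)     # bias component of e_w1

# e_w1B lies along the bias axis, while e_w1 - mu_perp generally does not,
# so the denominators of (9) and (b) normalize by different quantities.
print(np.allclose(e_w1B, e_w1 - mu_perp))
```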

In equation (b) there are no absolute value signs around the expression under the square root. In the "Word embedding" section on page 3 of the paper, I read that they normalized each word vector to unit length, as is common. This would mean that for them \|\mu_{\perp}\|^2_2 \leq 1, so the absolute value signs are not necessary. It also explains why the square root is there at all: it is needed to ensure that the equalized vector of equation (a) also has unit length. In the assignment this normalization is not done, which makes me wonder whether the algorithm is really valid without it.
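The unit-length argument can be checked numerically. In the sketch below (hypothetical 5-dimensional vectors; proj and unit are helper functions I am introducing, not from the assignment), the inputs are normalized as in the paper, so 1 - \|\mu_{\perp}\|^2_2 \geq 0 and the equalized vector from equation (a) comes out with norm 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(v):
    """Rescale v to unit length."""
    return v / np.linalg.norm(v)

def proj(v, u):
    """Orthogonal projection of v onto the direction of u."""
    return (v @ u) / (u @ u) * u

bias = unit(rng.normal(size=5))
# Paper's setting: word vectors normalized to unit length.
e_w1, e_w2 = unit(rng.normal(size=5)), unit(rng.normal(size=5))

mu = (e_w1 + e_w2) / 2          # ||mu|| <= 1 (average of two unit vectors)
mu_B = proj(mu, bias)
mu_perp = mu - mu_B              # hence ||mu_perp||^2 <= 1

radicand = 1 - np.linalg.norm(mu_perp) ** 2
assert radicand >= 0             # no absolute value needed here

# Equalize per equation (a): mu_perp plus a rescaled bias component.
e_w1B = proj(e_w1, bias)
e_corrected = mu_perp + np.sqrt(radicand) * unit(e_w1B - mu_B)
print(np.linalg.norm(e_corrected))  # ≈ 1.0
```

Since mu_perp is orthogonal to the bias axis and the corrected bias component has squared length exactly 1 - \|\mu_{\perp}\|^2_2, the Pythagorean theorem gives \|e_{corrected}\| = 1. Without the unit-length normalization of the inputs, nothing guarantees the radicand is nonnegative, which seems to be exactly what the extra absolute value in equation (9) papers over.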
I look forward to hearing your thoughts on this.