I was working on the backprop implementation. The instructions say:

- Reapply the same mask D^[1] to dA1.
- Divide dA1 by keep_prob.

Based on these instructions I believed that the appropriate code would be:

```python
dA2 *= D2
dA2 /= keep_prob
```

and

```python
dA1 *= D1
dA1 /= keep_prob
```
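For reference, here is a self-contained sketch of what I expect these two lines to do in the inverted-dropout backward step (the shapes, the seed, and the keep_prob value here are made up for illustration, not taken from the assignment):

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.8  # illustrative value; the assignment may use a different one

# Stand-ins for the values cached during forward prop
A1 = np.random.rand(4, 5)
D1 = (np.random.rand(*A1.shape) < keep_prob).astype(int)  # mask saved in forward prop
dA1 = np.random.randn(*A1.shape)  # upstream gradient flowing into layer 1

# Backward step: reapply the same mask, then rescale
dA1 = dA1 * D1          # zero gradients of the units that were shut down
dA1 = dA1 / keep_prob   # rescale so the expected value stays unchanged
```

The point of the division is that the surviving units were also scaled by 1/keep_prob in the forward pass, so the gradient has to be scaled the same way.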

respectively. However, the autograder returns the following error messages:

```
Error: Wrong output for variable dA2.
Error: Wrong output for variable dA1.
Error: Wrong output for variable dZ1.
Error: Wrong output for variable dW1.
Error: Wrong output for variable db1.
```

My forward prop passed the test with no problem, and I made sure I used `np.random.rand` instead of `np.random.randn`,

so I'm just puzzled as to what I did wrong.
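To show what I mean about the mask creation, here is a minimal sketch of the forward-prop step as I wrote it (the shape and keep_prob value are made up for illustration):

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.8  # illustrative value

# np.random.rand draws uniformly from [0, 1), so comparing against
# keep_prob keeps each unit with probability keep_prob
D1 = np.random.rand(3, 4)
D1 = (D1 < keep_prob).astype(int)

# np.random.randn would draw from a standard normal instead, so the
# same comparison would not keep the intended fraction of units
```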