Two questions about logistic regression task

It’s a good point that testing floating point numbers for equality is tricky: there can easily be different ways to express the same mathematical formula in code that have different rounding behavior. But the implementers of numpy and of the assignment are aware of that issue. Notice that this is the actual assertion that failed in your case:

assert np.allclose(costs, expected_cost), f"Wrong values for costs. {costs} != {expected_cost}"

Notice that it’s not testing for exact equality with the == operator in python: it’s calling the function np.allclose. You can find the documentation for that by googling “numpy allclose” to understand how it works. The tl;dr is that it compares for closeness within a threshold distance, so it tolerates differences in rounding behavior.
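To make that concrete, here is a small demo of the tolerance. The default values rtol=1e-05 and atol=1e-08 come from the numpy documentation; np.allclose(a, b) checks elementwise that |a - b| <= atol + rtol * |b|:

import numpy as np

a = np.array([0.5, 0.25])
b = a + 1e-8                        # differs only in the 8th decimal place

print(np.allclose(a, b))            # True: within the default tolerance
print(np.allclose(a, a + 1e-3))     # False: a third-decimal difference is a real error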

Note that you can always find the actual source for the test cases by clicking “File → Open” and then opening the appropriate file, which is public_tests.py in this case. There’s a topic about this on the DLS FAQ Thread, which is worth a look in general.

Notice that the cost at 0 iterations is correct, so your cost logic is probably fine. It is the second value, the cost after 100 iterations, that is wrong, which points to a problem in your parameter update logic. Note that an error in the third decimal place is not a rounding error: it’s a real error. We’re doing 64-bit floating point here, so rounding errors are on the order of 10^{-16}, although they can accumulate in pathological cases.
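For reference, here is a minimal sketch of one forward pass plus one gradient step. The variable names and shapes mirror the usual layout of this assignment family, but they are assumptions about your code, not the graded API:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def one_step(w, b, X, Y, learning_rate):
    # Assumed shapes: w is (n, 1), X is (n, m), Y is (1, m).
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)    # predictions, shape (1, m)
    dw = np.dot(X, (A - Y).T) / m      # gradient w.r.t. w, shape (n, 1)
    db = np.sum(A - Y) / m             # gradient w.r.t. b, a scalar
    w = w - learning_rate * dw         # the update step itself
    b = b - learning_rate * db
    return w, b

Typical bugs that leave the iteration-0 cost correct but break the later costs: a sign error in the update, a dropped 1/m factor in the gradients, or updating copies of w and b instead of the values your loop actually uses.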

On the question about the shape of X, we are writing general code here: it should work for input data of any size, as long as the input data is self-consistent. Some of the test cases here use smaller datasets than the “real” image inputs to make the debugging simpler. Are you sure you are not referencing global variables anywhere in your optimize logic? You should also not be hard-coding 12288 as the size anywhere; see the sketch below.
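As a concrete illustration, the dimension can always be read off the data itself. The function name initialize_params is hypothetical, not part of the graded API:

import numpy as np

def initialize_params(X):
    # X is assumed to have shape (n_features, m_examples), as in this course.
    n = X.shape[0]          # works for any dataset, not just 64 * 64 * 3 = 12288
    w = np.zeros((n, 1))    # one weight per input feature
    b = 0.0
    return w, b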