Hi.

While writing the homework I found what looks like a mistake in how floats are compared…

The test at **optimize function** is comparing this:

AssertionError: Wrong values for costs. [5.801545319394553, 0.31119723638668084] != [5.80154532, 0.31057104]

In practice it seems to be doing `==` number by number, but those numbers are floats.

The other question is about testing the ‘model’: the shape of X_train is just (4,7), not a sequence of images… so I can’t continue with the homework beyond this point…

What can i do?

BR!

It’s a good point that testing for equality between floating point numbers is a bit tricky: there can easily be different ways to express the same mathematical formula in code that have different rounding behavior. But the implementors of numpy and of the assignment are aware of that issue. Notice that this is the actual assertion that failed in your case:

`assert np.allclose(costs, expected_cost), f"Wrong values for costs. {costs} != {expected_cost}"`

Notice that it’s not testing for exact equality with the `==` operator in Python: it’s calling the function `np.allclose`. You can find the documentation for that by googling “numpy allclose” to understand how it works. The tl;dr is that it compares for closeness with a threshold distance, so it handles different rounding behavior.
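A quick sketch of how this tolerance check behaves, using the values from the error message above (the second “close” array is made up just to illustrate a passing case):

```python
import numpy as np

expected = np.array([5.80154532, 0.31057104])

# The values from the failed assertion: the second entry is off by
# about 6e-4, far outside allclose's default tolerances
# (rtol=1e-05, atol=1e-08), so this is reported as a real error.
wrong = np.array([5.801545319394553, 0.31119723638668084])
print(np.allclose(wrong, expected))   # False

# A value that differs only by rounding noise passes the check.
close = np.array([5.8015453199, 0.3105710399])
print(np.allclose(close, expected))   # True
```

So `allclose` tolerates rounding differences but still catches genuinely wrong values.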

Note that you can always find the actual source for the test cases by clicking “File → Open” and then opening the appropriate file, which is `public_tests.py` in this case. There’s a topic about this on the DLS FAQ Thread, which is worth a look in general.

Notice that the cost at 0 iterations is correct, so that says your cost logic is correct. But it is the second value after 100 iterations that is wrong. So that means that there is probably something wrong with your update parameters logic. Note that an error in the third decimal place is not a rounding error: it’s a real error. We’re doing 64 bit floating point here, so rounding errors are on the order of 10^{-16}, although they can accumulate in pathological cases.
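To put some numbers on that magnitude argument, here is a small sketch showing why a third-decimal-place discrepancy cannot be rounding:

```python
import numpy as np

# Machine epsilon for 64-bit floats: the relative scale of
# pure rounding noise, roughly 2.22e-16.
eps = np.finfo(np.float64).eps
print(eps)

# The observed discrepancy in the second cost value is ~6e-4,
# about 12-13 orders of magnitude larger than rounding noise,
# so it must come from a logic error, not from float rounding.
discrepancy = abs(0.31119723638668084 - 0.31057104)
print(discrepancy / eps)  # enormous ratio
```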

On the question about the shape of X, we are writing general code here. It should work for any sized input data, as long as the input data is self-consistent. Some of the test cases here use smaller datasets than the “real” image inputs to make the debugging simpler. Are you sure you are not referencing global variables anywhere in your `optimize` logic? You should also not be hard-coding 12288 as the size anyplace.

Ok, thank you. I reloaded everything and fixed the code, and now the tests run ok.

The last question, about testing the ‘model’: the test itself generates X as a single matrix (not a series of ‘images’)… but the model is supposed to work with a sequence of images, not just one. So I’m stuck on this point.

I’m not sure I understand your point. The test case constructs an input X matrix. That should be fine. The point is that each column of X is one sample, right? It’s perfectly correct if X has only one column, meaning only one input sample. But the particular test does have more than one input sample. Your code should be general and should be able to handle any number of columns in X.
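To make the “each column is one sample” point concrete, here is a minimal sketch (the function name `predict_linear` is hypothetical, not from the assignment) showing that vectorized code handles any number of columns without modification:

```python
import numpy as np

def predict_linear(w, b, X):
    # Hypothetical helper: w has shape (n_features, 1) and X has
    # shape (n_features, m), where m is the number of samples.
    # The matrix product works for any m, including m == 1.
    return np.dot(w.T, X) + b

w = np.random.randn(7, 1)
b = 0.5
X_small = np.random.randn(7, 4)   # 4 samples of 7 features each
X_one   = np.random.randn(7, 1)   # a single sample
print(predict_linear(w, b, X_small).shape)  # (1, 4)
print(predict_linear(w, b, X_one).shape)    # (1, 1)
```

The same code runs on a (7, 4) test matrix or on the full (12288, m) image dataset.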

The other point is that the images are “unrolled” or “flattened” into a matrix as well. So it’s just a question of what the dimensions of X are. It’s a matrix in either case, right?

Here’s a thread which explains how the “flattening” of the images from 4D arrays into 2D matrices works.
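For reference, the flattening idiom itself is one line of numpy; this is a sketch with a tiny made-up dataset rather than the course’s real one:

```python
import numpy as np

# Suppose the raw dataset is a 4D array: (m, height, width, channels).
images = np.random.rand(4, 8, 8, 3)   # 4 tiny 8x8 RGB "images"

# Flatten each image into one column, giving shape
# (height * width * channels, m) -- one column per sample.
X = images.reshape(images.shape[0], -1).T
print(X.shape)  # (192, 4)
```

After this step, X is an ordinary 2D matrix, which is why the test’s small matrix and the real image data are handled identically.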