How to test my own implementation correctness?

Hello. I’ve done the course and am trying to implement everything by myself from scratch. What is the best way to make sure all my calculations are OK? I was thinking maybe of some small input data, and a way to check that all parameters along the way are OK. Sort of a unit test, so I could check whether my W, b, cost, derivatives, etc. are as expected. It could even be manual, just to see if it’s right. Does any resource like that exist, either here or anywhere else?

hi @wst, that’s seriously a great question, and it is my conviction that testing of AI models is still an underrated topic. There is some information on user testing of AI implementations (mainly borrowing from traditional user testing and from testing standards for data science projects such as CRISP-DM), but not much on unit testing; even for user testing, the information and effort spent on a proper methodology are not there yet. Hopefully this will change in the coming years.

For the smaller projects I am doing, I do exactly the same as you do, i.e. make small calculation steps and with simple test data check the input and output.
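To make that concrete, here is a minimal sketch of such a check in NumPy. The `forward` and `cost` functions here are hypothetical stand-ins for your own implementations; the tiny input data is chosen so that Z, the activation, and the cost are easy to verify by hand.

```python
import numpy as np

# Hypothetical stand-ins for your own implementation:
# a single linear unit with sigmoid activation and cross-entropy cost.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W, b, X):
    # Z = W X + b, A = sigmoid(Z)
    return sigmoid(W @ X + b)

def cost(A, Y):
    # Cross-entropy cost averaged over m examples
    m = Y.shape[1]
    return -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m

def test_forward_and_cost():
    # Tiny data chosen so intermediate values are verifiable by hand:
    # zero weights and bias -> Z = 0 -> A = sigmoid(0) = 0.5
    W = np.array([[0.0, 0.0]])
    b = np.array([[0.0]])
    X = np.array([[1.0, 2.0],
                  [3.0, 4.0]])       # 2 features, 2 examples
    Y = np.array([[1.0, 0.0]])

    A = forward(W, b, X)
    np.testing.assert_allclose(A, 0.5)
    # cost = -log(0.5) = log(2), regardless of Y, since A = 0.5 everywhere
    np.testing.assert_allclose(cost(A, Y), np.log(2))

test_forward_and_cost()
```

The same pattern works for derivatives: pick inputs where you can do the calculus by hand, and assert that your code reproduces the numbers.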

There are some more generic functions/libraries in Python with which you can go further in unit testing. Most of these functions you can build/copy/reuse so that they work for your future projects. An interesting article on this is e.g. How to Trust Your Deep Learning Code | Don’t Repeat Yourself (krokotsch.eu)
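As a sketch of the kind of reusable test that style of article describes, one simple generic check is that every parameter actually changes after a single gradient step. The `train_step` below is a hypothetical logistic-regression update, just to keep the example self-contained; a parameter that never moves often points to a wiring bug (e.g. a gradient you computed but forgot to apply).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(W, b, X, Y, lr=0.1):
    # One step of logistic-regression gradient descent (hypothetical example)
    m = X.shape[1]
    A = sigmoid(W @ X + b)
    dZ = A - Y
    dW = dZ @ X.T / m
    db = np.sum(dZ) / m
    return W - lr * dW, b - lr * db

def test_all_parameters_change():
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 5))
    Y = (rng.random((1, 5)) > 0.5).astype(float)
    W, b = rng.normal(size=(1, 3)), np.zeros((1, 1))

    W2, b2 = train_step(W, b, X, Y)
    assert not np.allclose(W, W2), "W did not update"
    assert not np.allclose(b, b2), "b did not update"

test_all_parameters_change()
```

Because the test only looks at “did the parameters move”, you can reuse it almost unchanged across projects, whatever the model architecture is.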

Hope this helps.


There is one concrete error-checking method which Prof Ng teaches in Course 2, called Gradient Checking: you use “finite differences” to verify that your gradient calculations are correct. Please stay tuned for that in Course 2.


Thank you, both. I’ll check the article, and I’m definitely looking forward to Course 2.