Implementing a CNN using NumPy

How do we apply the CNN built in NumPy to our own image classification dataset?

Please do the 1st assignment in Course 4, Week 1, where you'll implement both the forward and backward passes of the conv and pooling layers.
Once you have the conv layer, use it for your model.

In reality, you should stick with TensorFlow's Conv layer, since it can use the GPU. The assignment just helps you get familiar with the conv and pooling layers.

Yes, I have completed the assignment, but I'm not sure how to use it in my model.

Here’s Conv2D and an example

Is there any example in NumPy?


Sorry. I don’t know.

That would be exceedingly difficult. It’s why tools like TensorFlow and PyTorch were created.

I don’t know of any worked examples of doing it in NumPy. But Prof Ng lays out examples of different CNN architectures in the lectures, and C4 W1 A1, as Balaji says, gives you the new tools you need to implement those: an implementation of conv2d and pooling (either max or average). So then it’s just a slightly more complicated version of what we saw in Course 1 Week 4: you put those components together into a series of layers.

The typical pattern is several conv layers followed by a pooling layer, rinse and repeat, until you reach a point at which you flatten out the output of the last conv or pooling layer and then add a couple of FC layers (you can use the NumPy code from C1 W4 for that piece). The only item on that list that we don’t yet know how to do in NumPy is the “flatten”, but once you understand what that does, it shouldn’t be that complicated to build it yourself. It would just be an appropriate invocation of np.reshape, as discussed on this thread.
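To make that assembly concrete, here’s a minimal sketch of such a forward pass in plain NumPy. The `conv_forward` and `pool_forward` helpers below are simplified stand-ins for the functions you write in C4 W1 A1 (stride 1, no padding, non-overlapping max pooling), and all the layer sizes are made-up examples:

```python
import numpy as np

def conv_forward(A_prev, W, b):
    # Simplified conv: stride 1, "valid" (no) padding.
    # A_prev: (m, H, W_in, C_in), W: (f, f, C_in, C_out), b: (1, 1, 1, C_out)
    m, H, Wd, C_in = A_prev.shape
    f, _, _, C_out = W.shape
    H_out, W_out = H - f + 1, Wd - f + 1
    Z = np.zeros((m, H_out, W_out, C_out))
    for i in range(H_out):
        for j in range(W_out):
            patch = A_prev[:, i:i + f, j:j + f, :]          # (m, f, f, C_in)
            # Contract the (f, f, C_in) dims of each patch against the filters.
            Z[:, i, j, :] = np.tensordot(patch, W, axes=([1, 2, 3], [0, 1, 2]))
    return Z + b

def pool_forward(A_prev, f=2):
    # Max pooling with stride == f (non-overlapping windows).
    m, H, Wd, C = A_prev.shape
    H_out, W_out = H // f, Wd // f
    A = np.zeros((m, H_out, W_out, C))
    for i in range(H_out):
        for j in range(W_out):
            A[:, i, j, :] = A_prev[:, i*f:(i+1)*f, j*f:(j+1)*f, :].max(axis=(1, 2))
    return A

def relu(Z):
    return np.maximum(0, Z)

# Made-up example: a batch of 4 RGB images, 28x28.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 28, 28, 3))

# Conv layer: 8 filters of size 3x3 -> (4, 26, 26, 8)
W1 = rng.standard_normal((3, 3, 3, 8)) * 0.01
b1 = np.zeros((1, 1, 1, 8))
A1 = relu(conv_forward(X, W1, b1))

# Pool -> (4, 13, 13, 8)
P1 = pool_forward(A1, f=2)

# Flatten -> (4, 13*13*8), then one FC layer as in C1 W4.
F = P1.reshape(P1.shape[0], -1)
W2 = rng.standard_normal((F.shape[1], 10)) * 0.01
b2 = np.zeros((1, 10))
logits = F @ W2 + b2      # (4, 10): one score per class per image
```

In a real model you’d plug in the full conv/pool functions from the assignment (with stride and padding arguments) and add a softmax plus cross-entropy loss on top of the logits, exactly as in the C1 W4 code.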

But now the problem is that you’ve got a whole bunch of choices to make: filter size, number of output channels, number of layers, yadda, yadda. But you have to make those decisions in any case, even if you’re using TF/Keras to build the actual model (as shown in C4 W1 A2).
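One way to keep track of those choices, which applies equally whether you wire the layers up by hand in NumPy or hand them to TF/Keras, is to write the architecture down as data. This is just a hypothetical sketch with made-up sizes, not anything from the assignments:

```python
# Hypothetical architecture spec: every entry is a design decision
# (filter size, channel count, pooling window, FC width) that you
# must make yourself in any framework.
architecture = [
    {"type": "conv", "filters": 8,  "f": 3},
    {"type": "pool", "f": 2},
    {"type": "conv", "filters": 16, "f": 3},
    {"type": "pool", "f": 2},
    {"type": "flatten"},
    {"type": "fc",   "units": 64},
    {"type": "fc",   "units": 10},   # one output unit per class
]
```

Then your forward pass can be a loop over this list, dispatching on `"type"`, which makes it easy to experiment with different depths and filter sizes.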

Of course you’ll also need to build the backprop part if you’re doing this whole thing by hand. You have most of the pieces from C1 W4 A1 and C4 W1 A1, but based on all that you can now get a sense of why Tom says it’s difficult. :scream_cat::nerd_face: That’s maybe the biggest simplification you get from using a framework: the backprop side of things is just magically handled for you “under the covers” by TensorFlow or PyTorch or your framework of choice. All you need to do is put together the forward propagation side based on the building blocks they give you.
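One small piece of the backprop story that is genuinely easy: the backward pass of the “flatten” step is just the inverse reshape. A tiny sketch with made-up sizes, assuming the last pooling layer produced a `(4, 13, 13, 8)` activation:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((4, 13, 13, 8))   # output of the last pooling layer

# Forward: flatten to (m, n) for the FC layers.
F = P.reshape(P.shape[0], -1)

# Backward: the gradient arriving from the FC layers has F's shape;
# restoring the pooled shape is just the inverse reshape.
dF = rng.standard_normal(F.shape)
dP = dF.reshape(P.shape)
```

The conv and pool backward passes from C4 W1 A1 are the hard part; this reshape trick just glues them to the FC backprop from C1 W4.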

Well, actually, maybe you could say there is a worked example of building this in numpy: look at the source code for TF or PyTorch. :laughing:

I’ve never tried that, but I’m pretty sure it’s all Open Source. You can bet it’s a lot of lines of code! :nerd_face: