Neural Network Layers

Hi All!

I just finished the Advanced Learning Algorithms course and had a question about neural networks. Throughout the course, we see examples of neural network layers with varying numbers of units, activation functions, compile settings, optimizers, etc. For most of these, such as activation functions and compile settings, it seems reasonably clear how one would go about determining the optimal choice for an ML model. However, I do not have a clear picture of how to determine the number of units in a layer, or the number of layers needed, for identifying whether an image is a 0 or a 1. How would someone go about determining those two factors?

In addition, specifically for neural networks addressing computer vision problems, I am confused about what information the nodes in each layer contain. The argument I have gathered from the courses is that these models build up smaller pieces of information in order to make larger inferences in subsequent layers: the first layer looks at lines, the second looks at larger shapes, the third looks at objects, and so on. However, where in these models is this sort of behavior specified? Do the models do this automatically because it is statistically the easiest path forward? If we don’t specify this behavior, how do we know that the nodes are operating in this way? Is there a way to analyze how each of these nodes individually functions?

There are a lot of questions here. If you do the TensorFlow Developer and TensorFlow: Advanced Techniques specializations, and the MLOps specialization, you will get some of the answers.

Generally, the number of layers in a network is chosen by starting from a similar model and then using trial and error; there is no definitive answer. The TensorFlow: Advanced Techniques specialization presents techniques for visualizing which details of an image each layer picks up. The models are built to learn automatically, but you can check what they have learned.
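To make the trial-and-error part concrete, here is a minimal sketch in plain NumPy (not TensorFlow) that trains a one-hidden-layer network at a few candidate widths on a made-up 0-vs-1 style image task and compares validation accuracy. The data, candidate widths, and hyperparameters are all invented for illustration; in practice you would do the same loop with Keras models on your real dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Toy 8x8 'digit' images flattened to 64 pixels: class 1 has a
    bright stroke in some fixed pixels, class 0 is just noise."""
    X = rng.normal(0.0, 0.3, size=(n, 64))
    y = rng.integers(0, 2, size=n)
    stroke = np.zeros(64)
    stroke[27:37] = 1.0                    # arbitrary "stroke" pixels
    X[y == 1] += stroke
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_mlp(X, y, hidden, epochs=300, lr=0.5):
    """One ReLU hidden layer + sigmoid output, full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, size=(d, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0.0, 0.1, size=hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.maximum(0.0, X @ W1 + b1)   # hidden activations
        p = sigmoid(h @ w2 + b2)           # predicted P(class 1)
        g = (p - y) / n                    # dLoss/dlogit for cross-entropy
        gh = np.outer(g, w2) * (h > 0)     # backprop through the ReLU
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
        w2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
    return W1, b1, w2, b2

def accuracy(params, X, y):
    W1, b1, w2, b2 = params
    p = sigmoid(np.maximum(0.0, X @ W1 + b1) @ w2 + b2)
    return float(np.mean((p > 0.5) == y))

X_train, y_train = make_data(400)
X_val, y_val = make_data(200)

results = {}
for hidden in (2, 8, 32):                  # candidate layer widths to compare
    params = train_mlp(X_train, y_train, hidden)
    results[hidden] = accuracy(params, X_val, y_val)

best = max(results, key=results.get)
print(results, "-> best width:", best)
```

The point is the outer loop, not the network: you treat the width (and, in the same way, the number of layers) as a hyperparameter, fit each candidate, and keep the one that does best on held-out data.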

You should keep on learning!


Andrew’s description of the first layers detecting edges and then combining them into higher-level concepts is really just meant to give an intuitive understanding of the overall process.

But the layer weights do not necessarily follow any specific pattern.

In general each layer learns combinations of the features on its input, and summarizes them in a simpler form on the output.

If the lowest-cost solution involves a layer learning to detect edges, that’s what will be learned. But in practice I’ve never seen that happen.

Edge detection as a specific tool is more closely related to image processing than to machine learning.


How do we know whether edge detection is what occurs in the lowest-cost solution? Is that because we train neural nets to detect edges first, then neural nets to detect objects, and so on? Or does this all happen in one go?

You don’t know.

No, we don’t have any way to specify what the NN layers learn.
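You can’t dictate what a layer learns, but you can inspect it after training. Here is a minimal NumPy sketch of the two usual moves; the 8x8 image shape and the four hidden units are made up for illustration, and the random `W1` stands in for a trained weight matrix (in Keras you would read it from `model.layers[0].get_weights()`, or build a second `Model` whose output is an intermediate layer to read activations).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained first-layer weight matrix: 64 input pixels
# (an 8x8 image) feeding 4 hidden units.
W1 = rng.normal(0.0, 0.1, size=(64, 4))

# 1) Each column of W1 is one unit's "template": reshape it to the
#    image shape to see which pixels excite or inhibit that unit.
templates = [W1[:, j].reshape(8, 8) for j in range(W1.shape[1])]

# 2) Probe the units: feed an image in and read the hidden activations
#    directly, instead of only looking at the final output.
image = rng.normal(0.0, 1.0, size=64)
activations = np.maximum(0.0, image @ W1)   # ReLU hidden activations
strongest_unit = int(np.argmax(activations))
```

If a unit’s template looks like an edge, that unit has learned something edge-like; if it looks like noise, it hasn’t. That is how you check after the fact, even though nothing in the model specifies it beforehand.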
