How do I know that the chosen bias and weights are the most efficient ones for gradient descent for every single neuron, given that each neuron performs logistic regression by itself?

Do we train every single neuron to get the best bias and weights? In Prof. Andrew's lectures and labs we simply set the bias and weights, and we don't know what is going on behind the scenes.

It’s the same process as for linear and logistic regression.

The gradients of the cost function are used to perform a method like gradient descent, which finds the weights and biases that give the minimum cost.
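To make that concrete, here is a minimal sketch (not the course's exact code) of gradient descent for a single logistic-regression "neuron" with one input feature; the toy data and learning rate are made up for illustration:

```python
# Gradient descent for one logistic-regression "neuron": f(x) = sigmoid(w*x + b)
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: label is 1 when x >= 2
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0, 0, 1, 1]

w, b = 0.0, 0.0   # arbitrary starting values
alpha = 0.5       # learning rate
m = len(xs)

for _ in range(1000):
    # Gradients of the logistic (cross-entropy) cost:
    #   dJ/dw = (1/m) * sum((f(x) - y) * x)
    #   dJ/db = (1/m) * sum( f(x) - y )
    dw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / m
    db = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / m
    w -= alpha * dw   # step downhill in the cost
    b -= alpha * db
```

After these steps the learned `w` and `b` separate the two classes: the prediction for `x = 0` is below 0.5 and the prediction for `x = 3` is above 0.5. Nobody chooses the final weights by hand; the gradients drive them there.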

I know this, bro. But in this week's material the model has not been trained yet; we simply choose the weights and biases. My question is whether, when I build my own model, I need to train every single neuron, because this week says nothing about how the curve is fit for every neuron in every layer.

Week 2 of the course discusses neural network training.

The great time-saver about TensorFlow and other advanced tools is that they have built-in code for performing backpropagation to compute the gradients. You don’t have to write this code yourself - it is provided automatically within the layer definition.

When you “fit” a TensorFlow model, it automatically performs the optimization and computes the best weight and bias values.
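As a sketch of what that looks like (the layer sizes, toy data, and optimizer settings below are my own illustrative choices, not taken from the course labs):

```python
# fit() runs backpropagation and updates every weight and bias automatically;
# we never set them by hand.
import numpy as np
import tensorflow as tf

# Tiny toy dataset: label is 1 when x > 0.5
X = np.array([[0.1], [0.4], [0.6], [0.9]], dtype="float32")
y = np.array([0, 0, 1, 1], dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="sigmoid"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.1))

# One call trains all neurons in all layers at once.
model.fit(X, y, epochs=100, verbose=0)
```

Note that there is no per-neuron loop anywhere: the built-in backpropagation computes gradients for every weight and bias in the whole model in each training step.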

Hey Abdelmonem Marei,

When training a deep network, our goal is to find optimal weights for **the entire model**.

This means we don't need to estimate the best bias and weights for each neuron individually. Some neurons may even effectively get disabled (i.e. their weights become zero).

We use a cost function to determine a direction for the optimal solution (e.g. the gradient descent follows the path that leads to a lower cost).

We can analyze the gradients of each neuron to reason about the "usefulness" of that particular neuron (i.e. how much attention the network pays to the neuron's outputs for a particular input).

If you have further questions, please feel welcome to ask.

Thank you, bro, for your time and attention. When I started week 2, I found that Prof. Andrew explains my question in the video "Training Details", so now I know the answer.

Glad to see this will be discussed. I have to say I was a bit confused by this same thing. It was even more confounding when I thought it was going to be answered in the Python implementation.

I feel that the logical progression of course 1 was much easier to follow than this course, but either way I’m looking forward to the NN training lessons.