In a neural network architecture, how do we assign the parameters w and b that are used to compute the output through the activation function?
You could read more about this at Activation functions in Neural Networks - GeeksforGeeks
Hope this helps.
The parameters of a neural network (the W and b values) are learned during “training” by using back propagation. That process is driven by a cost or loss function that measures how good or bad the predictions are on a given set of data. The gradients (derivatives) of the cost with respect to the parameters tell you which direction to move them to find a better solution.
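To make that concrete, here is a minimal from-scratch sketch of the idea for a single sigmoid neuron, using hypothetical toy data (the data, learning rate, and iteration count are all illustrative assumptions, not anything from a specific course assignment): the parameters start random, and each step uses the gradients of the cross-entropy cost to nudge w and b toward a better solution.

```python
import numpy as np

# Toy data (assumed for illustration): 100 examples, 2 features,
# labeled by whether the features sum to a positive number.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(size=2) * 0.01   # random initialization of W
b = 0.0                         # bias starts at zero
lr = 0.5                        # learning rate (assumed)

for _ in range(200):
    z = X @ w + b
    a = 1 / (1 + np.exp(-z))    # sigmoid activation gives the output
    # Gradients of the cross-entropy cost w.r.t. w and b
    dz = a - y
    dw = X.T @ dz / len(y)
    db = dz.mean()
    # Gradient descent step: move parameters toward lower cost
    w -= lr * dw
    b -= lr * db
```

After training, `w` and `b` have been “assigned” by the data, not by hand; that is the part TF automates for you.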
In the TF courses they assume you already know this type of thing and just show you how to build networks using TF. If you really want to understand how NNs work, it would be a good idea to take the Deep Learning Specialization (DLS) first. If you take just Course 1 of DLS, you’ll get a solid background in how all this really works.
Thanks so much! I have done unsupervised learning and know what cost functions are.
I only wanted to know how TF calculates the parameters.
TF does it the same way we do it in DLS Course 1: it randomly initializes the parameters and then runs back propagation and some form of Gradient Descent, but all that logic is hidden “under the covers” in TF. You have some control over how the process works, e.g. you can specify the cost function and the optimization method (Adam, RMSprop, SGD …) to use for the training.
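As a minimal sketch of what that looks like in practice (the layer sizes and input shape here are just assumptions for illustration): you declare the architecture, and TF creates and randomly initializes all the W and b values for you when the model is built. The `compile` call is where you pick the cost function and optimizer; `fit` would then run forward prop, back prop, and the optimizer updates automatically.

```python
import tensorflow as tf

# Assumed toy architecture: 4 input features, one hidden layer of 8
# units, one sigmoid output unit. TF creates and randomly
# initializes a W matrix and b vector for each Dense layer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Here is where you choose the optimization method and cost function.
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Calling `model.fit(X, y, epochs=...)` on your data would then perform the same initialize/forward-prop/back-prop/update loop you build by hand in DLS Course 1, just hidden behind the API.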