While implementing the classical CNN model, do we need to keep the neural network layers fixed (as per the model), or can we make modifications such as adding one more hidden layer?
I’m not sure exactly what you are asking. Are you referring to the “fully connected” layers that Prof Ng shows as the last few layers in the example CNNs in Week 1?
In general, everything about the architecture of the network is eligible to be modified: you can add Conv layers, change the filter sizes and strides, add or remove pooling layers, add or remove FC layers, or change the number of neurons in those layers. Everything is “up for grabs”. As you go through the course, Prof Ng shows lots of examples of different CNN architectures that have proven useful for different types of problems.

The general approach when you have a new problem that you think is amenable to a CNN is to start by finding a pre-existing problem with a worked solution that is relatively similar to yours. Then you use that architecture as a basis and make changes to adapt it to your particular problem.
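To make the “everything is up for grabs” point concrete, here is a small sketch (not from the course) that treats a LeNet-style architecture as plain data and computes its parameter count, before and after inserting an extra hidden FC layer. The layer sizes (3-channel input, two 5x5 conv layers, FC 400 → 120 → 10) are illustrative assumptions, loosely modeled on the Week 1 examples:

```python
def conv_params(f, n_prev, n_filters):
    """Parameters in a conv layer: (f*f*n_prev weights + 1 bias) per filter."""
    return (f * f * n_prev + 1) * n_filters

def fc_params(n_in, n_out):
    """Parameters in a fully connected layer: weight matrix plus biases."""
    return n_in * n_out + n_out

# Baseline (illustrative): two conv layers, then FC 400 -> 120 -> 10
baseline = (conv_params(5, 3, 8) + conv_params(5, 8, 16)
            + fc_params(400, 120) + fc_params(120, 10))

# Modified: insert one more hidden FC layer (120 -> 84) before the output
modified = (conv_params(5, 3, 8) + conv_params(5, 8, 16)
            + fc_params(400, 120) + fc_params(120, 84) + fc_params(84, 10))

print(baseline, modified)  # 53154 62958
```

Nothing forces you to keep the original layer list: adding the extra FC layer is just one more term in the sum, at the cost of roughly 10k additional parameters to train in this toy configuration.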