W3_A1 ReLU vs tanh accuracy

Hello Lukas @Lukas_Jusko,

I have done some experiments with this dataset and the same architecture (except using a different number of neurons and different activations for the hidden layer). I also tried different seeds. To save myself some work, because I am lazy :stuck_out_tongue: , I implemented my experiment in TensorFlow Keras instead of modifying the assignment code.

Hope this can be another starting point for you to explore neural networks further.

Settings:
learning rate = 0.04
weight initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.01)
number of iterations = 150,000
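
In case it helps, here is a rough sketch of this setup in Keras. It is only an outline of the idea, not necessarily line-for-line what I ran; `X` and `Y` stand for the assignment's planar dataset and labels, reshaped to `(m, 2)` and `(m, 1)`.

```python
import numpy as np
import tensorflow as tf

def build_and_train(n_hidden, activation, seed, X, Y):
    """Single hidden layer, full-batch gradient descent, settings as above."""
    tf.random.set_seed(seed)
    np.random.seed(seed)

    init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.01, seed=seed)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(n_hidden, activation=activation, kernel_initializer=init),
        tf.keras.layers.Dense(1, activation="sigmoid", kernel_initializer=init),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.04),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

    # batch_size = m turns each epoch into one full-batch update,
    # so "number of iterations" maps to the number of epochs here.
    model.fit(X, Y, epochs=150_000, batch_size=X.shape[0], verbose=0)
    return model.evaluate(X, Y, verbose=0)[1]  # accuracy on the same data

# Example: acc = build_and_train(n_hidden=4, activation="tanh", seed=0, X=X, Y=Y)
```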

| Activation | Number of neurons | seed=0 | seed=1 | seed=2 |
|---|---|---|---|---|
| ReLU | 16 | 0.84 | 0.8275 | 0.8175 |
| ReLU | 64 | 0.885 (figure 1) | 0.87 | 0.875 |
| Concatenated ReLU | 16 | 0.85 | 0.865 | 0.855 |
| Concatenated ReLU | 64 | 0.8975 (figure 2) | 0.8825 | 0.885 |
| tanh | 4 | 0.905 | 0.9075 | |

A note on Concatenated ReLU (CReLU): it applies ReLU to both the pre-activation and its negation and concatenates the two results, so it effectively doubles the number of neurons listed in the table.
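
If you want to try it, one possible way to write CReLU as a Keras layer is sketched below (a generic implementation, not necessarily the exact layer I used):

```python
import tensorflow as tf

class CReLU(tf.keras.layers.Layer):
    """Concatenated ReLU: concat(relu(x), relu(-x)) along the feature axis."""
    def call(self, x):
        return tf.concat([tf.nn.relu(x), tf.nn.relu(-x)], axis=-1)

# A Dense layer of width 16 followed by CReLU produces 32 features,
# which is why the widths in the table are effectively doubled.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16),   # linear pre-activation, width 16
    CReLU(),                     # output width becomes 32
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```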

Cheers,
Raymond

Figure 1

Figure 2
