I’m trying to build a neural network that recognizes the handwritten digits in the MNIST database, without using any deep learning library. Its structure is 784 input neurons, 10 hidden neurons (a single hidden layer), and 10 output neurons, with one bias per hidden neuron (10 biases total).

I think I know how to update the last layer’s weights, but not the first layer’s, since the last layer’s weights also influence the result. I don’t know how to update the biases either. If I made any mistake in the last-layer update, please let me know.
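For reference, here’s what I think the correct last-layer update should be for a single sample, based on tutorials I’ve read (just a sketch, and `update_output_weights` is a name I made up for this post; `hidden` is the list of hidden-layer activations):

```
def update_output_weights(w2, hidden, prediction, target, lr):
    # my understanding: the error of output neuron i is prediction[i] - target[i],
    # and each weight w2[i][j] moves against error * the hidden activation it multiplies
    for i in range(len(w2)):          # output neurons
        error = prediction[i] - target[i]
        for j in range(len(hidden)):  # hidden neurons
            w2[i][j] -= lr * error * hidden[j]
```

My actual code below only uses the error averaged over the whole batch and ignores the hidden activations, so that may already be a mistake.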

Here’s my full code:

```
import numpy as np

# relu, softmax, initNn, getData and shuffle are shown further down

#forward propagation
def forward(inp, w1, w2, biases):
    hidsRes = []
    outRes = []
    # hidden layer: weighted sum + bias + relu for each hidden neuron
    for i in range(len(w1)):
        n = np.dot(inp, w1[i])
        n += biases[i]
        n = relu(n)
        hidsRes.append(n)
    # output layer: weighted sum for each output neuron, then softmax
    for i in range(len(w2)):
        n = np.dot(hidsRes, w2[i])
        outRes.append(n)
    return softmax(outRes)

#backpropagation
def back(avgResult, w1, w2, lr):
    for i, w in enumerate(w2):
        w2[i] += lr * avgResult[i]  # I only update the last layer based on the average error of each neuron

def train(inps, hids, outs, randomWeightDiff, batchs, gens, lr):
    w1, w2, b = initNn(inps, hids, outs, randomWeightDiff)
    # loading the mnist dataset
    x_train, x_test, y_train, y_test = getData()
    for gen in range(gens):
        errors = []
        x_train, y_train = shuffle(x_train, y_train)
        for batch in range(batchs):
            prediction = forward(x_train[batch].tolist(), w1, w2, b)
            y = y_train[batch]
            target = [0 if i != y else 1 for i in range(10)]  # one-hot label
            errors.append([prediction[i] - target[i] for i in range(10)])
        print(errors)  # debug: per-sample errors for this generation
        # average error of each output neuron over the batch
        avg = [sum(errors[j][i] for j in range(len(errors))) / len(errors) for i in range(10)]
        back(avg, w1, w2, lr)
        print(f"Generation {gen}\n{avg}")

train(784, 10, 10, 2, 100, 1000, 0.01)
```
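
For completeness, the helper functions look roughly like this (relu and softmax are the standard definitions; initNn and getData are simplified sketches of what I use, assuming MNIST is downloaded through keras.datasets and shuffling comes from sklearn.utils):

```
import numpy as np
from keras.datasets import mnist      # assumption: MNIST downloaded through keras
from sklearn.utils import shuffle     # assumption: shuffling via scikit-learn

def relu(x):
    # standard rectified linear unit for a single value
    return max(0.0, x)

def softmax(xs):
    # numerically stable softmax over a list of scores
    e = np.exp(np.array(xs) - np.max(xs))
    return e / e.sum()

def initNn(inps, hids, outs, diff):
    # sketch: weights uniform in [-diff/2, diff/2], biases start at 0
    w1 = [np.random.uniform(-diff / 2, diff / 2, inps) for _ in range(hids)]
    w2 = [np.random.uniform(-diff / 2, diff / 2, hids) for _ in range(outs)]
    b = [0.0] * hids
    return w1, w2, b

def getData():
    # sketch: flatten the 28x28 images and scale pixels to [0, 1]
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(len(x_train), 784) / 255.0
    x_test = x_test.reshape(len(x_test), 784) / 255.0
    return x_train, x_test, y_train, y_test
```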

Before trying backprop, I simulated a lot of networks and mutated the best ones each generation, but it was too slow and never really learned.
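
The idea was roughly this (a reconstruction rather than my exact code; mutate, score and evolve are names made up for this post, and score reuses forward from above):

```
# evolutionary approach: keep the best network of a population and
# refill the population with noisy copies of it every generation
def mutate(net, scale=0.1):
    # hypothetical helper: add small uniform noise to every parameter
    w1, w2, b = net
    new_w1 = [w + np.random.uniform(-scale, scale, len(w)) for w in w1]
    new_w2 = [w + np.random.uniform(-scale, scale, len(w)) for w in w2]
    new_b = [x + np.random.uniform(-scale, scale) for x in b]
    return new_w1, new_w2, new_b

def score(net, x_batch, y_batch):
    # hypothetical helper: how many samples the network classifies correctly
    w1, w2, b = net
    preds = [np.argmax(forward(x.tolist(), w1, w2, b)) for x in x_batch]
    return sum(p == y for p, y in zip(preds, y_batch))

def evolve(population, x_batch, y_batch):
    # keep the highest-scoring network, replace the rest with mutants of it
    best = max(population, key=lambda net: score(net, x_batch, y_batch))
    return [best] + [mutate(best) for _ in range(len(population) - 1)]
```

Scoring every network in the population every generation is what made this so slow compared to training a single network.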

By the way, I haven’t learned advanced maths yet.