# Why Dense Function Uses For Loop instead of Vector Operations

In Week 1, Lab 3 (the Coffee Roasting example), the `my_dense` function uses a for loop to calculate the activations, as shown below:

```python
def my_dense(a_in, W, b):
    units = W.shape[1]
    a_out = np.zeros(units)
    for j in range(units):
        w = W[:, j]
        z = np.dot(w, a_in) + b[j]
        a_out[j] = g(z)
    return a_out
```

However, I don't understand why it does not use vector operations to calculate the activations, as below, which (if I am right) would be faster:

```python
# Define sigmoid activation function
def sigmoid(t):
    return 1 / (1 + np.exp(-t))

# Define the linear (pre-activation) function
def f(X, W, b):
    return X @ W + b

# Calculate first-layer activations
a1 = sigmoid(f(X, W, b))
```

Note: the code above gives the same results.
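As a sketch of why the two approaches agree, here is a self-contained comparison (with hypothetical example weights, and `sigmoid` standing in for the activation `g`) checking that the loop version and the matrix-product version produce the same activations:

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def my_dense_loop(a_in, W, b):
    # Visit each unit j; dot its weight column with the input vector.
    units = W.shape[1]
    a_out = np.zeros(units)
    for j in range(units):
        a_out[j] = sigmoid(np.dot(W[:, j], a_in) + b[j])
    return a_out

def my_dense_vec(a_in, W, b):
    # One matrix-vector product replaces the loop over units.
    return sigmoid(a_in @ W + b)

# Hypothetical data: 2 input features feeding 3 units.
a_in = np.array([0.5, -1.2])
W = np.array([[1.0, -0.5, 0.3],
              [0.2,  0.8, -1.0]])
b = np.array([0.1, -0.2, 0.05])

print(np.allclose(my_dense_loop(a_in, W, b), my_dense_vec(a_in, W, b)))  # True
```

Both compute the same dot products; the vectorized form simply lets NumPy perform them in one call.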

Hi @edizferit ,

Please check the implementation instructions for Exercise 2. You will find that the instructions explain that, for this exercise, a for loop is used, and vectorization is explored in a later section. Please see below:

### Exercise 2

Below, build a dense layer subroutine. The example in lecture utilized a for loop to visit each unit (`j`) in the layer and perform the dot product of the weights for that unit (`W[:,j]`) and sum the bias for the unit (`b[j]`) to form `z`. An activation function `g(z)` is then applied to that result. This section will not utilize some of the matrix operations described in the optional lectures. These will be explored in a later section.
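For reference, the matrix form explored later vectorizes over whole batches of examples as well as units. A minimal sketch (with hypothetical example data; `my_dense_matrix` is an illustrative name, not the lab's) of that fully matrix-based layer:

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def my_dense_matrix(A_in, W, b):
    # A_in: (m, n) batch of m examples; W: (n, units); b: (units,).
    # Broadcasting adds b to every row of the matrix product.
    return sigmoid(A_in @ W + b)

# Hypothetical batch: 2 examples with 2 features each, into 2 units.
X = np.array([[0.5, -1.2],
              [1.0,  0.3]])
W = np.array([[1.0, -0.5],
              [0.2,  0.8]])
b = np.array([0.1, -0.2])

A1 = my_dense_matrix(X, W, b)
print(A1.shape)  # (2, 2)
```

One call processes every unit of every example at once, which is where the speedup over the per-unit loop comes from.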


Thank you very much @Kic