Good day,

I am curious about two things from this exercise.

First, how is the decision boundary plotted in section 4 of the exercise, specifically in the plot produced by the plt_nn function? It seems we first use the argmax function to make discrete predictions (0, 1, 2, 3, 4, 5) and apply the predictor to every point in a meshgrid. I went into the code file and traced back to the plot_cat_decision_boundary function, and in it no levels are specified in the contour call. In that case, what is contour doing to produce the boundaries? The code is provided below.

Second, for kernel regularization in a neural network, say I impose strengths $\lambda_1$ and $\lambda_2$ on layers 1 and 2 via the l2 regularizer class. Is this saying the loss function is now

$$\text{original loss function} + \frac{\lambda_1}{2m}\sum_{i}\|\mathbf{w}_i^{[1]}\|^2 + \frac{\lambda_2}{2m}\sum_j\|\mathbf{w}_j^{[2]}\|^2,$$

where $\|\cdot\|$ denotes the $L_2$ norm of a vector?
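To pin down what I mean by the sums, here is a small numpy sketch (toy shapes, values, and $\lambda$'s of my own, not from the assignment) checking that summing the squared $L_2$ norms of the weight vectors is the same as summing all squared entries of the weight matrix, and then forming the penalty I wrote above:

```python
import numpy as np

# Toy weight matrices standing in for W^[1] and W^[2] (shapes are made up)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 6))

lambda_1, lambda_2, m = 0.01, 0.1, 100

# Sum of squared L2 norms of the column vectors of W1 ...
col_norms_sq = sum(np.linalg.norm(W1[:, j]) ** 2 for j in range(W1.shape[1]))
# ... equals the sum of all squared entries of W1
assert np.isclose(col_norms_sq, np.sum(W1 ** 2))

# The extra penalty term as written in my question
penalty = (lambda_1 / (2 * m)) * np.sum(W1 ** 2) \
        + (lambda_2 / (2 * m)) * np.sum(W2 ** 2)
print(penalty)
```

If I read the Keras docs correctly, regularizers.l2(l) adds l * sum(w**2) with no 1/(2m) factor, so the $\lambda_i$ above would correspond to passing $\lambda_i / (2m)$ as the l2 argument; please correct me if that mapping is wrong.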

Start of code:

```
import numpy as np

def plot_cat_decision_boundary(ax, X, predict, class_labels=None, legend=False,
                               vector=True, color='g', lw=1):
    # create a mesh of points to plot
    pad = 0.5
    x_min, x_max = X[:, 0].min() - pad, X[:, 0].max() + pad
    y_min, y_max = X[:, 1].min() - pad, X[:, 1].max() + pad
    h = max(x_max - x_min, y_max - y_min) / 200
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    points = np.c_[xx.ravel(), yy.ravel()]

    # make predictions for each point in the mesh
    if vector:
        Z = predict(points)
    else:
        Z = np.zeros((len(points),))
        for i in range(len(points)):
            Z[i] = predict(points[i].reshape(1, 2))
    Z = Z.reshape(xx.shape)

    # contour plot highlights boundaries between values - classes in this case
    ax.contour(xx, yy, Z, colors=color, linewidths=lw)
    ax.axis('tight')
```
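To make the first question concrete, here is a self-contained toy version of the argmax-over-meshgrid step that feeds contour (my own sketch, with a made-up 3-class linear scorer standing in for the network):

```python
import numpy as np

# Made-up 3-class linear scorer: 2 input features, 3 class scores
W = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

def predict(points):
    # argmax over class scores gives discrete labels 0, 1, 2
    return np.argmax(points @ W, axis=1)

# Same meshgrid construction as in plot_cat_decision_boundary,
# but with fixed toy bounds
x_min, x_max, y_min, y_max = -1.0, 1.0, -1.0, 1.0
h = 0.05
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
points = np.c_[xx.ravel(), yy.ravel()]
Z = predict(points).reshape(xx.shape)

# Z is a piecewise-constant integer surface; this is what gets handed
# to ax.contour(xx, yy, Z) with no explicit levels
print(np.unique(Z))  # → [0 1 2]
```

My understanding, which I would like confirmed, is that with no `levels` argument contour picks level values automatically between Z.min() and Z.max(), and on an integer-valued surface like this every level line can only fall where Z jumps between classes, which is why the boundaries appear.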