I’ve been reading the paper “Protect Your Deep Neural Networks from Piracy”, in which the network is trained with a custom loss function. In the paper, x_r is the raw input and x_p is the processed input, i.e., the raw data with a perturbation added, so that

x_p = G(x_r), and F(·) denotes the anti-piracy DNN.

The loss function E is defined as:

E = \alpha E_p + \beta E_r + \gamma \lVert x_p - x_r \rVert_2^2

where the loss for x_p is defined by

E_p = -\sum_{i=1}^{N} p_i \log q_{p,i}

where N is the number of classes, the vector (p_1, p_2, \dots, p_N) is the one-hot encoded ground truth, and the vector (q_{p,1}, q_{p,2}, \dots, q_{p,N}) is the softmax output of F(x_p), and

the loss for x_r is defined by

E_r = \sum_{i=1}^{N} p_i q_{r,i}

where N is the number of classes, the vector (p_1, p_2, \dots, p_N) is the one-hot encoded ground truth, and the vector (q_{r,1}, q_{r,2}, \dots, q_{r,N}) is the softmax output of F(x_r).

I’m having trouble implementing a custom loss function from these definitions.
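For what it’s worth, here is a minimal sketch of how E could be assembled, assuming PyTorch (the framework and the function/argument names `anti_piracy_loss`, `alpha`, `beta`, `gamma` are my own choices, not from the paper). E_p is the usual cross-entropy on the processed input, E_r reduces to the softmax probability that F assigns to the true class of the raw input (since p is one-hot), and the last term is the squared L2 distance between x_p and x_r:

```python
import torch
import torch.nn.functional as nnf  # aliased to avoid clashing with the paper's F(.)

def anti_piracy_loss(logits_p, logits_r, x_p, x_r, target,
                     alpha=1.0, beta=1.0, gamma=1.0):
    """Sketch of E = alpha*E_p + beta*E_r + gamma*||x_p - x_r||_2^2.

    logits_p, logits_r: raw (pre-softmax) outputs of F(x_p) and F(x_r),
    shape (batch, N). target: class indices, shape (batch,).
    The weights alpha/beta/gamma are placeholders; the paper's values may differ.
    """
    # E_p = -sum_i p_i log q_{p,i}: cross_entropy applies log-softmax internally
    e_p = nnf.cross_entropy(logits_p, target)

    # E_r = sum_i p_i q_{r,i}: with one-hot p, this is the softmax probability
    # of the true class, so we gather it per sample and average over the batch
    q_r = torch.softmax(logits_r, dim=1)
    e_r = q_r.gather(1, target.unsqueeze(1)).squeeze(1).mean()

    # squared L2 distance between processed and raw inputs, averaged per sample
    dist = (x_p - x_r).flatten(1).pow(2).sum(dim=1).mean()

    return alpha * e_p + beta * e_r + gamma * dist
```

Averaging over the batch is my assumption (the equations are written per sample); a Keras version would follow the same three-term structure inside a function passed to `model.compile`.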