Create Custom Loss Function

I’ve been reading the article “Protect Your Deep Neural Networks from Piracy” lately. In this study, a custom loss function is defined. Here x_r is the raw input and x_p is the processed input, that is, the raw data with a perturbation added, so that x_p = G(x_r). F(·) denotes the anti-piracy DNN.

The loss function E is defined:

E = \alpha E_p + \beta E_r + \gamma \lVert x_p - x_r \rVert_2^2

where the loss for x_p is defined by

E_p = -\sum_{i=1}^{N} p_i \log q_{p,i}

where N is the number of classes, the vector (p_1, p_2, \ldots, p_N) is the one-hot encoded ground truth, and the vector (q_{p,1}, q_{p,2}, \ldots, q_{p,N}) is the softmax output of F(x_p), and
the loss for x_r is defined by

E_r = \sum_{i=1}^{N} p_i q_{r,i}

where N is the number of classes, the vector (p_1, p_2, \ldots, p_N) is the one-hot encoded ground truth, and the vector (q_{r,1}, q_{r,2}, \ldots, q_{r,N}) is the softmax output of F(x_r).

I’m having trouble creating a custom loss function from these definitions.

Does this help?

Thanks for your interest. Actually, I am not sure how to get the softmax output of the DNN for both x_p and x_r.

Here’s softmax
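
In case it’s useful, here is a minimal sketch of getting those softmax outputs, assuming F is a Keras model whose final layer has a softmax activation and G is the perturbation generator, following the paper’s notation (both names are just placeholders):

```python
import tensorflow as tf

# Assumed setup: F is a Keras model ending in a softmax layer,
# G is the perturbation generator, x_r is a batch of raw inputs.
x_p = G(x_r)   # processed input: raw input with perturbation added
q_p = F(x_p)   # softmax output for the processed input
q_r = F(x_r)   # softmax output for the raw input

# If F outputs raw logits instead, apply the softmax explicitly:
# q_p = tf.nn.softmax(F(x_p))
# q_r = tf.nn.softmax(F(x_r))
```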

They already gave it to you, didn’t they? It’s that weighted sum of the two cross entropy losses plus the squared Euclidean distance between x_r and x_p. But I’m guessing you transcribed the E_r formula incorrectly: it makes no sense if the q_{r,i} vector is a softmax output. It should look analogous to the E_p formula you showed above.

And as Balaji says, if you’re using TF, just use the categorical cross entropy loss for the first two terms of the weighted sum: that is the implementation of the E_p formula that you show.
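
For example, a minimal sketch of the E_p term with Keras’s built-in loss (p is the one-hot ground truth and q_p = F(x_p), as above):

```python
import tensorflow as tf

# E_p is the standard categorical cross entropy between the one-hot
# ground truth p and the softmax output q_p = F(x_p).
cce = tf.keras.losses.CategoricalCrossentropy()
E_p = cce(p, q_p)

# Written out by hand, the same thing per sample:
# E_p = -tf.reduce_sum(p * tf.math.log(q_p), axis=-1)
```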

Update: actually maybe what is going on here is more subtle than my assumptions above (I have not tried to find and read the paper). There’s a critical missing minus sign on that E_p formula, right? The cross entropy loss terms are -p * log(q), because the log of a number between 0 and 1 is negative.

Right, I edited in the minus sign on E_p. I have checked E_r: it is the same as in the article, and it is not defined as a cross entropy.
In this paper, the x_p image is obtained from x_r, that is, by adding perturbations to the raw input.
The framework in the paper is as follows:

[framework diagram from the paper: the generator G produces x_p from x_r, which is then fed to the anti-piracy DNN F]
and therefore the loss function is defined as above. If the transformed input is fed into the DNN, the accuracy is high; when the raw data is fed into the DNN, the accuracy is low.

Interesting. Thanks for supplying the diagram. Ok, I think I can almost see what they are doing there. Of course for any given sample, only one of the elements of the ground truth p will be one and the rest zero. So what that E_r function will produce is just the softmax output of the F function for the real true answer.

Because the purpose of the F function, when used by itself on the unmodified input, is to “camouflage” the real answer, you want to penalize it if the q_r value is larger (closer to 1) and reward it for being closer to 0, or at least for being further away from 1, which is what that E_r function will do, although in a sort of “mild” linear way. If you wanted to make the loss punishment more extreme, you could consider using sort of the reverse of cross entropy by defining that version of the loss to be:

E_r = - \displaystyle \sum_{i = 1}^N p_i \log(1 - q_{r,i})

In other words, to drive the answer strongly towards zero. But maybe that would then become a sort of reverse camouflage: the pirate could take the worst answer and assume that’s the real answer, if they were clever enough to notice the pattern. So presumably the authors of the paper did some experimentation and found that their version of E_r works well, with suitably chosen values for the weighting factors \alpha, \beta and \gamma.
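
For what it’s worth, that alternative would also be a one-liner in TF (purely illustrative, not what the paper uses):

```python
import tensorflow as tf

# Hypothetical "reverse cross entropy" variant from above: it drives
# the softmax score of the true class strongly towards zero.
# Clipping keeps log(1 - q_r) finite when q_r is close to 1.
E_r_alt = -tf.reduce_sum(
    p * tf.math.log(tf.clip_by_value(1.0 - q_r, 1e-12, 1.0)),
    axis=-1)
```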

But then the point is that if you want to actually implement this, you’ll need to implement that E_r function manually, since I don’t think it is one of the standard loss functions “on offer” from TF/Keras. Fortunately it looks pretty simple: it’s basically one line of vectorized code, as in the sketch below.
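
To make that concrete, here is a minimal sketch of the whole loss E as a custom function, under the same assumptions as above (the function name and the epsilon are mine; \alpha, \beta and \gamma are the paper’s weighting factors, which you would have to tune):

```python
import tensorflow as tf

def anti_piracy_loss(p, x_r, x_p, q_p, q_r, alpha, beta, gamma):
    """Sketch of E = alpha*E_p + beta*E_r + gamma*||x_p - x_r||_2^2.

    p:   one-hot ground truth, shape (batch, N)
    q_p: softmax output F(x_p), shape (batch, N)
    q_r: softmax output F(x_r), shape (batch, N)
    """
    # E_p: categorical cross entropy on the processed input
    # (the small epsilon guards against log(0)).
    e_p = -tf.reduce_sum(p * tf.math.log(q_p + 1e-12), axis=-1)
    # E_r: the paper's linear term -- just the softmax score that
    # F(x_r) assigns to the true class (the one-liner).
    e_r = tf.reduce_sum(p * q_r, axis=-1)
    # Squared Euclidean distance between processed and raw inputs,
    # flattened so it works for any input shape.
    diff = tf.reshape(x_p - x_r, [tf.shape(x_r)[0], -1])
    dist = tf.reduce_sum(tf.square(diff), axis=-1)
    # Average the per-sample losses over the batch.
    return tf.reduce_mean(alpha * e_p + beta * e_r + gamma * dist)
```

You would call this inside your training step after computing q_p = F(x_p) and q_r = F(x_r), as in the earlier snippet.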

Thank you so much. I will try again based on your suggestions.