Having defined the loss, we now need to compute its gradient with respect to the output neurons of the CNN, in order to backpropagate it through the net and optimize the defined loss function by tuning the net parameters. The loss terms coming from the negative classes are zero. However, the loss gradient with respect to those negative classes is not cancelled, since the Softmax of the positive class also depends on the negative classes' scores.
The gradient expression will be the same for all \(C\) except for the ground truth class \(C_p\), because the score of \(C_p\) (\(s_p\)) is in the numerator.
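For reference, differentiating \(CE = -\log\left(\frac{e^{s_p}}{\sum_{j}^{C} e^{s_j}}\right)\) with respect to the scores gives the usual expressions:

\[
\frac{\partial CE}{\partial s_p} = \frac{e^{s_p}}{\sum_{j}^{C} e^{s_j}} - 1
\qquad
\frac{\partial CE}{\partial s_n} = \frac{e^{s_n}}{\sum_{j}^{C} e^{s_j}}
\]

where \(s_n\) is the score of any negative class.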
- Caffe: SoftmaxWithLoss Layer. Is limited to multi-class classification.
- Pytorch: CrossEntropyLoss. Is limited to multi-class classification (see the sketch after this list).
- TensorFlow: softmax_cross_entropy. Is limited to multi-class classification.
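As an illustration, here is a minimal PyTorch sketch of the multi-class case (shapes and values are arbitrary):

```python
import torch
import torch.nn as nn

# A batch of 4 samples with C = 3 classes (arbitrary example values)
logits = torch.randn(4, 3)             # raw CNN scores, no Softmax applied
targets = torch.tensor([0, 2, 1, 2])   # one ground-truth class per sample

# CrossEntropyLoss fuses LogSoftmax and the negative log-likelihood,
# so it expects raw scores, not probabilities
criterion = nn.CrossEntropyLoss()
loss = criterion(logits, targets)
```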
In this Facebook work they claim that, despite being counter-intuitive, Categorical Cross-Entropy loss, or Softmax loss, worked better than Binary Cross-Entropy loss in their multi-label classification problem.
> Skip this part if you are not interested in Facebook or me using Softmax Loss for multi-label classification, which is not standard.
When Softmax loss is used in a multi-label setup, the gradients get a bit more complex, since the loss contains an element for each positive class. Let \(M\) be the positive classes of a sample. The CE Loss with Softmax activations would be:

\[
CE = -\frac{1}{M} \sum_{p}^{M} \log\left(\frac{e^{s_p}}{\sum_{j}^{C} e^{s_j}}\right)
\]
Where each \(s_p\) in \(M\) is the CNN score for each positive class. As in the Facebook paper, I introduce a scaling factor \(1/M\) to make the loss invariant to the number of positive classes, which may differ per sample.
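Assuming binary labels, differentiating this multi-label loss with respect to each score \(s_k\) follows the same derivation as the single-label case:

\[
\frac{\partial CE}{\partial s_k} = \frac{e^{s_k}}{\sum_{j}^{C} e^{s_j}} - \frac{1}{M}\,[k \in M]
\]

so the gradient equals the Softmax probability for the negative classes, and that probability minus \(1/M\) for the positive classes.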
As neither the Caffe Softmax with Loss layer nor the Multinomial Logistic Loss Layer accepts multi-label targets, I implemented my own PyCaffe Softmax loss layer, following the specifications of the Facebook paper. Caffe Python layers let us easily customize the operations done in the forward and backward passes of the layer:
Forward pass: Loss computation
We first compute Softmax activations for each class and store them in probs. Then we compute the loss for each image in the batch, considering there might be more than one positive label. We use a scale_factor of \(1/M\) and we also multiply the losses by the labels, which can be binary or real numbers, so they can be used, for instance, to introduce class balancing. The batch loss will be the mean loss of the elements in the batch. We then save the data_loss to display it and the probs to use them in the backward pass.
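A minimal numpy sketch of this forward computation (the function names are mine; in the actual layer this logic sits inside the forward method of a caffe.Layer subclass):

```python
import numpy as np

def softmax(scores):
    # Numerically stabilized Softmax over the class axis; scores is (batch, C)
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def multilabel_softmax_loss_forward(scores, labels):
    # labels is (batch, C), binary or real-valued, non-zero for positive classes
    probs = softmax(scores)
    # 1/M per sample: makes the loss invariant to the number of positive classes
    scale_factor = 1.0 / np.count_nonzero(labels, axis=1)
    # Only positive classes contribute; label values act as per-class weights
    sample_losses = -(labels * np.log(probs)).sum(axis=1) * scale_factor
    data_loss = sample_losses.mean()   # mean loss over the batch
    return data_loss, probs            # probs is kept for the backward pass
```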
Backward pass: Gradients computation
In the backward pass we need to compute the gradients of each element of the batch with respect to each one of the class scores \(s\). As the gradient for all the classes \(C\) except the positive classes \(M\) is equal to probs, we assign the probs values to delta. For the positive classes in \(M\) we subtract 1 from the corresponding probs value and use scale_factor to match the gradient expression. We compute the mean gradients over the batch to run the backpropagation.
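Continuing the sketch above, and assuming binary labels so that subtracting the scaled label matches the analytic gradient of the \(1/M\)-scaled loss:

```python
def multilabel_softmax_loss_backward(probs, labels):
    # The gradient starts as probs, already correct for the negative classes
    delta = probs.copy()
    # For the positive classes, subtract the label scaled by 1/M
    scale_factor = 1.0 / np.count_nonzero(labels, axis=1)
    delta -= labels * scale_factor[:, None]
    # Mean over the batch, consistent with the mean loss in the forward pass
    return delta / probs.shape[0]
```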
Binary Cross-Entropy Loss
Also called Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every CNN output vector component is not affected by other component values. That's why it is used for multi-label classification, where the insight of an element belonging to a certain class should not influence the decision for another class. It's called Binary Cross-Entropy Loss because it sets up a binary classification problem between \(C' = 2\) classes for every class in \(C\), as explained above. So when using this Loss, the formulation of Cross-Entropy Loss for binary problems is often used:

\[
CE = -\sum_{i=1}^{C'=2} t_i \log(f(s_i)) = -t_1 \log(f(s_1)) - (1 - t_1)\log(1 - f(s_1))
\]
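To make the per-class independence concrete, here is a small numpy sketch of this binary formulation applied element-wise (the function names are mine):

```python
import numpy as np

def sigmoid(scores):
    return 1.0 / (1.0 + np.exp(-scores))

def binary_cross_entropy(scores, targets):
    # Element-wise: each class score is penalized independently of the others
    f = sigmoid(scores)
    return -(targets * np.log(f) + (1.0 - targets) * np.log(1.0 - f))
```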