Loss Functions for Top-k Error: Analysis and Insights

@inproceedings{lapin2016cvpr,
  title = {Loss Functions for Top-k Error: Analysis and Insights},
  author = {Maksim Lapin and Matthias Hein and Bernt Schiele},
  booktitle = {CVPR},
  year = {2016}
}

Empirical and Theoretical Evaluation of 10 Loss Functions on 11+ Datasets

  • We study top-k error optimization on a diverse range of learning tasks.
  • We consider 6 existing methods and propose 4 novel loss functions for minimizing the top-k error.
  • We develop an optimization scheme based on SDCA, which can be used with the softmax loss.
  • All methods are evaluated empirically and, where possible, analyzed in terms of classification calibration.
  • We find that the softmax loss and the proposed smooth top-1 SVM are consistently competitive across all top-k errors.
  • Further small improvements can be obtained with the new top-k losses.
  • This work is a follow-up to the Top-k Multiclass SVM.
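The top-k error used throughout the paper counts a prediction as correct if the true class is among the k highest-scoring classes. A minimal sketch of this metric (the function name and the NumPy-based implementation are illustrative, not taken from the paper's code):

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """Fraction of samples whose true class is NOT among the k top-scoring classes.

    scores: (n_samples, n_classes) array of classifier scores
    labels: (n_samples,) array of ground-truth class indices
    """
    # argpartition gives the indices of the k largest scores per row
    # (in arbitrary order, which is all we need for set membership)
    top_k = np.argpartition(-scores, kth=k - 1, axis=1)[:, :k]
    hits = (top_k == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# Toy example with 3 samples and 4 classes (no score ties):
scores = np.array([[0.10, 0.50, 0.25, 0.15],
                   [0.70, 0.15, 0.10, 0.05],
                   [0.20, 0.30, 0.40, 0.10]])
labels = np.array([2, 0, 1])
print(top_k_error(scores, labels, k=1))  # 2 of 3 samples missed at k=1
print(top_k_error(scores, labels, k=2))  # all recovered at k=2
```

Note that top-1 error reduces to the standard multiclass error, so losses that are good surrogates for top-1 accuracy need not be optimal once k > 1 — the motivation for the dedicated top-k losses above.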