Below, we provide a short bibliography on the topics of the workshop, followed by an extended bibliography.
Short Bibliography
Learning with Rejection: [4, 11]
Active Learning: [26, 8]
Online Learning with Feedback Graphs: [21, 1]
Extended Bibliography
[1] N. Alon, N. Cesa-Bianchi, O. Dekel, and T. Koren. Online learning with feedback graphs: Beyond bandits. In COLT, 2015.
[2] N. Alon, N. Cesa-Bianchi, C. Gentile, S. Mannor, Y. Mansour, and O. Shamir. Nonstochastic multi-armed bandits with graph-structured feedback. CoRR, 2014.
[3] N. Alon, N. Cesa-Bianchi, C. Gentile, and Y. Mansour. From bandits to experts: A tale of domination and independence. In NIPS, 2013.
[4] P. Bartlett and M. Wegkamp. Classification with a reject option using a hinge loss. JMLR, 2008.
[5] A. Bounsiar, E. Grall, and P. Beauseroy. Kernel based rejection method for supervised classification. In WASET, 2007.
[6] H. L. Capitaine and C. Frelicot. An optimum class-rejective decision rule and its evaluation. In ICPR, 2010.
[7] S. Caron, B. Kveton, M. Lelarge, and S. Bhagat. Leveraging side observations in stochastic bandits. In UAI, 2012.
[8] K. Chaudhuri and C. Zhang. Beyond disagreement-based agnostic active learning. In NIPS, 2014.
[9] C. Chow. An optimum character recognition system using decision functions. IEEE T. C., 1957.
[10] C. Chow. On optimum recognition error and reject trade-off. IEEE T. C., 1970.
[11] C. Cortes, G. DeSalvo, M. Mohri, and U. Syed. Learning with rejection. In ALT, 2016.
[12] B. Dubuisson and M. Masson. Statistical decision rule with incomplete knowledge about classes. Pattern Recognition, 1993.
[13] R. El-Yaniv and Y. Wiener. Active learning via perfect selective classification. JMLR, 13:255–279, 2012.
[14] Y. Freund, Y. Mansour, and R. Schapire. Generalization bounds for averaged classifiers. Ann. Stat., 2004.
[15] G. Fumera and F. Roli. Support vector machines with embedded reject option. In ICPR, 2002.
[16] G. Fumera, F. Roli, and G. Giacinto. Multiple reject thresholds for improving classification reliability. In ICAPR, 2000.
[17] Y. Grandvalet, J. Keshet, A. Rakotomamonjy, and S. Canu. Support vector machines with a reject option. In NIPS, 2008.
[18] R. Herbei and M. Wegkamp. Classification with reject option. Can. J. Stat., 2005.
[19] T. Landgrebe, D. Tax, P. Paclik, and R. Duin. Interaction between classification and reject performance for distance-based reject-option classifiers. PRL, 2005.
[20] M. Littman, L. Li, and T. Walsh. Knows what it knows: A framework for self-aware learning. In ICML, 2008.
[21] S. Mannor and O. Shamir. From bandits to experts: On the value of side-observations. In NIPS, 2011.
[22] I. Melvin, J. Weston, C. S. Leslie, and W. S. Noble. Combining classifiers for improved classification of proteins from sequence or structure. BMCB, 2008.
[23] C. S. Pereira and A. Pires. On optimal reject rules and ROC curves. PRL, 2005.
[24] T. Pietraszek. Optimizing abstaining classifiers using ROC. In ICML, 2005.
[25] A. Sayedi, M. Zadimoghaddam, and A. Blum. Trading off mistakes and don't-know predictions. In NIPS, 2010.
[26] B. Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2009.
[27] D. Tax and R. Duin. Growing a multi-class classifier with a reject option. Pattern Recognition Letters, 2008.
[28] F. Tortorella. An optimal reject rule for binary classifiers. In ICAPR, 2001.
[29] M. Yuan and M. Wegkamp. Classification methods with reject option based on convex risk minimization. JMLR, 2010.
[30] M. Yuan and M. Wegkamp. SVMs with a reject option. Bernoulli, 2011.
[31] C. Zhang and K. Chaudhuri. The extended Littlestone’s dimension for learning with mistakes and abstentions. In COLT, 2016.