[Main track] Paper #4047 - The Importance of the Test Set Size in Quantification Assessment
International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence
Main contributions (detailed results):
We empirically demonstrate the importance of the test set size when assessing quantifiers
We show that current quantification methods generally perform poorly on the smallest test sets
We propose a meta-learning scheme that selects the best quantifier based on the test set size and can outperform the best single quantification method
We also built an R package that includes all quantification methods used in our experiments, as well as auxiliary functions. Moreover, all our source code and datasets are available at the GitLab repository link
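To make the meta-learning idea concrete, the sketch below illustrates one plausible realization: measure each quantifier's error on validation samples drawn at several sizes, then, at prediction time, pick the quantifier with the lowest recorded error for the size bucket closest to the incoming test set. The quantifiers shown (Classify & Count and Probabilistic Average), the size buckets, and the selection rule are illustrative assumptions, not the paper's actual configuration (which is implemented in the authors' R package).

```python
def classify_and_count(scores, threshold=0.5):
    """Classify & Count (CC): prevalence = fraction of scores above threshold."""
    return sum(s > threshold for s in scores) / len(scores)

def prob_average(scores):
    """Probabilistic average (PA-style): prevalence = mean posterior score."""
    return sum(scores) / len(scores)

# Illustrative pool of quantifiers the meta-learner chooses among.
QUANTIFIERS = {"CC": classify_and_count, "PA": prob_average}

def select_quantifier(test_size, validation_errors):
    """Meta-selection sketch: validation_errors maps a size bucket to the
    mean absolute error each quantifier achieved on validation samples of
    that size, e.g. {10: {"CC": 0.20, "PA": 0.08}, 1000: {"CC": 0.03, ...}}.
    Returns the name and function of the quantifier with the lowest error
    in the bucket closest to the given test-set size."""
    bucket = min(validation_errors, key=lambda b: abs(b - test_size))
    errors = validation_errors[bucket]
    best = min(errors, key=errors.get)
    return best, QUANTIFIERS[best]
```

For example, if validation showed PA to be more reliable on tiny samples and CC on large ones, `select_quantifier(25, ...)` would fall into the small-sample bucket and return PA, while a test set of several hundred instances would be routed to CC.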