This page reproduces evaluation results from the literature for the basic components that the system is based on (IXA Pipes as an NLP pipeline, YaTeA as a keyphrase extractor and ELCO3 for entity linking). Links to the articles (and to reference sets, where applicable) are also provided below.
We used the English modules.
The toolkit was presented in Agerri et al. (2014), who provide the evaluations reproduced below.
These are modules that, while not part of the core toolkit, are compatible with it, since they use the same annotation format.
Coreference
CorefGraph performs coreference resolution. It is a Python implementation of the Stanford Multi-Pass Sieve system, focusing on the subset of sieves that accounts for most of the original multi-pass sieve system's performance.
CorefGraph was evaluated on the dev-auto part of OntoNotes 4.0. The results reported in Agerri et al. (2014) were the following:
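As a rough illustration of the multi-pass sieve idea (not CorefGraph's actual implementation or API), the passes can be sketched as precision-ordered rules that progressively merge mention clusters. The sieves and mentions below are invented for the example:

```python
# Illustrative sketch of a multi-pass sieve strategy: sieves run from
# highest to lowest precision, and each pass may merge clusters built
# by earlier passes. The sieves here are toy examples, not CorefGraph's.

def exact_match_sieve(a, b):
    """High-precision sieve: identical surface strings corefer."""
    return a.lower() == b.lower()

def head_match_sieve(a, b):
    """Lower-precision sieve: mentions sharing a head word corefer."""
    return a.lower().split()[-1] == b.lower().split()[-1]

def resolve(mentions, sieves):
    # Start with singleton clusters; each cluster is a list of mentions.
    clusters = [[m] for m in mentions]
    for sieve in sieves:  # precision-ordered passes
        merged = []
        for cluster in clusters:
            for target in merged:
                # Merge if any cross-cluster mention pair passes the sieve.
                if any(sieve(a, b) for a in cluster for b in target):
                    target.extend(cluster)
                    break
            else:
                merged.append(cluster)
        clusters = merged
    return clusters

clusters = resolve(
    ["Barack Obama", "Obama", "the president"],
    [exact_match_sieve, head_match_sieve],
)
# "Barack Obama" and "Obama" are merged by the head-match pass;
# "the president" stays in its own cluster.
```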
Syntactic and semantic dependencies
ixa-pipe-srl provides a wrapper around mate-tools, a toolkit combining several dependency parsers and semantic role labelers. In addition, ixa-pipe-srl enriches SRL annotations with information from the IXA PredicateMatrix (e.g. links to FrameNet and WordNet, beyond the PropBank/NomBank information provided by mate-tools).
We are not aware of a direct evaluation of ixa-pipe-srl. As an indication of its performance, and since ixa-pipe-srl builds on mate-tools, we reproduce here evaluation results for the syntactic dependency and semantic role labeling components of mate-tools.
Björkelund et al. (2010) provide a different evaluation, in which the predicates are identified by the system (rather than being given in the reference set).
The YaTeA tool and its performance are described in Aubin and Hamon (2006).
Please see that paper for details of how the tool was evaluated. Since the tool was not subsequently evaluated in a standard shared task (e.g. SemEval 2010 Task 5), we cannot report standard results for it.
The tool combines the outputs of several public entity linking services (Spotlight, Wikipedia Miner, etc.) with a view to obtaining combined results (via a weighted vote) that outperform the individual systems. It was evaluated in Ruiz and Poibeau (2015) on different corpora (AIDA/CoNLL B, IITB, etc.). A demo applying the tool is available here, and is described in Ruiz et al. (2015).
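The weighted-vote combination can be sketched as follows. The system names echo those mentioned above, but the weights, mention and KB links are invented for the example, and this is not the tool's actual API:

```python
from collections import defaultdict

def weighted_vote(candidate_links, weights):
    """Combine per-system KB link proposals for one mention by weighted vote.

    candidate_links: {system_name: kb_link or None (NIL)}
    weights: {system_name: float}, e.g. derived from each system's
    precision on held-out data. Returns the top-scoring KB link, or None.
    """
    scores = defaultdict(float)
    for system, link in candidate_links.items():
        if link is not None:
            scores[link] += weights.get(system, 0.0)
    if not scores:
        return None  # every system returned NIL
    return max(scores, key=scores.get)

# Two systems agree on one link; a third proposes a different one.
winner = weighted_vote(
    {"spotlight": "dbpedia:Paris",
     "wikipedia_miner": "dbpedia:Paris",
     "tagme": "dbpedia:Paris,_Texas"},
    {"spotlight": 0.6, "wikipedia_miner": 0.5, "tagme": 0.7},
)
# dbpedia:Paris wins with 0.6 + 0.5 = 1.1 against 0.7
```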
The Strong Annotation Match task requires perfect mention segmentation besides linking to the correct KB entity.
The Entity Match task ignores mention segmentation: only the KB link has to be correct.
"Strong Annotation Match" is Cornolti et al. 2013 terminology, in the TAC-KBP task this was called "strong_link_match"