Since October 2014, I have been a senior lecturer in Human Language Technology at the Institute of Linguistics and Language Technology of the University of Malta. In addition to my main affiliation in Malta, I am affiliated with the Institute for Natural Language Processing (IMS) at the University of Stuttgart, where I was a junior professor from 2012 to 2014. There I lead a research group (see below) within the SFB 732, a collaborative research centre that brings computational linguists and theoretical linguists together. Before that, I was a post-doc (maître-assistante) at the University of Geneva, working on cross-lingual transfer of semantic role labelling as part of the CLASSiC project. I earned my PhD from the University of Groningen (Department of Humanities Computing), where I worked on automatic lexical acquisition from corpora within the Alfa-Informatica group. I was a visiting academic at the Division of Information and Communication Sciences of Macquarie University, Sydney, from January to March 2007, and worked at ISSCO/TIM-ETI (University of Geneva) from 2002 until 2003. I spent one year in industry at Systran Translation Systems in 2001-2002. Before that, I completed the M.Phil in Computer Speech and Language Processing at the University of Cambridge, a programme since renamed Computer Speech, Text and Internet Technology.

I have worked on the following subjects: cross-lingual natural language processing (NLP), natural language understanding, textual digital humanities, automatic lexical acquisition, text mining, text processing, (medical) terminology extraction, computational lexicology, question answering, semantic role labelling, probabilistic modelling, and cross-lingual annotation transfer.


SFB 732 project D11: A crosslingual approach to the analysis of compound nouns (DFG 2014-2018)
IMS Stuttgart, Germany

This project aims to bridge the gap between computational linguistics and theoretical linguistics by using linguistically informed models and by explicitly testing hypotheses from the linguistics literature. It proposes a compositional approach to noun-noun (N-N) compound analysis with an interdependent three-level model comprising compound splitting, capturing the meaning of the components, and identifying the covert relation that holds between them. Ambiguity is found on all levels, and is highest at the level where the implicit relation is uncovered. The two possible split points in the German compound Kühlerwartung, Kühl-erwartung ('cool expectation') vs. Kühler-wartung ('radiator maintenance'), illustrate the ambiguity that arises at the level of compound splitting. The open-ended inventory of covert relations that can hold between the constituents of a compound becomes apparent in the following examples: a chocolate cake is a cake made of chocolate, a wedding cake is a cake made for a wedding, and a cupcake is a cake made using a metal cup. Cross-lingual approaches are promising for semantic analysis because of the regular variation found across languages. For example, whereas English leaves the compound relation covert, French uses prepositions that correlate with the relation type: chocolate cake (a cake made of chocolate) is translated as gâteau au chocolat, whereas wedding cake (a cake made for a wedding) is gâteau de mariage. We use multilingual data throughout the project, in both analysis and evaluation, and work towards a wide-coverage, integrated approach using automatic, knowledge-lean, corpus-based methods.
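The splitting ambiguity described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration with a toy lexicon, not the project's actual splitter: real systems also handle linking elements (Fugenelemente), umlaut changes, and rank candidates with corpus statistics.

```python
# Toy sketch of German compound splitting: enumerate two-part splits whose
# halves both occur in a (tiny, invented) lexicon.
LEXICON = {"kühl", "kühler", "erwartung", "wartung"}

def candidate_splits(compound):
    """Return all two-part splits of `compound` licensed by the lexicon."""
    word = compound.lower()
    splits = []
    for i in range(1, len(word)):
        left, right = word[:i], word[i:]
        if left in LEXICON and right in LEXICON:
            splits.append((left, right))
    return splits

# Both readings of Kühlerwartung surface as candidates:
print(candidate_splits("Kühlerwartung"))
# [('kühl', 'erwartung'), ('kühler', 'wartung')]
```

Choosing between such candidates, and then identifying the covert relation, is where the project's models come in.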

PhD students:
Stefan Müller
Patrick Ziering

Collaborations with Gianina Iordachioaia (Institute for English Linguistics)

CLASSiC project: Cross-lingual semantic annotation transfer from English to French (EC FP7, 2008-2011)

In the CLASSiC project (Computational Learning in Adaptive Systems for Spoken Conversation) we focus on semantic role labelling for French, and in particular on methods to automatically generate semantic annotations for French. Syntactic annotation is available for French, but semantic annotation is not. Since semantic annotation is available for English and parallel corpora exist for the language pair English-French, we transfer the semantic annotation from English to its French translations using word alignments. In contrast to previous work (Padó and Pitel, TALN 2007; Padó and Lapata, Comp. Ling. 2009; Basili et al., CICLing 2009), we did not use an ontology constructed for the target language: we want to minimise the amount of manual labour and aim for broad-coverage annotations. We used the PropBank annotation framework, constructed for English, to annotate French sentences, after having tested the cross-lingual validity of PropBank (Van der Plas et al., LAW 2010). Because syntax and semantics are highly correlated (see also Merlo and Van der Plas, ACL 2009), we leveraged the information contained in the syntactic annotations in a second step: we trained a syntactic-semantic parser on the combination of the syntactic annotations and the semantic annotations resulting from transfer. The automatically generated semantic annotations for French come close to the upper bound set by manual annotations (Van der Plas et al., ACL 2011).
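The core transfer step can be sketched as direct projection of role labels through word alignments. This is a simplified, hypothetical illustration with invented tokens and a one-to-one alignment; the actual CLASSiC pipeline deals with non-one-to-one alignments, alignment noise, and the subsequent parser-training step.

```python
# Toy sketch: project PropBank-style role labels from an English sentence
# to its French translation via word alignments.
def project_labels(src_labels, alignment, tgt_len):
    """Copy each labelled source token's role to its aligned target tokens.

    src_labels: per-source-token labels (None = no label)
    alignment:  set of (src_idx, tgt_idx) word-alignment pairs
    tgt_len:    number of target tokens
    """
    tgt_labels = [None] * tgt_len
    for s, t in alignment:
        if src_labels[s] is not None:
            tgt_labels[t] = src_labels[s]
    return tgt_labels

# "John eats an apple" -> "John mange une pomme" (invented example)
src = ["A0", "PRED", None, "A1"]
align = {(0, 0), (1, 1), (2, 2), (3, 3)}
print(project_labels(src, align, 4))  # ['A0', 'PRED', None, 'A1']
```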

Watch a video of the current CLASSiC system.

PhD project: Automatic lexico-semantic acquisition for question answering (NWO IMIX: 2003-2008)

(Promotor: John Nerbonne, co-promotor: Gosse Bouma)

Freedom and liberty share the same meaning. Paris denotes a city, and the word party triggers associations of wine and fun for many. People naturally acquire such lexico-semantic relations, such as synonyms, categorised named entities, and associations, by using language in their daily life.

For many natural language processing applications, such as question answering, this type of information is essential, e.g. to recognise that a particular meaning can be inferred from different text variants or to compensate for the lack of general world knowledge.

This thesis proposes three methods for using large text corpora to acquire lexico-semantic information automatically: a syntax-based method, a multilingual word-alignment-based method, and a proximity-based method. The three methods complement each other in the type of data they need, in the way they deal with sparse data, and, most importantly, in the types of lexico-semantic information they provide. This information is then applied in the Groningen question answering system Joost. Among the different types of lexico-semantic information acquired, categorised named entities (e.g. Paris denotes a city) improved the system the most; this information was obtained with the syntax-based method.

Try our demos of semantically related words (in Dutch). The complete text of my thesis can be found here.