1     Introduction

Over the past 10 years, I have focused my investigations on different but complementary aspects of Computer and Information Science. At the beginning of my Ph.D. studies, after completing my Master's thesis on logical aspects of Artificial Intelligence, I began a wonderful adventure in the study of human language. Computational Linguistics is often described either as a subfield of Computer Science or as a subfield of Cognitive Science. It includes both the design of computer algorithms to process natural language data (Natural Language Processing) and the use of computational models to describe and understand how people communicate through language (Natural Language Understanding).

With the advent of Web 3.0, it is becoming apparent that Natural Language Understanding (NLU) is useful in real computer applications such as Question Answering, Summarization, and Dialogue Systems. "Making sense" of natural language data is understood here as the extraction of linguistic knowledge that is useful from the application's point of view. In this way we can benefit from the theories developed by language scholars over many years and exploit this knowledge in the smart design of NLU system components.

Human-Computer Interaction with natural language is moving from science fiction to real life (e.g. the Semantic Web, Web 3.0, multimodal interfaces). The biggest computer technology companies, such as Microsoft and Google, are investing massively in this field since they realize that current computer technology can adequately support the efficient processing of natural language data. Memory can store and provide fast access to linguistic data (lexica and grammars), and processors can efficiently execute high-complexity analysis algorithms. Moreover, the reduced size of computer components allows language technology to be embedded in everyday consumer appliances, making the use of natural language interfaces more ubiquitous.

Computer systems can also be used to observe natural language interaction between humans. During the past 6 years I have been involved in two important research projects (IM2 and CALO) where human language technology has been used to build knowledge bases by recording, tracking and understanding human-human interaction within general and specific scenarios such as meetings, conferences or classrooms.

2     Research Plan

In recent years, I have been involved in several projects where language is used as an input/output modality in the interaction with computer systems. My own work has been mostly concerned with higher-level features of language, namely semantics, pragmatics and discourse. Discourse analysis has long been studied in literary and sociolinguistic research and has recently emerged as a critical sub-field within Computational Linguistics.

One of the motivations for considering algorithmic discourse analysis comes from information overload. As personal computers with low-cost storage and Internet access proliferated, the production and availability of huge numbers of documents became an increasingly important problem. Information Retrieval provides only a partial solution to information overload because it offers no method to assess the relevance of retrieved information with respect to a user's needs beyond the query's keywords. Moreover, retrieved information is essentially unprocessed and the user must still assess its relevance.

One way to tackle this problem is to condense and highlight retrieved documents so that a user can rapidly grasp their semantic content. Discourse analysis plays a crucial role in information condensation since documents are commonly structured so that the salience of their component parts can be easily determined. Discourse analysis also plays an important role in document production, since it provides a tool to structure information for effective communication, such as the automatic production of summaries [Inderjeet 2001].

2.1 Previous Work

A special kind of discourse model I am focusing on is argumentation. My interest in argumentation started when I joined the IM2 project, whose targeted application is recording and understanding meetings. I have been focusing on understanding several group interaction patterns in meetings that typically arise in cases of conflict resolution, negotiation and decision-making. By looking at work on computational dialectics, I realized that the argumentative models used to structure computer-mediated discussions could also be used in the analysis of face-to-face discussions and debates.

I thus adopted various argumentative models (among them the well-known IBIS model [Kunz and Rittel 1970]) as the underlying semantic models for representing meeting discussion events and their causal, rhetorical and structural relations. With these representations, stored in a database, the user can query the content and the details of a discussion, and also aggregate data to obtain information over multiple events, for instance retrieving the topic on which somebody argued the most.
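The kind of aggregate query mentioned above can be illustrated with a toy sketch. The event fields and move labels below are hypothetical simplifications for illustration, not the actual IBIS-based schema used in IM2:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical, simplified record of one discussion event in an
# IBIS-style model (issues, positions, arguments for/against).
@dataclass
class DiscussionEvent:
    speaker: str
    move: str      # e.g. "raise_issue", "propose", "argue_for", "argue_against"
    topic: str     # the issue under discussion

def most_argued_topic(events, speaker):
    """Return the topic on which `speaker` made the most argumentative moves."""
    counts = Counter(e.topic for e in events
                     if e.speaker == speaker and e.move.startswith("argue"))
    return counts.most_common(1)[0][0] if counts else None

events = [
    DiscussionEvent("Alice", "raise_issue", "budget"),
    DiscussionEvent("Bob", "argue_against", "budget"),
    DiscussionEvent("Bob", "argue_for", "schedule"),
    DiscussionEvent("Bob", "argue_against", "schedule"),
]
print(most_argued_topic(events, "Bob"))  # -> schedule
```

In a real system the events would live in a database and the aggregation would be an SQL query, but the principle is the same: argumentative structure, once made explicit, becomes queryable.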

I also developed a "meeting discussions" ontology based on Frame Semantics [Fillmore 2003] for creating meta-data for content-based indexing of meeting records [Pallotta 2006]. This type of meta-information is useful for accessing relevant information about discussions, not only by topic, but also by argumentative structure [Pallotta 2003, Pallotta et al. 2004, Pallotta et al. 2005]. The final goal is to design and build a system for querying and summarizing collections of transcribed meetings. In [Pallotta et al. 2007], we elicited some user requirements for such an application. Among other things, we discovered that a large portion of questions about meeting records concern their argumentative structure.

Automatic summarization of meetings has so far been approached in a so-called "extractive" fashion, that is, by extracting excerpts of the dialogs and assembling them into a possibly coherent text [Murray et al. 2005]. This method has severe limitations due to the intrinsic characteristics of the source data: dialogs are not as coherent as narrative text (such as news or scientific articles), and obtaining coherent summaries from dialog turns is practically impossible with the extractive approach. Moreover, the proposed solutions for extractive summarization of meetings have already reached their qualitative upper bounds [Murray et al. 2008].
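To make the limitation concrete, here is a deliberately naive sketch of the extractive idea: score each dialog turn by the corpus frequency of its words and keep the top-scoring turns in their original order. This is a toy illustration of the general approach, not the method of Murray et al.:

```python
from collections import Counter

def extractive_summary(turns, k=2):
    """Naive extractive summarizer: score each turn by the average corpus
    frequency of its words, then return the top-k turns in original order."""
    freq = Counter(w.lower() for t in turns for w in t.split())
    scored = [(sum(freq[w.lower()] for w in t.split()) / len(t.split()), i)
              for i, t in enumerate(turns)]
    top = sorted(sorted(scored, reverse=True)[:k], key=lambda pair: pair[1])
    return [turns[i] for _, i in top]

turns = [
    "we need to fix the budget for the demo",
    "ok",
    "the budget fix should cover the demo and the travel",
    "fine by me",
]
print(extractive_summary(turns, k=2))
```

Even on this tiny example, the output is just a concatenation of verbatim turns: the back-and-forth context that gives the utterances meaning is lost, which is precisely why extracted dialog turns rarely form a coherent summary.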

2.2 The Challenge

The problem that I will address and try to solve is the abstractive summarization of meetings. An abstractive summary describes the essential points of discussions, such as issues, proposals, decisions, and action items. A special case of abstractive summary for meetings is minutes.

Solutions to this problem, which is apparently harder than the extractive one, require a better understanding of the source data [Hahn et al. 2000], which is only possible through comprehensive language analysis. This calls for a powerful syntactic analyzer (i.e. a parser) and a procedure for mapping syntactic structures onto semantic and discourse representations. It is then necessary to "generate" the summary by synthesizing text from the obtained representations.
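The three stages just described can be sketched as a skeleton pipeline. All function names and the keyword-matching rule below are illustrative stand-ins; a real system relies on a full syntactic parser and proper semantic mapping rather than these toy rules:

```python
# Skeleton of the abstractive pipeline: parse each turn, map the parse
# onto a semantic/discourse representation, then generate summary text
# from the representations rather than from extracted excerpts.

def parse(turn):
    # stand-in for a real syntactic analyzer
    speaker, _, text = turn.partition(": ")
    return {"speaker": speaker, "text": text}

def to_semantics(parsed):
    # map the (mock) parse onto a discourse event; a real mapping would
    # inspect syntactic structure, not surface keywords
    text = parsed["text"].lower()
    if "decide" in text or "decision" in text:
        kind = "decision"
    elif "propose" in text or "suggest" in text:
        kind = "proposal"
    else:
        kind = "statement"
    return {"kind": kind, "agent": parsed["speaker"], "content": parsed["text"]}

def generate(events):
    # synthesize new text from the representations, keeping only salient events
    lines = [f"{e['agent']} made a {e['kind']}: {e['content']}"
             for e in events if e["kind"] != "statement"]
    return " ".join(lines)

turns = ["Alice: I propose we postpone the release.",
         "Bob: Sounds reasonable.",
         "Alice: Then we decide to postpone it."]
summary = generate([to_semantics(parse(t)) for t in turns])
print(summary)
```

The point of the sketch is the architecture: because the summary is generated from an intermediate representation, incoherent or non-salient turns (here, Bob's acknowledgement) simply never reach the output text.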

In close collaboration with Prof. Rodolfo Delmonte of the University of Venice, we decided to apply a powerful parser he has developed over the last 30 years, which can produce the semantic and discourse representations of meeting content that we need to generate abstractive summaries of meeting discussions. The results obtained so far in preliminary experiments are quite encouraging.

2.3 Expected Impact in Knowledge Management

Automatic summarization of meetings would be a very innovative tool that would allow enterprises to extract tacit knowledge from meeting records and integrate it into corporate knowledge bases.

For instance, one could ask the system questions like:
  • "Why was this decision made?"
  • "Who rejected the proposal made by X?"
  • "How did the decision to do X impact the progress of project Y?"
These kinds of queries are currently very hard to answer automatically because one has to consider several information sources at the same time and know the context of the meeting discussions (e.g. the projects, the people, the roles, the duties, the agenda, the corporate policies and strategies). Moreover, answering them requires a deep understanding of the meeting situation, such as the group dynamics, the rules of order adopted, the specific language used, and the culture-specific rituals.

Producing automatic summaries of meetings would benefit the enterprise because:
  • It would turn implicit and tacit information into explicit, actionable knowledge.
  • It would save time: people who did not attend a meeting could get a quick, insightful glimpse of what happened without having to replay the whole recording with a meeting replay tool.
  • It would help track the advancement of projects or people's performance by mining data over several meetings and checking, for instance, whether action items and assigned tasks have been accomplished within their deadlines.
In more general terms, this approach to the summarization of spoken data would also help cope with the information overload arising from the vast amount of audio-visual data on TV channels and on the Internet, such as broadcast news, talk shows, podcasts, and webinars.


[Fillmore 2003] Fillmore C. Form and Meaning in Language. CSLI Publications n. 121. Stanford, 2003.

[Hahn et al. 2000] U. Hahn and I. Mani. “The Challenges of Automatic Summarization”. IEEE Computer 33(11): pp.29-36, 2000.

[Inderjeet 2001] Inderjeet Mani. Automatic Summarization. John Benjamins. 2001.

[Kunz and Rittel 1970] W. Kunz and H.W.J. Rittel. Issues as Elements of Information Systems. Working Paper no. 131, Institut für Grundlagen der Planung, Universität Stuttgart, 1970.

[Maybury 2000] Interview with Mark Maybury, director of the MITRE research laboratory, 2000.

[Maybury 1997] Mark Maybury (ed.). Intelligent Multimedia Information Retrieval. AAAI/MIT Press, 1997. 350 pages, ISBN 0-262-63179-2.

[Murray et al. 2005] G. Murray, S. Renals and J. Carletta. "Extractive summarization of meeting recordings". In Proceedings of the 9th European Conference on Speech Communication and Technology, pp. 593-596, 2005.

[Murray et al. 2008] G. Murray, T. Kleinbauer, P. Poller, S. Renals, J. Kilgour and T. Becker. "Extrinsic Summarization Evaluation: A Decision Audit Task". In Proceedings of MLMI 2008, pp. 349-361, 2008.

[Pallotta 2003] Pallotta V. "A computational dialectics approach to meeting tracking and understanding". In Giacalone-Ramat, Rigotti and Rocci (eds.), special issue on "Linguistics and New Professions", Materiali Linguistici, Franco Angeli, October 2003.

[Pallotta et al. 2004] Pallotta V., Lisowska A. and Marchand-Maillet S. "Towards Meeting Information Systems: Meeting Knowledge Management". In Proceedings of the ICEIS 2004 International Conference, 14-17 April 2004, Porto, Portugal.

[Pallotta et al. 2005] Pallotta V., Niekrasz J. and Purver M. "Collaborative and Argumentative Models of Meeting Discussions". In Proceedings of the CMNA-05 International Workshop on Computational Models of Natural Argument (part of IJCAI 2005), 30 July 2005, Edinburgh, UK.

[Pallotta 2006] Pallotta V. "Framing Arguments". In Proceedings of the International Conference on Argumentation, June 2006, Amsterdam, Netherlands.

[Pallotta et al. 2007] Pallotta V., Seretan V. and Ailomaa M. "User Requirements Analysis for Meeting Information Retrieval Based on Query Elicitation". In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007), Prague, 25-27 June 2007.

[Waibel et al. 2001] A. Waibel, M. Bett, F. Metze, K. Ries, T. Schaaf, T. Schultz, H. Soltau, H. Yu and K. Zechner. "Advances in Automatic Meeting Record Creation and Access". In Proceedings of ICASSP 2001, Salt Lake City, 2001.