Lab Overview CLEF eHealth 2019
In today’s information-overloaded society, it is increasingly difficult to retrieve and digest valid, relevant information for making health-centered decisions. Medical content is becoming available electronically in a variety of forms, ranging from patient records, medical dossiers, scientific publications, and health-related websites to medical topics shared across social networks. Laypeople, clinicians, and policy-makers need to easily retrieve and make sense of medical content to support their decision making. Information retrieval systems are commonly used to access health information available online. However, the reliability, quality, and suitability of this information for the target audience vary greatly, while high recall or coverage, that is, finding all relevant information about a topic, is often as important as high precision, if not more so. Furthermore, information seekers in the health domain often have difficulty expressing their information needs as search queries.
CLEF eHealth aims to bring together researchers working on related information access topics and to provide them with datasets for developing and validating their approaches. In this, the 7th year of the lab, we offer the following three tasks.
Task 1. Multilingual Information Extraction [*new challenge this year]
Task 2. Technologically Assisted Reviews in Empirical Medicine
Task 3. Consumer Health Search
The lab also offers a student mentoring track. Contact the lab chairs if you are interested in taking part in this track.
The vision for the lab is two-fold: (1) to develop tasks that can improve patients’ understanding of medical information, and (2) to provide the community with an increasingly sophisticated dataset of clinical narrative, enriched with links to evidence-based care guidelines, systematic reviews, and other supporting information, to advance the state of the art in multilingual information extraction and information retrieval in health care. Furthermore, we aim to support reproducible research by encouraging participants to reflect on the methods and practical steps needed to facilitate the replication of their experiments. In particular, we call on participants to submit their systems and configuration files, and on independent researchers to reproduce the results of the participating teams.