Search filters

Authors
Julie Glanville
Nicole Askin
Jill Boland
Nadia Corp
Mark Engelbert
Cecily Gilbert
Lydia Jones
Vanessa Kitchin
Shae Martinez

We would like to acknowledge the input of past authors: Mick Arber, Hannah Wood and Kath Wright.

Last updated: 7 December 2023

What's new in this update

The chapter was completely reviewed and updated in November 2022. In December 2023 we added links to collections of search filters and to examples of machine learning classifiers.

What are search filters?

Search filters (sometimes called hedges) are collections of search terms designed to retrieve selections of records from a bibliographic database (1). They may target research using a specific study design (e.g. randomised controlled trial) or topic (e.g. kidney disease) or some other feature of the research question (such as the age of participants). They are usually combined with the results of a subject search using the AND operator. Sometimes filters may be combined with a subject search using NOT to exclude records based on a particular feature (for example to remove animal studies). 
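
For example, a simplified Ovid MEDLINE strategy might combine a subject search (lines 1-3) with study design terms (lines 4-6) using AND and then use NOT to remove records indexed only as animal studies (lines 8-9). The study design lines here are illustrative only and are not a validated filter:

  1. exp Kidney Diseases/
  2. (kidney disease* or renal disease*).ti,ab.
  3. 1 or 2
  4. randomized controlled trial.pt.
  5. randomi?ed.ti,ab.
  6. 4 or 5
  7. 3 and 6
  8. exp animals/ not humans.sh.
  9. 7 not 8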

Why would you use a search filter?

When included in a database search strategy, a search filter can reduce the number of records that researchers need to sift; research has shown that this improved search efficiency is a key use of search filters (2). Search filters also offer value because they are developed through research, testing and validation, meaning that searchers can benefit from other people’s investment of time and expertise, particularly for broad or challenging topics. Search filters are not, however, available for all study types.

Key features

Filters are typically designed for one purpose, which may be to maximise sensitivity (or recall) or to maximise precision (and thus reduce the number of irrelevant records that need to be screened or assessed for relevance). Sensitivity is the proportion of relevant records that are retrieved by the filter and is the most frequently reported performance measure (3). Precision is the proportion of retrieved records that are relevant and is less frequently reported (3). Specificity is the proportion of irrelevant records that are correctly not retrieved. Performance measures, such as sensitivity and precision, can be difficult to interpret and compare. Alternative graphical approaches to presenting performance information may assist with making decisions about which filter to select (3).
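
As a summary of the definitions above (where TP = relevant records retrieved, FP = irrelevant records retrieved, FN = relevant records missed and TN = irrelevant records not retrieved):

  sensitivity = TP / (TP + FN)
  precision   = TP / (TP + FP)
  specificity = TN / (TN + FP)

For example (illustrative figures), if a database contains 100 relevant records and a filter retrieves 900 records of which 90 are relevant, its sensitivity is 90/100 = 90% and its precision is 90/900 = 10%.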

Filters are usually specific to the databases for which they are designed and the interface through which a database is searched. 

The terms used in a study design filter typically include thesaurus (subject index) headings (e.g. Medical Subject Headings (MeSH) for MEDLINE filters) and text words in the title, abstract and author keywords. Filters may also use other database-specific indexing, such as subheadings or publication types, or other fields, such as the author address or journal name, depending on their usefulness for the filter question.
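
For example, in Ovid MEDLINE syntax these elements might appear as follows (the lines illustrate field syntax only and do not constitute a filter):

  exp Randomized Controlled Trials as Topic/    (an exploded MeSH heading)
  randomized controlled trial.pt.               (a publication type)
  random*.ti,ab,kf.                             (a truncated text word searched in title, abstract and author keywords)
  drug therapy.fs.                              (a floating subheading)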

Where can you find search filters?

Search filters of interest to researchers producing technology assessments are incorporated into some database interfaces. For example, they are labelled as Clinical Queries in PubMed (4) and Expert Searches in the Ovid interface (https://tools.ovid.com/ovidtools/expertsearches.html). Often searchers ‘translate’ filters or adapt them to run on different interfaces (2). Translations and adaptations should be undertaken carefully since different interfaces function in different ways, and different databases may have different indexing languages (5).
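
For example, the same publication type limit requires different syntax in different interfaces, and an apparently equivalent line may behave differently because of differences in underlying indexing (the Embase.com line shown uses the Emtree thesaurus rather than a publication type):

  Ovid MEDLINE:  randomized controlled trial.pt.
  PubMed:        randomized controlled trial[pt]
  Embase.com:    'randomized controlled trial'/de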

Study design search filters can also be identified from internet resources such as the ISSG Search Filter Resource website.

Subject search strategies can be found in various collections. A selected list is provided on the ISSG Search Filter Resource website.

Some guidance documents for the conduct of health technology assessments recommend specific filters and others leave the choice to the discretion of the searcher.

Machine learning classifiers can be characterised as search filters and are increasingly available within systematic review software and as standalone tools. The ISSG Search Filter Resource provides some examples of machine learning classifiers.

Critical appraisal of filters

Authors who publish filters should clearly describe the methods they used to compile them. It is also valuable to have access to critical assessments of filters that are being used in daily practice. Search filter development methods have evolved over time to become more objective and rigorous (1, 4). The quality of a search filter can be appraised using critical appraisal tools (1, 6, 7), which assess the focus of the filter, the methods used to create it, and the quality of the testing and validation conducted to ensure that it performs to a specific level of sensitivity, precision or specificity.

It is also important to know the date when a filter was created so that its currency can be assessed. The value of a search filter may decrease over time as new terms are added to a database thesaurus or as terminology changes.

Search filters are not quality filters: they do not identify only high-quality research evidence. All records retrieved using a search filter will require an assessment of relevance and quality. All search filters and search strategies are compromises, and an assessment of the performance of filters for each technology appraisal is recommended.

The increasing number of filters has led to assessments of the relative performance of different filters designed to find the same study design, and these assessments can be a good starting point for deciding which filter to use (8, 9). Performance reviews save time because they survey a range of filters and offer an overview of how they perform, potentially removing the need for a searcher to read many original filter papers. A systematic review of the performance of a large number of diagnostic test accuracy (DTA) filters recommended that search filters should not be used as the only method for searching for DTA studies for systematic reviews and technology appraisals (8). The review concluded that the filters risk missing relevant studies and do not offer benefits in terms of enhanced precision. A comparison study (9) of the performance of search filters used to identify economic evaluations concluded that, while highly sensitive filters are available, their precision is low. The performance data provided in that study can help researchers select the filter most appropriate to their needs. Another study (10) demonstrated that a search filter with adequate precision and sensitivity to identify epidemiological studies in MEDLINE was not yet available.

Search filter development

Creating a search filter to identify database records of a specific study design or some other feature requires a "gold standard" reference set that can be used to measure performance. The reference set is usually created by using the relative recall approach (11) or by handsearching (6).
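
In the relative recall approach, the reference set is typically assembled from records already known to be relevant, for example the included studies of a sample of published systematic reviews. A sketch of the calculation, following (11):

  relative recall = reference set records retrieved by the filter / all records in the reference set

So, as an illustration, a filter retrieving 475 records of a 500-record reference set would have a relative recall of 95%.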

A case study (12) describes how a gold standard set was created to support the development of a prognostic filter for studies of oral squamous cell carcinoma in MEDLINE. The methods used are generic and could be applied both to other databases and to other types of research studies. The authors use a flowchart to illustrate the overall process and describe each of its stages.

Filter authors may determine the size of their gold standard using a statistical method (13).
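
As an illustration only (the statistical method used in any given study may differ), a standard sample size calculation for estimating a proportion such as sensitivity to a chosen margin of error is:

  n = z² × p × (1 − p) / d²

where p is the anticipated sensitivity, d is the acceptable margin of error and z is the standard normal value for the chosen confidence level (1.96 for 95% confidence). For example, estimating an anticipated sensitivity of 0.9 to within ±0.05 would require approximately 1.96² × 0.9 × 0.1 / 0.05² ≈ 139 relevant records.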

Filter authors may identify the search terms to test using a variety of methods, sometimes in combination: options include statistical analysis of the frequency of terms occurring in the gold standard records, terms used in previously published filters, and terms suggested by experts.

Filter authors will usually develop and test their filters using a test set, often a subset of the gold standard reference set, and will then validate their filters on a separate validation set of relevant records or in a real-world collection of relevant records.

There are recommendations on how to report search filter performance (13).

Reference list

(1) Jenkins M. Evaluation of methodological search filters - a review. Health Info Libr J. 2004;21:148-163. [Publication appraisal]

(2) Beale S, Duffy S, Glanville J, Lefebvre C, Wright D, McCool R, Varley D, Boachie C, Fraser C, Harbour J et al. Choosing and using methodological search filters: searchers' views. Health Info Libr J. 2014;31(2):133-147. [Publication appraisal]

(3) Harbour J, Fraser C, Lefebvre C, Glanville J, Beale S, Boachie C, Duffy S, McCool R, Smith L, Varley D. Reporting methodological search filter performance comparisons: a literature review. Health Info Libr J. 2014;31(3):176-194. [Publication appraisal]

(4) Wilczynski NL, Morgan D, Haynes RB; Hedges Team. An overview of the design and methods for retrieving high-quality studies for clinical care. BMC Med Inform Decis Mak. 2005;5:20. [Publication appraisal]

(5) Glanville J, Foxlee R, Wisniewski S, Noel-Storr A, Edwards M, Dooley G. Translating the Cochrane EMBASE RCT filter from the Ovid interface to Embase.com: a case study. Health Info Libr J. 2019;36:264-277.

(6) Glanville J, Bayliss S, Booth A, Dundar Y, Fernandes H, Fleeman ND, Foster L, Fraser C, Fry-Smith A, Golder S, Lefebvre C, Miller C, Paisley S, Payne L, Price A, Welch K. So many filters, so little time: the development of a Search Filter Appraisal Checklist. J Med Libr Assoc. 2008;96(4):356-361. [Publication appraisal]

(7) Bak G, Mierzwinski-Urban M, Fitzsimmons H, Morrison A, Maden-Jenkins M. A pragmatic critical appraisal instrument for search filters: introducing the CADTH CAI. Health Info Libr J. 2009;26(3):211-219. [Publication appraisal]

(8) Beynon R, Leeflang MM, McDonald S, Eisinga A, Mitchell RL, Whiting P, Glanville JM. Search strategies to identify diagnostic accuracy studies in MEDLINE and EMBASE. Cochrane Database Syst Rev. 2013, Issue 9. [Publication appraisal]

(9) Glanville J, Kaunelis D, Mensinkai S. How well do search filters perform in identifying economic evaluations in MEDLINE and EMBASE. Int J Technol Assess Health Care. 2009;25(4):522-529. [Publication appraisal]

(10) Waffenschmidt S, Hermanns T, Gerber-Grote A, Mostardt S. No suitable precise or optimized epidemiologic search filters were available for bibliographic databases. J Clin Epidemiol. 2016;82:112-118. [Publication appraisal]

(11) Sampson M, Zhang L, Morrison A, Barrowman NJ, Clifford TJ, Platt RW, et al. An alternative to the hand searching gold standard: validating methodological search filters using relative recall. BMC Med Res Methodol. 2006;6:33. [Publication appraisal]

(12) Frazier JJ, Stein CD, Tseytlin E, Bekhuis T. Building a gold standard to construct search filters: a case study with biomarkers for oral cancer. J Med Libr Assoc. 2015;103(1):22-30. [Publication appraisal]

(13) Lefebvre C, Glanville J, Beale S, Boachie C, Duffy S, Fraser C, et al. Assessing the performance of methodological search filters to improve the efficiency of evidence information retrieval: five literature reviews and a qualitative study. Health Technol Assess. 2017;21(69).



How to cite this chapter:

Glanville J, Askin N, Boland J, Corp N, Engelbert M, Gilbert C, Jones L, Kitchin V, Martinez S. Search filters.  Last updated 7 December 2023. In: SuRe Info: Summarized Research in Information Retrieval for HTA. Available from: https://www.sure-info.org//search-filters

Copyright: the authors