Lotte Meteyard




Clinical academic

Speech and Language Therapist

Researching aphasia rehabilitation, with a sideline in statistics

Find me on Twitter

My profile on ORCID

My profile on Google Scholar

If you're interested in collaborating, see the Research page

Acquired Brain & Communication Disorders Lab at University of Reading.

Visit our Lab website! Find the lab on Twitter

New publications

From informal to formal: the preliminary psychometric evaluation of the short aphasia test for Gulf Arabic speakers (SATG).

Altaib, M. K., Falouda, M., & Meteyard, L. (2020). From informal to formal: the preliminary psychometric evaluation of the short aphasia test for Gulf Arabic speakers (SATG). Aphasiology, 1-19. https://doi.org/10.1080/02687038.2020.1765303

Speech and language therapists in Gulf Arabic countries still rely on informal aphasia assessments and/or translated Western-language assessments to assess the language abilities of people with aphasia. However, these tests are not sensitive to the linguistic and cultural features of the Arabic language, which may lead to inaccurate diagnosis. This paper describes the preliminary development and psychometric evaluation of the short aphasia test for Gulf Arabic speakers (SATG). Three phases determined whether subtests and tasks were culturally and linguistically appropriate for Gulf Arabic populations. The test consists of six sections that assess different language skills: semi-spontaneous speech, auditory comprehension, repetition, naming, automatic speech and recitation, and reading and writing. Together, these aim to detect the presence or absence of aphasia and provide a broad classification of aphasia syndrome (fluent vs. non-fluent). The SATG takes 20 minutes to complete. It was administered to 37 healthy adult controls and 31 people with aphasia post-stroke. The SATG demonstrated good to excellent reliability over time and from one clinician to another, and was found to have face and content validity.


Measures of functional, real-world communication for aphasia: a critical review

Doedens, W. J., & Meteyard, L. (2020). Measures of functional, real-world communication for aphasia: a critical review. Aphasiology, 34(4), 492-514. https://doi.org/10.1080/02687038.2019.1702848

The aim of this article is to identify which existing instrument of functional communication from the aphasia literature best fits a theoretically founded definition of real-world communication. In the field of aphasiology, there is currently no consensus on how communication should be measured. Underlying this is a fundamental lack of agreement over what real-world communication entails and how it should be defined.

We review the instruments that are currently used to quantify functional, real-world communication in people with aphasia (PWA). Each measure is checked against a newly proposed, comprehensive, theoretical framework of situated language use, which defines communication as (1) interactive, (2) multimodal, and (3) based on context (common ground). The instrument that best fits the theoretical definition of situated language use and allows for the quantification of communicative ability is the Scenario Test. More work is needed to develop an instrument that can quantify communicative ability across different aphasia types and severities.


Best practice guidance for linear mixed-effects models in psychological science

Meteyard, L., & Davies, R. A. (2020). Best practice guidance for linear mixed-effects models in psychological science. Journal of Memory and Language, 112, 104092. https://doi.org/10.1016/j.jml.2020.104092

PDF available from ResearchGate

The culmination of five years' work, with Rob Davies (Lancaster, UK). The use of linear mixed-effects models (LMMs) is set to dominate statistical analyses in psychological science and may become the default approach to analyzing quantitative data. We examined the diversity in how LMMs are used and applied, using two methods: a survey of researchers (n = 163) and a quasi-systematic review of papers using LMMs (n = 400). The survey revealed substantive concerns among psychologists using or planning to use LMMs, and an absence of agreed standards. The review of papers complemented the survey. Most worryingly, we found huge variation in how models were reported, making meta-analysis or replication nearly impossible. Using these data as our departure point, we present a set of best practice guidance focused on the reporting of LMMs. We review and discuss current best practice approaches, provide easy-to-read summaries (in a table and in bullet points), and give example tables for reporting model comparisons and results.
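As a minimal illustration of the kind of model the guidance covers (not an example from the paper itself, which is software-agnostic; many researchers use R's lme4 instead), here is a sketch in Python using statsmodels: simulated reaction times with a fixed effect of condition and a random intercept per subject. The variable names and simulated numbers are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate reaction times for 20 subjects, 30 trials each:
# a fixed effect of condition (+80 ms) plus a random intercept
# per subject (SD 50 ms) and residual noise (SD 30 ms).
n_subj, n_trials = 20, 30
subj = np.repeat(np.arange(n_subj), n_trials)
cond = np.tile(np.array([0, 1]), n_subj * n_trials // 2)
subj_intercept = rng.normal(0, 50, n_subj)[subj]
rt = 500 + 80 * cond + subj_intercept + rng.normal(0, 30, n_subj * n_trials)

df = pd.DataFrame({"rt": rt, "condition": cond, "subject": subj})

# Random-intercept model: rt ~ condition + (1 | subject) in lme4 notation.
model = smf.mixedlm("rt ~ condition", df, groups=df["subject"])
fit = model.fit()

# Report fixed-effect estimates with SEs and the random-effect variance;
# the guidance stresses making all of these explicit when reporting.
print(fit.summary())
```

The fitted summary exposes exactly the quantities the guidance asks authors to report: fixed-effect coefficients with standard errors, the grouping structure, and the estimated random-effect variance.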