This module introduces foundational concepts for evidence-based and best-practice screening using standardized screening assessments. Key terminology, publisher protocols, and application to the Get Ready to Grow screening program are explored.
Learning Objectives Include:
Define key psychometric terminology necessary for the use of standardized assessments.
List clinical questions to confirm language status (proficiency).
Discuss strategies to write clinical notes that include evidence from standardized screening tools.
Standardized testing is used to assess (or screen) specific area(s) of development. All standardized measures have three distinctive qualities:
Objective procedures for administration and scores (i.e., there are clear rules to give the test and score the test)
Quantitative scores (i.e., there are numerical values that describe performance on the test)
Psychometric properties (i.e., the statistical foundations of the test allow for increased certainty that the test measures what it says it does (validity) and is consistent (reliability))
Review the following foundational terminology associated with standardized testing.
Assessment – Any systematic method of obtaining information from tests and other sources, used to draw inferences about characteristics of people, objects, or programs. Assessment aids educational decision-making by securing valid and reliable information from multiple sources of evidence (combinations of performance, products, exhibits, discourse, tests, etc.) and integrating this information in order to judge whether students have learned what is expected.
Age-Based Norms – Developed for the purpose of comparing a student’s score with the scores obtained by other students at the same age on the same test. How much a student knows is determined by the student’s standing or rank within the age reference group. For example, a norms table for 12-year-olds would provide information such as the percentage of 12-year-olds (based on a nationally representative sample) who scored at or below each obtainable score value on a particular test. Compare to Grade-Based Norms.
Derived Score – A score to which raw scores are converted by numerical transformation (e.g., conversion of raw scores to standard scores, percentile ranks, grade equivalents, stanines, etc.)
Item – A general term referring to a single statement, question, exercise, problem, or task on a test or evaluative instrument for which the test taker is to select or construct a response, or to perform a task. Includes all the elements of an item as a collective unit: the stem, response options, prompt, stimulus, etc.
Item Analysis – The process of studying examinee responses to single test questions in order to determine the quality of each item with respect to certain characteristics such as item difficulty, item discrimination, and correlation with an external criterion, etc.
Percentile - The score or point in a score distribution at or below which a given percentage of scores fall. For example, if 72 percent of the students score at or below a score of 25 on a given test, then the score of 25 would be considered at the 72nd percentile. Contrast to Percentile Rank and Percent Correct scores.
Percentile Rank (PR) - The percentage of scores in a specified distribution that fall at or below the point of a given score. Percentile Ranks range in value from 1 to 99, and indicate the status or relative standing of an individual within a specified group (e.g., norms group), by indicating the percent of individuals in that group who obtained lower scores. For example, if a student earned a 72nd Percentile Rank in Language, this would mean he or she scored better than 72 percent of the students in a particular norm group who were administered that same test of Language. This also implies that only 28 percent (100 - 72) of the norm group scored the same or higher than this student. Note, however, that an individual’s percentile rank can vary depending on which group is used to determine the ranking. A student is simultaneously a member of many groups: classroom, grade, building, school district, state, and nation. Test developers typically publish different sets of percentile ranks to permit schools to make the most relevant comparisons possible.
Protocol - A record of events. A test protocol usually consists of the test record and test scores. Typically, a document or form used for recording the results of an individually administered test.
Psychometric - Pertaining to the quantitative measurement of academic, psychological or mental characteristics such as abilities, aptitudes, knowledge, skills, and traits.
Raw Score (RS) - The first unadjusted score obtained in scoring a test. A Raw Score is usually determined by tallying the number of questions answered correctly or by the sum or combination of the item scores (i.e., points). However, a raw score could also refer to any number directly obtained by the test administration (e.g., a raw score derived by formula scoring, the amount of time required to perform a task, the number of errors, etc.). In individually administered tests, raw scores could also include points credited for items below the basal. Raw Scores typically have little meaning by themselves. Interpretation of Raw Scores requires additional information such as the number of items on the test, the difficulty of the test items, norm-referenced information (e.g., Percentile Ranks, Grade Equivalents, Stanines, etc.), and/or criterion-referenced information (e.g., cut-scores).
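The percentile-rank definitions above can be made concrete with a small worked example. The sketch below is purely illustrative (the norm-group scores are hypothetical, not Pearson norms) and uses the "percent of individuals who obtained lower scores" convention described above:

```python
# Illustrative sketch only: computing a percentile rank against a
# hypothetical norm group. Real norms tables are published by the test
# developer; this simply demonstrates the arithmetic behind the term.

def percentile_rank(score, norm_scores):
    """Percent of norm-group scores strictly below the given score."""
    below = sum(1 for s in norm_scores if s < score)
    return round(100 * below / len(norm_scores))

# Hypothetical norm group of 20 raw scores
norms = [10, 12, 13, 14, 15, 15, 16, 17, 18, 18,
         19, 20, 21, 22, 23, 24, 25, 26, 27, 28]

print(percentile_rank(22, norms))  # 13 of 20 scores fall below 22 -> 65
```

A child with a raw score of 22 would thus have a percentile rank of 65 in this (made-up) norm group: the child scored better than 65 percent of the group.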
Read and reflect on the following quote from ASHA's Leaders Project (2013):
"Linguistic bias can be bias towards speakers of other languages or dialects, or towards bilingual speakers and results in inaccurate assessment of children from linguistic backgrounds other than Standard American English. As a result, minorities are overrepresented in special education programs (IDEA 2004).
Bias is evident both in the examiner and in testing materials. Bias in the examiner is often caused by a lack of the necessary knowledge and skills required by the American Speech Language Hearing Association (ASHA) to assess children from culturally and/or linguistically diverse backgrounds (ASHA, 2004). Examiners working with speakers of other languages may not have enough fluency in the language or may not be familiar with expectations typical of the child’s culture (see cultural bias). Regarding dialects, some examiners may not be familiar with the dialect of the student and mistake typical constructions for evidence of a language disorder."
The goal of each evaluator/screener/clinician should be to conduct a "LEAST-BIAS ASSESSMENT." Least-bias assessment always begins with the clinician/screener and the process of self-reflection and self-awareness. It is critical that clinicians acknowledge their own norms and biases (explicit and implicit) in order to engage with culturally and linguistically diverse clients in responsive ways.
SUGGESTED: Complete one of the self-assessment tests from Harvard's Project Implicit (LINK HERE). Reflect on the results and how they might influence developmental screenings (with children, families and diverse providers).
Beyond self-awareness, clinicians/providers/screeners must demonstrate cultural and linguistic awareness so that children from diverse cultural and linguistic backgrounds are not misidentified as having speech or language concerns. It is critical that all screeners differentiate DISORDER from DIFFERENCE. Disordered development includes delays or atypical development; however, variations from expected development may also result from cultural or linguistic variation (e.g., bi/multilingual speakers or speakers of a dialect other than Standard American English). Screeners should be prepared to work with children who are learning English as a second language and who speak various dialects of English (e.g., African American English, also called Black English). The most critical point for screeners working with linguistically diverse children is that research has documented the unique features of dialects and of English language learning; although these differences may resemble a DELAY/DISORDER, we must not marginalize language variations. Rather, screeners must acknowledge and explain differences as linguistic VARIATION consistent with the child's language traditions. In other words, we should not apply a deficit perspective to linguistically diverse speakers.
SUGGESTED: Review the Bland-Stuart (2005) article about "Difference vs. Disorder" and consider the features of AAE; further note how the author suggests we discuss language variation to avoid deficit-based language.
An easy first step to avoid linguistically biased screening is to take the time to understand a child's early language experiences and language traditions. This can be done by talking with the child's primary caregivers (parents/caregivers and/or providers).
SUGGESTED: Practice asking a colleague or a friend the questions on the "Language Variation Intake" form (LINK HERE).
It is important that screeners note the following screening guidance:
If screening a child who speaks a non-standard dialect, clinicians should refer to linguistic norms in scoring and interpretation.
If screening a child who does not speak English, clinicians should work with an interpreter to administer the screening in the child's primary language.
***If an interpreter is not available, the screener should conduct a clinical interview with a primary caregiver (i.e., parent, preschool teacher) using the developmental questions. In the case that the clinical interview is the 'source of the data' for screening, the child should be re-screened in 2-4 months (i.e., a follow-up).
If screening a child who is learning English as a second language but shows basic proficiency, clinicians should consult with the family/providers to understand (and document) differences in language development and use across both languages.
If screening a child who is deaf or hard of hearing, clinicians should consult with the parents/providers to understand the child's communication (i.e., sign language, hearing aids). Screening should be conducted using a mode of language consistent with the family's communication choice.
Get Ready to Grow uses the Preschool Language Scale - 5 Screening Test (PLS Screener) as the primary screening for a child's language and speech. "The Preschool Language Scales-5 Screening Test for Early Childhood Educators (PLS™-5 Screening Test for Early Childhood Educators) helps screen a broad spectrum of speech and language skills for children. Designed specifically for early childhood specialists, the test uses terminology describing language skills that is familiar to early childhood educators, occupational therapists, and psychologists who screen children birth through age 6:11" (Pearson).
SUGGESTED: Review the PLS-5 Screening Infographic (LINK HERE) from the publisher (Pearson).
The PLS-5 Screener is a standardized measure and therefore all early childhood professionals administering the test must adhere to the GUIDELINES for ADMINISTRATION and SCORING so that the validity and the reliability of the measure are not compromised. Further, all early childhood professionals must understand how to generate a QUANTITATIVE (numerical) score so that the child's performance can be compared to age-level norms. It is the scoring, interpretation, and comparison to developmental norms that allows for high-quality screening and recommendations.
SUGGESTED: Pearson has developed a webinar on the use and administration of the PLS (LINK HERE). It is suggested that you watch the following portions:
Points to Consider (3:33-7:46)
PLS-5 Screening Overview (16:29 - 20:30)
After understanding the foundations of the PLS-5 Screening Test, screeners must spend time familiarizing themselves with the various materials. The PLS-5 Screening Kit includes: Manual (Print), Stimulus Book (Picture Book), and Record Forms (English or Spanish) for the following ages: 1 year, 2 years, 3 years, 4 years, 5 years. The stimulus book and the record forms provide the screener with the prompts necessary to complete the PLS-5 Screening Test.
SUGGESTED: Review ALL of the PLS-5 Screening Record forms (LINK HERE) and note primary differences between each age range.
Having an understanding of the purpose and materials is an important first step toward reliable and valid screening; however, there are additional clinical skills that screeners will need to learn and practice in order to effectively administer communication screening measures. The following are tips to review:
Setting up for Screening and Managing Materials
Bring a pencil (or a few)
Keep materials organized and generally to the side of your dominant hand (e.g., if you're right handed keep extra materials to your right side)
Seat the child to your non-dominant side (e.g., if you're right handed have the child seated to your left)
Consider setting up a seating arrangement at the corner of a table
Before starting the screening test complete ALL demographic information at the TOP of the screening record form (i.e., age, name, site)
Double check the child's chronological age; it is critical that you choose the age-range record form consistent with the child's age. Pearson provides a quick "chronological age calculator" (digital CA calculator here) that you can easily access from your phone.
NOTE: You will also want to be sure that you have a plan to keep up on each child's data (individual, site, Grow card)
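The chronological-age check above is simple date arithmetic. The sketch below uses the 30-day-month borrowing convention that many test publishers use for CA calculation; this is an assumption for illustration, so confirm the exact convention in the PLS-5 manual or use Pearson's calculator for official scoring:

```python
# Sketch of a chronological-age (years, months, days) calculation for
# choosing the correct age-range record form. Assumes the common 30-day-month
# borrowing convention; defer to the test manual or Pearson's CA calculator
# for the publisher's official method.
from datetime import date

def chronological_age(birth, test):
    """Return (years, months, days) between birth date and test date."""
    y = test.year - birth.year
    m = test.month - birth.month
    d = test.day - birth.day
    if d < 0:          # borrow 30 days from the months column
        d += 30
        m -= 1
    if m < 0:          # borrow 12 months from the years column
        m += 12
        y -= 1
    return y, m, d

# Hypothetical child born 8/24/2019, screened 5/10/2023
print(chronological_age(date(2019, 8, 24), date(2023, 5, 10)))  # (3, 8, 16)
```

A child who works out to 3 years, 8 months would take the 3-year record form; an error of even one borrowed month could put a child on the wrong form, which is why double-checking matters.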
Responding to Children's Answers and Responses
Do not provide ANY specific feedback on the child's performance; you can give general feedback and praise for hard work.
Avoid: "That was the right answer, we can move on." Instead: "You are working really hard!"
State the prompt exactly as written in the Picture Book and/or Record Form; for the screening test to remain reliable and valid you can NOT adjust the wording or the order of the test items.
Administer the test items in the EXACT order on the record form.
You can provide reasonable wait time after giving the stimulus. You can encourage a response, but you can NOT prompt a correct answer. Do not emphasize any particular words in the test items or point in the direction of the correct answer.
Sometimes other aspects of developmental screening are more engaging for children. If you find that the communication portion of the screening is less preferred, consider doing another aspect (e.g., motor) and then returning to the communication screening.
Take data on the child's responses for EACH item as you go along; under no circumstances should you rely on memory or plan to record responses after the child has completed all the items. It is suggested that you 'keep up' with scoring each item and set of items while the child is working through the screening test.
SUGGESTED: Watch the administration of the PLS-5 Screening Test to a child
The PLS-5 Screening Test includes DIRECT ASSESSMENT measures for receptive and expressive language. Further, the screening test provides rating scales for other aspects of communication: speech, fluency, and social communication/pragmatics. To complete a COMPREHENSIVE developmental screening of communication, screeners must consider not only performance on the language measures, but also consider other aspects of the child's communication.
SUGGESTED: Review the PLS-5 Screening Record forms (LINK HERE) and take particular note of the SECOND PAGE of each aged record form. Note the speech, fluency, and social/pragmatic items included.
After completing the direct assessment for the language and the checklist items for speech, fluency and pragmatics/social the screener needs to consider (1) the child's present level of performance (as compared to age-level developmental expectations) and (2) cultural or linguistic considerations that may influence clinical decision making. The following are high-level clinical reflection questions to consider after completing all aspects of the PLS-5 Screening Tool:
SPEECH: What amount of the child's speech is understood (intelligibility)? Is this consistent with developmental expectations? Are there sounds the child is NOT making that we would expect for their age range? Is the child consistently omitting syllables or ends of words? Did you note any structural or functional concerns with the child's oral structures (e.g., open mouth posture, excessive drooling, cleft palate, asymmetry in face)?
RECEPTIVE LANGUAGE: What types of questions does the child answer? Does the child seem to follow directions and instructions associated with taking the test and completing the test? Does the child point, use gestures, or verbally respond to other children and the adults?
EXPRESSIVE LANGUAGE: How does the child communicate wants and needs? Does the child tell stories consistent with their age range? How many words does the child use? How many words does the child put together in a sentence?
SOCIAL PRAGMATIC: Does the child communicate for a range of purposes? Does the child establish joint attention in play or conversation? Is there reciprocity (and joy) in interactions with familiar adults and children? What types of gestures and non-verbal communication (e.g., body language and facial expression) does the child use to supplement spoken communication? What does play look like for this child?
FLUENCY: Were any stuttering moments observed or reported? Are there any signs of clinical stuttering (e.g., blocking, prolongations)? Is the child demonstrating fluctuations in fluency across settings or over time? Is the child withdrawing from interactions with others as a result of stuttering? Are there any secondary behaviors noted associated with stuttering moments (i.e., behavioral responses to stuttering)?
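The intelligibility question in the SPEECH section above comes down to a simple ratio: words understood out of words attempted. The sketch below uses a hypothetical language sample; the age benchmark in the comment is a commonly cited rule of thumb, not a PLS-5 norm, so confirm expectations against the screening manual:

```python
# Rough sketch of an intelligibility estimate from a language sample.
# The data are hypothetical; the ~75%-at-age-3 figure is a commonly cited
# rule of thumb, not a publisher norm.

def intelligibility(words_understood, total_words):
    """Percent of the child's attempted words the listener understood."""
    return 100 * words_understood / total_words

# Hypothetical 3-year-old sample: screener understood 30 of 50 words
pct = intelligibility(30, 50)
print(f"{pct:.0f}% intelligible")  # 60% - below the ~75% often expected at age 3
```

An estimate like this, noted alongside the specific error patterns observed, gives concrete support to an "intelligibility below developmental expectations" judgment.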
SUGGESTED: Choose a case study from Pearson's website (resources tab) and review.
It is important to note, if the direct assessment of language and communication yields inconsistent results the screener has several options to inform clinical decisions:
Conduct supplemental tasks (e.g., play-based activities, narrative tasks) to try to elicit the specific skills/behaviors not previously observed/documented.
Interview the primary caregiver or provider to determine if the specific skills/behaviors not previously noted are observed to occur in more familiar (or naturalistic) settings.
Recommend a 'follow-up' screening in 2-4 months to check on the child's development and to re-look at the specific skills/behaviors not observed. This follow-up should only be conducted if there is not significant clinical concern.
Accurately capturing screening results and communicating to all stakeholders in a consistent fashion is critical. The data must be complete and accurate in all of the formats for reporting; these formats are as follows:
Individual/Child Summary Form: The child summary form serves as the central place to capture ALL screening results that are entered into the COMET system. The overall results (e.g., on-track, follow-up, refer) must be indicated for EACH area screened. Further, specific aspects of communication development need to be reported (e.g., receptive language, speech, pragmatics). The child summary form provides an area for comments. If a follow-up or referral is recommended for the child, the screener should provide a list of the specific areas of concern that justified the recommendation. The following are examples of clinical notes for a child 'referred' to CPSE due to speech-language concerns:
Intelligibility less than 50% in play and classroom contexts, multiple phonological processes noted that impacted overall intelligibility, early developing sounds were often in error.
Child did not establish joint attention with the screener or with peers in the classroom, the child made some vocalizations in response to play and questions, but did not put words together to functionally communicate, providers reported a high level of frustration and behavioral responses when the child's wants and needs were unclear. Overall, the child had a vocabulary of less than 50 words.
Grow Card: The Grow card serves as the primary tool to communicate results to the family. This should include the summary results (i.e., check boxes) as well as a note in the space provided. If there are concerns regarding development and a referral is being made, the corresponding referral form should be filled out and attached to the Grow card; the referral should include the same type of clinical detail noted in the child summary form. The following are sample notes for a child with and without a referral:
It was great to meet [CHILD] today. [CHILD] was cooperative and completed all screening measures. All aspects of development screened today were on-track.
It was great to meet [CHILD] today. [CHILD] was interactive and completed all screening measures. Please see the attached referral form for XX (e.g., hearing).
See attached forms for next steps.
Site Summary Report: This is a summary report provided to the on-site partner. The purpose of this is to ensure the on-site partner has record of each of the children screened and their overall results. All results should be consistently reported and clinical notes from the child summary and Grow card should be similarly documented.
When screeners note REFERRAL in any of the speech-language areas, a SPEECH SCREENING REPORT should be completed and attached to the GROW CARD (for the family). It is critical that clinical detail and justification are noted when referrals or recommendations are put forth. Referrals to EI (0-3 years) or CPSE (preschool ages, 3-5 years) can address any of the communication domains (e.g., speech, language, social communication/pragmatics, voice, fluency). The SPEECH SCREENING REPORT includes a section to comment on the child's strengths and indicate the areas of communication concern. It is suggested that you briefly comment on the primary areas of concern; these points may be shared with the referring agency when the parent/caregiver requests an evaluation. The following are samples that refer back to the 'red flag' section of this training module:
Accuracy of SPEECH Sounds: "The child was difficult to understand. The following sound errors were noticed: /s, f, t, g, l/. A full evaluation is recommended."
LANGUAGE:
Auditory Comprehension (Receptive Language): "The child was not following simple 1-2 step directions, was not providing on topic answers to common questions, and did not identify common pictures. A full evaluation is recommended."
Verbal Expression (Expressive Language): "The child was noted to use fewer than 20 words." or "The child is not using gestures or spoken language to communicate; they are reported to be frustrated and have behavioral responses when trying to get wants and needs met." or "The child is not putting 3-5 word sentences together."
VOICE: "The child's vocal quality was noted to be different than same aged children; the child's pediatrician/primary care should be consulted. A full evaluation is recommended."
Conversing with Others (SOCIAL/PRAGMATIC): "The child did not establish joint attention...did not respond to their name...had difficulty following the social routine...A full evaluation is recommended."
FLUENCY: "The child's speech included stuttering moments such as part-word repetitions and sound prolongations. A full evaluation is recommended."
Expected aspects of development serve as the primary justification for these sections, but clinicians may also consider conducting a quick ITEM ANALYSIS. An item analysis examines responses to individual test items (questions) in order to assess the quality of those items and of the test as a whole. In screening, it is often helpful to look for patterns of performance across items and use those patterns (with examples) as justification for your recommendation/referral.
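The two classic item-analysis statistics mentioned in the terminology section (item difficulty and item discrimination) can be sketched in a few lines. The data below are hypothetical 0/1 item scores, and this is a generic textbook calculation, not the publisher's method:

```python
# Minimal item-analysis sketch: item difficulty (proportion correct) and a
# simple upper-minus-lower discrimination index, over hypothetical 0/1
# responses. Illustrative only; not Pearson's procedure.

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (p-value)."""
    return sum(responses) / len(responses)

def discrimination(responses, totals):
    """Item p-value for top-half scorers minus bottom-half scorers."""
    order = sorted(range(len(totals)), key=lambda i: totals[i])
    half = len(order) // 2
    lower = [responses[i] for i in order[:half]]
    upper = [responses[i] for i in order[-half:]]
    return item_difficulty(upper) - item_difficulty(lower)

# Hypothetical data: one item's 0/1 scores and each child's total test score
item = [1, 0, 1, 1, 0, 1, 0, 1]
totals = [20, 8, 18, 15, 7, 19, 9, 16]

print(item_difficulty(item))         # 0.625 (5 of 8 correct)
print(discrimination(item, totals))  # 0.75: high scorers pass it far more often
```

A positive discrimination index like this says the item separates stronger from weaker performers, which is the kind of pattern-of-performance evidence that can support a recommendation.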
When screeners note a need for FOLLOW-UP in one of the communication areas, a general note should be included that indicates:
Area(s) that re-screening will address (e.g., speech sounds)
The expected timeline for re-screening (e.g., 3-4 months)
Reference to any parent education tools that are included (e.g., highlighting the tips in the materials provided)