Here at PLSAS, we have a wonderful tool at our disposal called FastBridge Learning (FBL). While the data can be powerful, we also understand that it can be difficult to work through, and we have the added challenge of having implemented it during the pandemic. The Assessment team, plus friends, wanted to put together some resources so that all of us throughout the system have the same working knowledge of this tool and how to use it. This site is designed to be a self-paced learning platform to support you in your implementation of FBL. If this is newer to you, take your time and step away if it becomes overwhelming. If you are deeper into it, review these materials and support a colleague! Let us know what helped and what other supports you need by reaching out to Jennie Zumbusch at jzumbusch@plsas.org. Thanks!
To ensure continuity and a shared understanding, we want to present some beliefs we hold about data:
Data rarely gives us an answer. It typically leads us to further questions and reflections.
A single data point never tells the entire story about an individual. Multiple data points should be used to form the complete picture of a learner and their progress.
Data is developmental, not judgmental. It is there for us to use as a reflective tool for improvement and to figure out what IS working.
There are multiple possible explanations for why a data point might be where it is. We want to explore multiple possibilities of the ‘why’ before we draw any conclusions. Be curious!
Standards-Aligned Assessment examples: knowledge checks, unit assessments, MCAs.
Skills-Based Assessment examples: the FBL suite of assessments.
We know from the science of reading and a large corpus of early literacy research that reading skill development follows a predictable sequence for most students: letter recognition → letter-sound correspondence → word recognition and decoding → reading fluency → reading comprehension.
Similarly, mathematics skills develop through a progression that includes: oral counting → numeral identification → cardinality → computation.
Assessments for K-1: earlyMath and earlyReading
earlyMath
Administered in fall, winter, and spring.
Assessments for Grades 2+: aReading, AUTOreading, aMath, CBMmath Automaticity
aReading
The student completes 30 multiple-choice questions covering all reading skills, including phonemic awareness, phonics, fluency, vocabulary, and comprehension. aReading is a computer-adaptive test (CAT): after the student completes several grade-level items at the start of the first administration, the FastBridge system presents easier or harder items based on the student's answers to the preceding ones.
AUTOreading
-Grade 4: Encoding, Word Identification, Vocabulary
-Grades 5+: Word Identification, Decoding, Matching Synonyms, Morphology
aMath
The student completes 30 multiple-choice questions covering all math skills, including all domains in the Common Core State Standards for Mathematics. aMath is a computer-adaptive test (CAT): after the student completes several grade-level items at the start of the first administration, the FastBridge system presents easier or harder items based on the student's answers to the preceding ones.
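Both aReading and aMath rely on the same adapt-as-you-go idea. Here is a minimal sketch of a CAT loop under a simple up/down rule; this is an illustration only, not FastBridge's actual item-selection algorithm, and the difficulty scale, step size, and simulated student are all invented:

```python
# Toy sketch of a computer-adaptive test (CAT) loop.
# NOT FastBridge's actual algorithm: the difficulty scale,
# step size, and item pool are invented for illustration.

def run_cat(answer_item, start_difficulty=0.0, num_items=30, step=0.5):
    """Present num_items items, stepping difficulty up after each
    correct answer and down after each incorrect one."""
    difficulty = start_difficulty
    history = []
    for _ in range(num_items):
        correct = answer_item(difficulty)  # True/False from the student
        history.append((difficulty, correct))
        difficulty += step if correct else -step
    return history

# Simulated student who answers correctly whenever the item's
# difficulty is at or below 1.0: the test quickly homes in on
# that level and then oscillates around it.
history = run_cat(lambda d: d <= 1.0)
print(len(history))  # 30 items, matching the aReading/aMath length
```

The real tests start near grade level and use far more sophisticated item selection; the point here is only the adjust-on-each-answer loop.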
CBMmath Automaticity
The student completes math fact problems that include addition, subtraction, multiplication, and/or division, depending on the student’s grade.
Step #1
Step #2
Step #3
Step #4
Benchmark scores are shown for both fall and winter in this Group Growth Report. "F" denotes fall scores and "W" denotes winter scores. The bars are color-coded to match the benchmark categories located below, which allows you to easily compare the percentage of students within each benchmark category from fall to winter.
Students who take FastBridge screening assessments in both fall and winter will have growth data. These scores are calculated based on an estimated 'Rate of Improvement' relative to a particular group (e.g., other students in the same grade level nationally). Because of these normative references, growth data are reported as percentiles.
Student screening scores are organized by benchmark categories, which include the following national percentile ranges:
High Risk: below the 15th percentile
Some Risk: 16th–39th percentile
Low Risk: above the 39th percentile
On Track: above the 71st percentile
Growth scores are divided into four categories based on growth percentile ranges:
Flat Growth: below the 15th percentile
Modest Growth: 15th to below the 40th percentile
Typical Growth: 40th to below the 75th percentile
Aggressive Growth: above the 75th percentile
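As a quick illustration, the growth cut points listed above can be written as a small lookup. This is a sketch only; how FastBridge assigns a score that lands exactly on a cut point (e.g., the 15th, 40th, or 75th percentile) is an assumption here:

```python
# Sketch of the growth-percentile categories listed above.
# Boundary handling at exactly the 15th, 40th, and 75th
# percentiles is assumed, not confirmed by FastBridge.

def growth_category(growth_percentile):
    """Map a national growth percentile to its category label."""
    if growth_percentile < 15:
        return "Flat Growth"
    if growth_percentile < 40:
        return "Modest Growth"
    if growth_percentile < 75:
        return "Typical Growth"
    return "Aggressive Growth"

print(growth_category(10))  # Flat Growth
print(growth_category(55))  # Typical Growth
print(growth_category(80))  # Aggressive Growth
```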
This graph shows the ideal changes from fall to winter that we hope to see, the pattern that indicates our instruction is working.
Benchmark Scores (left side of the report):
We want to see the pink bars getting shorter from fall to winter. This indicates that fewer students are in risk categories in the winter as compared to the fall.
We want to see the purple bars get taller from fall to winter. This indicates that more students are moving into the low risk and college pathway categories by winter.
Growth Scores (right side of the report):
We want to see that the purple bars (which indicate high growth) are taller than the pink bars (which indicate low growth).
Practice...
Interpret the whole group data from aMath screening for a 5th grade class.
As a team, discuss:
What high-level inferences can you make about student scores from fall to winter?
Was math instruction working for the majority of students in this class? If so, in what ways? If not, how so?
One more practice...
Interpret the whole group data from aReading screening for a 3rd grade class.
As a team, discuss:
What high-level inferences can you make about student scores from fall to winter?
Was reading instruction working for the majority of students in this class? If so, in what ways? If not, how so?
Analyze your own Group Growth reports now.
As a team, discuss:
What high-level inferences can you make about student scores from fall to winter?
Were your instructional strategies working for the majority of the students in your class? If so, in what ways? If not, how so?
What changes/next steps are needed to help all students meet their end of the year goals?
Who do we need to collaborate with to take action towards these next steps?
Click on the plus sign to expand the Individual Student Data.
Fall to Winter Benchmark Scores: The numbers shown under each column reflect the score at fall screening and winter screening. The colored squares correspond to the benchmark categories below the student scores.
Winter Growth Score and Percentile: The Growth Score column gives the average score gained per week from fall to winter. The Growth Percentile reflects the level of growth the student made from fall to winter. The colors correspond to the Student Growth Percentiles.
Weekly Goal Growth Score and Percentile: These two columns show how much a student will need to grow in order to meet the end-of-year goal. The Goal Score column shows the score gain the student will need to make each week until the end of the year. The Growth Percentile column shows the level of growth a student will need in order to meet the end-of-year goal.
Predicted Score: This column shows the end of the year score a student is likely to receive IF the instructional approach continues in the same manner.
End of the Year Goal: This column shows the end-of-year goal for the next highest benchmark category. It is set to the lowest score in the category above the student's current one.
End of the Year Benchmark Score: This column shows the end-of-year low risk benchmark cut score.
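The Weekly Goal Growth Score and Predicted Score columns described above can be read as simple straight-line arithmetic: the gap to the goal spread over the weeks remaining, and the current rate of growth projected forward. A sketch of that idea, assuming a straight-line model; these are illustrations, not FastBridge's published formulas, and the example numbers are invented:

```python
# Sketches of the per-week arithmetic behind the columns above.
# Illustrations of the idea only, not FastBridge's published
# formulas; the example numbers are invented.

def weekly_goal_growth(current_score, goal_score, weeks_remaining):
    """Score gain needed per week to reach the end-of-year goal."""
    return (goal_score - current_score) / weeks_remaining

def predicted_score(current_score, observed_weekly_growth, weeks_remaining):
    """End-of-year score if the observed rate of growth continues."""
    return current_score + observed_weekly_growth * weeks_remaining

# A student at 480 with 16 weeks left and an end-of-year goal of 500
# needs to gain 1.25 points per week.
print(weekly_goal_growth(480, 500, 16))  # 1.25

# If that student has only been gaining 0.5 points per week, the
# straight-line prediction falls short of the goal.
print(predicted_score(480, 0.5, 16))  # 488.0
```

Comparing the predicted score to the end-of-year goal is what tells you whether the current instructional approach, continued as-is, is likely to get the student there.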