Spring 2012 NWEA Results

Post date: Jul 02, 2012 9:27:06 PM

Click to View --> SPRING 2012 NWEA RESULTS

We completed the Spring 2012 NWEA administration in early June.  Much of the results report will look familiar, but some additional charts have been added to report on individual student growth (possibly a forthcoming Strategic Plan goal) and to show the distribution of student scores.  Here is a page-by-page summary of the results report:

Page 1: The first paragraph describes which students were tested.  We have homed in on this testing plan after about two years of trial and error, and generally feel that it strikes a good balance between gathering useful data on student progress and not over-testing students, though we are starting to see some test fatigue on the part of our students.  The rest of the narrative on the page, along with the charts, shows how all students in the school performed on the Spring 2012 NWEA in comparison to the intervention benchmark and the end-of-10th-grade mean score.

Each bar shows what percentage of students in the school scored below the intervention benchmark (red), between the intervention benchmark and the proficiency benchmark (yellow), and above the proficiency benchmark (green).  We have scores from Spring 2010 and Spring 2011 to compare to this spring's results.  It should be noted that the Spring 2011 and Spring 2012 scores represent about 95% of the same students.
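For readers who want to see the mechanics behind the bars, here is a minimal Python sketch of how each student's score maps to a chart color.  The two cut scores and the sample scores in it are made-up placeholders, not our actual NWEA benchmarks:

```python
# Minimal sketch of the red/yellow/green banding described above.
# Both cut scores are hypothetical placeholders, not the actual
# NWEA benchmarks used in our report.
INTERVENTION_BENCHMARK = 225  # hypothetical cut score
PROFICIENCY_BENCHMARK = 235   # hypothetical cut score

def band(score):
    """Classify one student's RIT score into a bar-chart band."""
    if score < INTERVENTION_BENCHMARK:
        return "red"      # below the intervention benchmark
    if score < PROFICIENCY_BENCHMARK:
        return "yellow"   # between the two benchmarks
    return "green"        # at or above the proficiency benchmark

def band_percentages(scores):
    """Percentage of students in each band, as plotted in each bar."""
    counts = {"red": 0, "yellow": 0, "green": 0}
    for s in scores:
        counts[band(s)] += 1
    return {color: 100 * n / len(scores) for color, n in counts.items()}

print(band_percentages([212, 228, 231, 240, 247, 219]))
# two students fall in each band, so each band is about 33.3%
```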

The blue lines on each graph indicate our end-of-year goal on each of the three tests.  Having the top of the yellow bar rise above the blue line would signify meeting our school improvement goals for this year.  As you can see, we did not make much progress towards our goals and even went backwards in language usage.  Some of the decline can honestly be attributed to student resistance to the testing, but I feel that the plateauing of scores is the result of ineffective interventions and reflects a need to continue improving the core instruction we offer our students in reading, writing, and mathematics.  We are not pleased with these results and are going to spend the summer developing plans to address both the intervention and instruction factors contributing to these scores.  We will also have to think about how to address the student frustration associated with the test.

Pages 2 & 3: These pages are new additions.  One component of NWEA that we have never used before is individual student growth targets.  NWEA is able to set an expected growth target for each 9th and 10th grade student based on his or her score the previous spring.  For example, if a student scored 228 on reading in the spring of 2011 and the average student would be expected to make 2 points of growth over the next year, one could set a “growth target” of 230 for the following spring.  If the student scores lower, they could be said to have made less than a year's growth; a higher score indicates more than a year's growth.  Thinking ahead to possible Strategic Plan goals asking students to “make at least a year's growth in a year's time,” this data might be helpful in setting a baseline.  As you can see, between 54% and 70% of DISHS 9th and 10th grade students met their growth targets (which is pretty good, since the targets are based on average growth; an “average” school would expect to see half of its students meet the goal and half miss it).
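To make the growth-target arithmetic concrete, here is a small Python sketch that follows the reading example above (a 228 in spring 2011, 2 expected points of growth, so a target of 230).  The flat expected-growth value is a stand-in for illustration; NWEA's actual norms vary by subject and starting score:

```python
# Sketch of the growth-target check described above.  The expected
# growth of 2 RIT points comes from the reading example in the text;
# NWEA's real norm tables vary by subject and prior score.
EXPECTED_GROWTH = 2  # hypothetical flat value for illustration

def growth_target(prior_score, expected_growth=EXPECTED_GROWTH):
    """Target score for the following spring."""
    return prior_score + expected_growth

def percent_meeting_targets(students):
    """students: list of (spring_2011_score, spring_2012_score) pairs."""
    met = sum(1 for prior, current in students
              if current >= growth_target(prior))
    return 100 * met / len(students)

print(growth_target(228))  # 230, as in the example above
print(percent_meeting_targets([(228, 231), (228, 229)]))  # 50.0
```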

The second new set of charts is a series of box plots intended to show the distribution of scores.  The bar charts on the first page show the percentage of each grade above and below certain thresholds, but they do not describe how far above or below those thresholds the full range of students falls.  There is an explanation of box plots at the bottom of page 2, as well as box plots for reading, language usage, and math by grade on page 3 (the yellow line on page 3 is the intervention benchmark for each subject).  I feel that these box plots help show the incredibly wide range of student abilities in each area.  If you take 11th grade math, for example, you will see that 75% of the students are scoring above the end-of-10th-grade mean (essentially on grade level), but the minimum score is 220 (a 5th grade level) and the maximum score is 287, waaaaay off the charts!
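For anyone curious how charts like those on page 3 can be produced, here is a rough matplotlib sketch.  Every score and the benchmark line are invented for illustration and are not our students' actual data:

```python
# Rough sketch of a grade-level box plot like those on page 3.
# All scores and the benchmark value below are invented placeholders.
import matplotlib.pyplot as plt

scores_by_grade = {  # hypothetical RIT scores
    "9th":  [215, 222, 228, 231, 240],
    "10th": [218, 225, 233, 238, 244],
    "11th": [220, 236, 241, 250, 287],  # a wide spread, as described above
}

fig, ax = plt.subplots()
ax.boxplot(list(scores_by_grade.values()), labels=list(scores_by_grade.keys()))
ax.axhline(y=235, color="gold", label="intervention benchmark (hypothetical)")
ax.set_xlabel("Grade")
ax.set_ylabel("RIT score")
ax.set_title("Math scores by grade (illustrative data only)")
ax.legend()
plt.show()
```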

Pages 4-6: These pages show the percentage of students in each grade who scored below the intervention benchmark (red), between the intervention benchmark and the proficiency benchmark (yellow), and above the proficiency benchmark (green), essentially breaking the chart on the first page down by grade level.  Improvement could be defined as shrinking and eventually eliminating the red portion of each bar for 9th and 10th grade, and the red and yellow portions for 11th and 12th grade.  For 9th grade (Class of 2015) and 10th grade (Class of 2014), I was able to add a chart showing each grade's achievement of its annual growth targets.

As noted above, we are not satisfied with these results.  We are going to refocus our interventions and continue to take a long, hard look at our core instruction to make sure we keep moving a greater percentage of students towards grade-level proficiency and towards making at least a year's growth in a year's time.