aimswebPlus Data Gives Teachers Rich Insights Into Student Performance
aimsweb is a data-intensive assessment and reporting tool launched in 2000 to help educators track their students’ progress in subjects such as reading and mathematics. To make the most of the product’s data-rich options, educators receive a variety of complimentary training options, including quick guides and video tutorials with embedded certification quizzes. In the fall of 2016, aimsweb will be succeeded by aimswebPlus, a new assessment suite that provides instructors with the tools to measure Common Core-aligned math and reading skills while tracking students’ progress toward year-end academic goals. aimswebPlus builds on the foundation of its predecessor and draws upon more than 30 years of published research to inform its design.
Intended outcome 1
Provide instructors with student growth data that can be used to evaluate the effectiveness of reading and math interventions.
Pearson developed national growth norms for each aimsweb measure. These norms take into account the varying growth rates of students with different abilities and allow teachers to compare a student’s growth rate with that of his or her peers.
Teachers can also use the norms to define and tailor challenging academic goals for each student and to evaluate, on a regular basis, student progress toward the goal.
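The comparison described above amounts to measuring a student's rate of improvement between screenings and checking it against a normative growth rate. The sketch below illustrates the general idea; the function names, score values, and the normative rate are invented for illustration and are not actual aimsweb norms or computations.

```python
# Hypothetical sketch of comparing a student's rate of improvement (ROI)
# against a normative growth rate. All values below are invented for
# illustration and are not actual aimsweb norms.

def rate_of_improvement(start_score: float, end_score: float, weeks: int) -> float:
    """Average score gain per week between two benchmark screenings."""
    return (end_score - start_score) / weeks

def on_track(student_roi: float, norm_roi: float) -> bool:
    """A student is on track if growing at least as fast as the norm."""
    return student_roi >= norm_roi

# Example: a student gains 18 words read correctly over 18 weeks, while
# the (hypothetical) normative gain for similar students is 0.8 per week.
roi = rate_of_improvement(start_score=42, end_score=60, weeks=18)
print(roi)              # 1.0 (words per week)
print(on_track(roi, 0.8))  # True
```

A teacher reviewing this comparison at each screening period could adjust the intervention whenever `on_track` turns false, which matches the mid-course corrections described below.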
The Special School District of St. Louis, which specializes in providing public special education to local students, began using aimsweb in 2009. Since introducing the program, the number of third-graders requiring special intervention in reading has dropped by about 40 percent, according to an interview with district employees.
"Teachers are so much more responsive to where the students are and can make mid-course corrections when those students are not making the growth they need to make."
Amber Del Gaiso, a data coach at the Special School District of St. Louis who helps teachers make sense of aimsweb data
Educators who use aimsweb to measure student progress—especially for students with special education needs and varying language capabilities—know results must be valid, consistent, and fair across all types of students. Pearson’s ongoing efficacy research aims to demonstrate that the product meets this goal.
Intended outcome 2
Produce results that can be interpreted as valid measures of a student’s achievement in reading and math.
Pearson has conducted a number of studies that suggest a student’s aimsweb score is correlated with his or her score on state reading and math tests, which is one way to demonstrate the product’s validity.1
In a study of roughly 1,000 students at each grade level from grades 3 to 8, Pearson found correlations ranging from 0.60 to 0.72 between scores on aimsweb reading assessments and scores on state reading tests in North Carolina and Illinois. Another study of about 700 students at each grade level from grades 3 to 8 found correlations ranging from 0.57 to 0.78 between scores on aimsweb math assessments and state math tests in North Carolina and Illinois. Pearson has also demonstrated that students who do poorly on their aimsweb assessments are unlikely to do well on their state math and reading tests: up to 85 percent of students in grades 3-8 who failed their state math tests, and up to 80 percent of students who failed their state reading tests, were correctly flagged by aimsweb as at-risk. This sort of predictive accuracy allows teachers to develop and implement the proper educational interventions for those students who are at risk of failing state exams.
1 NCS Pearson, Inc. (2012). aimsweb Technical Manual. Bloomington, MN: NCS Pearson, Inc.
Intended outcome 3
Deliver results that are consistent over different test forms and different scorers.
Pearson has found evidence that aimsweb generates consistent results across alternate forms of the assessment, which is one way to demonstrate the product’s consistency.2
Oral reading scores of a randomly selected group of students from the aimsweb database—1,000 in each of grades 3 to 8—showed that for each screening period, students performed consistently across three alternate passages, with average correlations ranging from 0.93 to 0.95. Furthermore, in a field test of the oral reading fluency passages involving 24 students per grade level at grades 1-7 and 183 grade 8 students, average scores across all alternate passages per grade were very similar. In another study, students from a nationally representative sample of 6,550 students in grades 2-8 were given alternate forms of an aimsweb math assessment. Results showed that students performed consistently across forms, with average correlations ranging from 0.80 to 0.88. Results also showed that the average scores across all forms per grade were very similar.
In addition, oral reading fluency scores from roughly 60 to 70 students in each grade level at grades 2, 4, 6, and 8, selected from five public schools in Minnesota and Texas, were scored independently by two raters. Interrater reliability estimates, in the form of Shrout and Fleiss’s Formula 2,1 intraclass correlation, were 0.99 for all grade levels. Similar interrater reliability estimates were observed in two other studies that investigated scorer consistency for the M-CAP and TEN (Clarke & Shinn, 2004) subtests.3
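The interrater statistic mentioned above is an intraclass correlation from a two-way random-effects model with a single rater, commonly written ICC(2,1). As a hedged sketch of how such an estimate can be computed, the implementation below follows the standard two-way ANOVA decomposition; the two rater score lists are invented toy data for illustration.

```python
# Hedged sketch: ICC(2,1), the two-way random-effects, single-rater
# intraclass correlation. The rater scores below are invented toy data.

def icc_2_1(scores):
    """scores: list of per-student rows, each row holding one score per rater."""
    n = len(scores)      # subjects (students)
    k = len(scores[0])   # raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    # Two-way ANOVA mean squares: subjects (rows), raters (columns), error.
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum(
        (scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    )
    ms_err = sse / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Two hypothetical raters independently scoring the same six reading samples:
rater_a = [95, 102, 88, 110, 97, 105]
rater_b = [96, 101, 90, 109, 98, 104]
print(round(icc_2_1(list(zip(rater_a, rater_b))), 3))
```

Because the two-way model treats raters as a random sample, the estimate generalizes to other raters drawn from the same population, which is why it suits scorer-consistency studies like the one described above.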
2 NCS Pearson, Inc. (2012). aimsweb Technical Manual. Bloomington, MN: NCS Pearson, Inc.
3 Clarke, B., & Shinn, M. R. (2004). A preliminary investigation into the identification and development of early mathematics curriculum-based measurement. School Psychology Review, 33, 234-248.
"Our early experiences with aimswebPlus during beta testing have been very positive...from rostering to report interpretation, aimswebPlus has been well received by teaching staff as a potentially powerful tool to screen and identify students with varying academic needs."
Patrick W. Nolten, executive director of assessment, research and evaluation at Indian Prairie Community Unit School
With the introduction of aimswebPlus in August 2016, Pearson will continue to conduct research that will further demonstrate the effect of the program on learner outcomes. For an overview of these plans, please see the accompanying Impact Evaluation Report below.
In 2013, we announced our efficacy initiative to measure the impact that our products and services have on our learners. We committed to publicly report our findings starting in 2018, and to subject those reports to external audit. We are pleased to release our preliminary reports to share the work we have done so far and what we plan to do next. The content in these reports reflects the continued refinement of our approach; while our work continues to advance, we are proud to share transparently what we have learned. We anticipate that we will continue to refine these reports as we approach our 2018 target.