Helping schools make sense of a barrage of standardized test data: Instructional Tools in Educational Measurement and Statistics (ITEMS) for School Personnel
In the current No Child Left Behind era, K-12 teachers and administrators are expected to have a sophisticated understanding of standardized test results, use them to improve instruction, and communicate them to others. Many educators, however, have never had the opportunity to acquire the "assessment literacy" required for these roles. The goal of the ITEMS project, directed by Rebecca Zwick of the University of California, Santa Barbara, was to develop and evaluate three Web-based instructional modules in educational measurement and statistics to address this training gap. We created three 25-minute modules: "What's the Score?" (2005), "What Test Scores Do and Don't Tell Us" (2006), and "What's the Difference?" (2007). Overall, 250 participants, including K-12 teachers, administrators, and teacher education students, took part in our research, which demonstrated the effectiveness of the modules in communicating educational measurement and statistics concepts, especially for teacher education students. Our modules are now freely available on our website, http://items.education.ucsb.edu, in low- and high-bandwidth versions, with optional closed captioning. Also posted are supplementary materials, including glossaries, formulas, reference lists, and quizzes corresponding to each module. Providing this training in a convenient and economical way is intended to help schools implement and interpret assessments successfully. Several school districts have let us know they are using the materials, and at least one teacher education program has incorporated them into its curriculum.
Using a Web-based platform to conduct a randomized controlled study
Many professional development materials are never formally evaluated to determine their effectiveness as teaching tools. By contrast, the evaluation of the ITEMS modules included a randomized controlled study. For each module, we created and pilot-tested a Web-based assessment literacy quiz tailored to the content of that module. When the educators who participated in our research logged on to our site, they first completed a background survey and were then randomly assigned, through a computerized "coin flip," to one of two conditions: in one, the 25-minute instructional module was viewed before the quiz was administered; in the other, the quiz was administered first. By comparing the average quiz scores from the two conditions, we were able to test the hypothesis that those who viewed the module first were better able to answer the quiz questions. For all three modules, those who saw the module before taking the quiz performed better than those who took the quiz before seeing the module, and the difference was statistically significant for two of the three modules. Also, particularly for the second module, teacher education students benefited more from viewing the module than did school personnel. Results of our analyses will appear in the journal Educational Measurement: Issues and Practice.
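To make the study design concrete, the following Python sketch illustrates the general approach: a computerized "coin flip" assigns each participant to a "module first" or "quiz first" condition, and the mean quiz scores of the two groups are then compared. This is not the ITEMS project's actual analysis code; the function names, the sample scores, and the use of a Welch two-sample t-test from SciPy are our own illustrative assumptions.

    # Illustrative sketch only (not the ITEMS project's code).
    # Scores and names below are hypothetical.
    import random
    from scipy import stats

    def assign_condition() -> str:
        """Computerized 'coin flip': assign one participant to a condition."""
        return random.choice(["module_first", "quiz_first"])

    # Hypothetical quiz scores (percent correct) collected under each condition.
    module_first_scores = [82, 75, 90, 68, 88, 79, 85, 73]
    quiz_first_scores = [70, 65, 80, 60, 77, 72, 69, 74]

    # Welch's two-sample t-test of whether the mean scores differ between the
    # group that viewed the module before the quiz and the group that did not.
    t_stat, p_value = stats.ttest_ind(module_first_scores, quiz_first_scores,
                                      equal_var=False)
    print(f"Mean (module first): {sum(module_first_scores) / len(module_first_scores):.1f}")
    print(f"Mean (quiz first):   {sum(quiz_first_scores) / len(quiz_first_scores):.1f}")
    print(f"t = {t_stat:.2f}, two-sided p = {p_value:.3f}")

In a design like this, random assignment ensures that any systematic difference in average quiz scores between the two groups can be attributed to viewing the module rather than to pre-existing differences among participants.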
How can schools make sense of the barrage of NCLB test results?
Graphic by Cris Hamilton, from "What's the Difference?" created by the ITEMS Project (Rebecca Zwick, University of California, Santa Barbara, Project Director)
Professor Rebecca Zwick: email@example.com
Liz Alix, Project Administrator: firstname.lastname@example.org