I like data. There, I said it, and I am well aware of how odd that statement might make me sound. But I do enjoy working with assessment data as it relates to schools and student achievement. Accurate assessment data, used wisely, is both exciting and invigorating for the valuable information that can be gleaned from it to improve instruction. Can the data sometimes be used in a fashion that misleads? Sure. Can the use of testing data sometimes lead to an overemphasis on testing and “teaching to the test”? When mishandled, yes. But when used correctly, I think accurate and reliable assessment data is the clearest, most objective way to measure student learning, especially when looking at a group of students. Such data can also give us valuable insight into things we might not otherwise notice at the individual student level. However, there are many ways administrators, lawmakers, members of the public, and teachers, while attempting to use assessment data for noble purposes, might cause inadvertent problems through the misuse of this data.
The assessment data I am referring to here is standards-based assessment data gathered in a somewhat standardized format. Ideally, the tests are robust in nature, assessing students at a variety of learning levels with an emphasis on higher-order thinking skills. Many modern assessments have incorporated performance-based tasks and essay-style questions, which are very helpful in identifying students’ proficiency at these higher levels. Most assessments have by this point shifted from paper-and-pencil testing to some form of computerized testing.

Some of these computerized assessments also have “adaptive” elements which scale up or down, pulling from a bank of questions at various levels of difficulty based upon whether a student answers a question correctly (which tells the assessment to give a more difficult question next) or incorrectly (which tells the assessment to lower the difficulty in order to zero in on the student’s level of knowledge). Such adaptive assessments are a particular favorite of mine for their ability to generate even more accurate measures of where a student is truly performing than traditional, non-adaptive assessments. Traditional assessments have a predetermined set of questions and/or tasks and cannot deviate from that structure. Thus, with traditional, non-adaptive assessments, the data gained is of little value for assessing where a student is performing if he or she is significantly below the level of knowledge the test was designed to assess, or significantly above it.

Regardless of these significant differences, all of these assessments can be referred to as “standardized” tests. “Standardized testing” has almost become a dirty word for many, both inside and outside of professional education. However, I feel this is a result of the mishandling of the data and/or the preparation for the assessment, rather than “standardized testing” in and of itself being something bad.
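For readers curious about the mechanics, the adaptive behavior described above can be sketched in a few lines of code. This is a minimal illustration of the general idea only, assuming a simple “step up one level after a correct answer, step down one after an incorrect answer” rule; the function names, the five-level scale, and the stepping rule are my own illustrative assumptions, not the algorithm of any particular commercial assessment (real adaptive tests typically use more sophisticated statistical models).

```python
# A toy sketch of adaptive difficulty selection: the test raises the
# difficulty after a correct answer and lowers it after an incorrect one,
# zeroing in on the student's level. All names and levels are hypothetical.

def next_difficulty(current_level, answered_correctly, min_level=1, max_level=5):
    """Return the difficulty level for the next question."""
    if answered_correctly:
        # Correct answer: give a more difficult question next (capped at max).
        return min(current_level + 1, max_level)
    # Incorrect answer: lower the difficulty (floored at min).
    return max(current_level - 1, min_level)

def run_adaptive_test(responses, start_level=3):
    """Simulate the sequence of difficulty levels a student would see."""
    level = start_level
    levels_seen = [level]
    for correct in responses:
        level = next_difficulty(level, correct)
        levels_seen.append(level)
    return levels_seen

# A student who answers right, right, wrong, right drifts toward the top
# of the scale, with one dip after the miss:
print(run_adaptive_test([True, True, False, True]))  # [3, 4, 5, 4, 5]
```

Even this crude version shows why adaptive tests can measure students far above or below the target level: the question difficulty follows the student, where a fixed-form test cannot.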
(continued on https://thinkingconservativeblog.wordpress.com/2015/11/21/assessment-data-best-friend-or-the-worst-enemy-of-efficiency-instruction )