My experience as a teacher stretches across only three years, but in that time I have taught Social Studies in various capacities in five school districts that were, in many ways, considerably different from one another. Of course, these districts also shared many characteristics and practices. One of those similarities is the use of assessments composed largely of multiple-choice (MC) test items with a few short written responses. Some of the MC items task students to evaluate and analyze primary or secondary source texts or to interpret graphs, charts, or maps.
If you are reading this blog post, there is little doubt you have taken such an assessment. As you no doubt know, an MC test item starts with a question followed by four or five possible answers, one of which is correct; the remaining answers, referred to as distractors, are incorrect. In a Social Studies context, the main purpose of an MC question is to measure how well a student has mastered fundamental facts such as definitions of terms, people, and places. MC questions are also used to assess reading comprehension and the comparison and analysis of primary and secondary sources. For the latter, MC questions are exceptionally limited since they are, by nature, closed-ended.
With all the emphasis state standards place on critical thinking, critical reading, and writing, especially since the introduction of the Common Core, why do MC test items persist so pervasively in Social Studies contexts? Why do Social Studies teachers continue to use a form of assessment that is exceedingly limited in its ability to demonstrate a student’s depth of knowledge? The answer to these questions reveals an education system that remains institutionally anchored to antiquated concepts of student assessment.
Why Teachers and School Districts Use Multiple Choice
Since Michigan relies on standardized tests such as the M-STEP and the SAT to assess teacher and school district effectiveness, teachers and districts have responded by formulating assessments that emulate those tests. As a result, districts and their Social Studies departments create assessments that give students the greatest chance of achieving growth as measured by standardized tests rather than assessments that measure the many skills embedded in the state standards.
The purpose of my exhaustive explanation of how standardized tests have produced semester exams built on a traditional model of MC test items with a few written responses is to lay out how such assessments are actually used. Teachers and school districts respond to how they will actually be evaluated rather than to standards that are not reflected in the standardized tests.
The other incentive for teachers to rely on MC-based assessments and exams is that they are much easier to grade. Data from MC test items can be gathered very efficiently, providing teachers some measure of relief from a burdensome workload.
MC-based assessments, however, are severely limited in providing information useful for informing instructional design. Data from MC-based assessments can only lead to instruction designed to help students improve their ability to take MC tests.
Assessing the Value of Multiple Choice
The first two questions of my Assessment Design Checklist attempt to focus attention on creating authentic assessments:
- Does my assessment task students to express how they have made meaning of the knowledge and concepts?
- Does my assessment task students to apply thinking skills in a manner where they must transfer their knowledge of concepts to new and/or unfamiliar contexts?
Applying these questions to an MC-based assessment leads me to one inescapable conclusion: MC test items cannot remotely engage students in expressing how they have made meaning of what they have been taught, nor can it be said with much confidence that MC test items measure how well a student can transfer knowledge of concepts to different contexts.
As Bob James observes regarding his own assessment design process:
Even though this approach to assessment makes grading and justifying the grades fairly easy, I have always felt a bit uneasy that these assessments don’t reflect the point of the unit and that the project grade sometimes has less to do with the key ideas and more to do with effort. I think I tend to test what is easy to test instead of assessing for my deeper goals.
Wiggins asserts that multiple-choice tests are inauthentic; that is, they fail to task students with performing work authentic to the actual tasks and skills they were taught and are therefore a poor measure of what the student actually learned. How, then, should educators approach the seemingly entrenched problem of MC-based assessments?
Pushing Multiple Choice Toward Authentic Assessment
As noted above, there is little room in the world of Understanding by Design for MC test items. Due to irresistible institutional forces, however, many Social Studies educators have little choice but to include such test items in their assessments. What can an educator do to push MC test items toward a more authentic assessment of how a student has made meaning from the targeted objective? Can MC test items give the teacher a reasonable avenue for providing feedback to students?
The advent of digital assessment platforms has expanded how an educator can use MC test items to measure, to some degree, how a student has made meaning from instruction. For example, Google Forms gives the assessment designer the ability to support meaning-making by basing the order in which students move through the assessment on how they answer select MC questions. If a student makes a selection that indicates a lack of understanding of a particular concept, they are routed to another test item designed to help them form a firmer grasp of the targeted concept. If the student indicates mastery of the concept, they can be routed to more sophisticated test items depending on their responses.
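To make that branching concrete, here is a minimal sketch built with Google Apps Script, the scripting service that sits behind Google Forms; the same routing can be configured directly in the Forms editor with the "Go to section based on answer" option, so no code is strictly required. The form title, question wording, and section titles below are placeholders of my own, not part of any particular assessment.

```typescript
function buildBranchingQuiz() {
  // Placeholder title; the questions below are illustrative only.
  const form = FormApp.create('Unit Concept Check (sketch)');

  // The gatekeeping MC item sits on the first page of the form.
  const conceptCheck = form.addMultipleChoiceItem()
    .setTitle('Which statement best describes the purpose of the Bill of Rights?');

  // Remediation section: reached only by students who choose a distractor.
  const remediationPage = form.addPageBreakItem()
    .setTitle('Revisiting the concept');
  form.addMultipleChoiceItem()
    .setTitle('Scaffolded follow-up item on the same concept goes here.');

  // Transfer section: students who answer correctly jump straight here;
  // students routed through remediation reach it afterward by normal progression.
  const transferPage = form.addPageBreakItem()
    .setTitle('Applying the concept in a new context');
  form.addMultipleChoiceItem()
    .setTitle('More sophisticated transfer item goes here.');

  // Each choice carries its own navigation target, so the path through the
  // assessment depends on the student's answer.
  conceptCheck.setChoices([
    conceptCheck.createChoice(
      'It limits the power of the federal government over individuals.', transferPage),
    conceptCheck.createChoice(
      'It grants the federal government broad new powers.', remediationPage),
    conceptCheck.createChoice(
      'It replaced the Articles of Confederation.', remediationPage),
  ]);
}
```

Because each answer choice carries its own navigation target, the distractors themselves become diagnostic: a student who selects a particular misconception can be routed to a section written specifically to address that misconception, while a student who demonstrates mastery moves on to the more demanding transfer items.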