ISCAP Proceedings - 2025

Louisville, KY - November 2025



ISCAP Proceedings: Abstract Presentation


Is Generative AI the Death of Multiple-Choice Questions?


Teko Jan Bekkering
Northeastern State University

Abstract
Generative AI has demonstrated remarkable performance on a wide array of standardized tests, often matching or surpassing average human scores. Large language models have been used to take educational exams such as the Scholastic Assessment Test (SAT), the Graduate Record Examination (GRE), and the Medical College Admission Test (MCAT), as well as professional exams such as the Uniform Bar Exam (UBE) and the United States Medical Licensing Examination (USMLE). These models succeed by leveraging vast amounts of training data and strong pattern recognition, allowing them to produce structured, context-aware answers. In higher education, generative AI has become a powerful but controversial tool. Students are using it in increasingly sophisticated ways to pass traditional assessments and tests, and institutions are struggling to adapt. Many multiple-choice questions are at risk because they rely on textbook knowledge, widely available prep materials, and compromised test banks. AI is also adept at spotting linguistic cues and matching keywords between a question's stem and its answer choices. This study uses archival data from past Computer Science classes taught by the researcher: scores on multiple-choice tests administered over eight semesters, from Fall 2021 through Spring 2025. Copilot Pro's score on each test was obtained by rerunning the original test in Blackboard, and these scores were then compared with the original student averages. The results will be presented at the conference, and their implications discussed.