A Comparative Study between the Conventional MCQ Scores and MCQ with the CBA Scores at the Standardized Clinical Knowledge Exam for Clinical Medical Students

Mahmood Ghadermarzi, Shahram Yazdani, Arash Pooladi, Fakhrosadat Hosseini



Background and purpose: Partial knowledge is one of the main factors to consider when seeking to improve Multiple Choice Question (MCQ) testing, and various strategies have been proposed to handle it in traditional testing environments. This study therefore examined Confidence Based Assessment (CBA) as a pertinent solution, aiming to compare the effect of a CBA scoring system with that of the conventional scoring systems (with and without negative marking as a penalty) on students' scores and to estimate their partial knowledge in clinical studies.
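The two conventional schemes contrasted above can be made concrete. The exact penalty rule used in the study is not given in the abstract; the sketch below assumes the standard formula-scoring rule, in which each wrong answer costs 1/(k−1) of a mark so that blind guessing on a k-option item has an expected score of zero:

```python
def number_right(correct):
    """Number-right scoring: one mark per correct answer, no penalty."""
    return correct

def formula_score(correct, wrong, n_options):
    """Formula scoring: each wrong answer costs 1/(k-1) of a mark,
    so the expected score from blind guessing is zero."""
    return correct - wrong / (n_options - 1)

# A student answering 60 items correctly and 20 incorrectly
# (omitting the rest) on 5-option questions:
print(number_right(60))          # 60
print(formula_score(60, 20, 5))  # 55.0
```

Under number-right scoring the same student keeps the full 60 marks, which is why low scores tend to be inflated relative to penalty-based scoring.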
Methods: This comparative study was conducted using a standardized clinical knowledge exam administered to 117 clinical students. After a two-step training, the conventional MCQ and the CBA examinations were given simultaneously in a single session. The exam included 100 questions, and after the exam the volunteers were asked to complete a questionnaire on their attitude toward, and satisfaction with, their first experience of the CBA. A new confidence-based marking system, a hybrid of the UCL and MUK2010 systems, was selected for scoring. MCQ-Assistant, SPSS, and Microsoft Office Excel were used for scoring and data analysis.
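The hybrid UCL/MUK2010 marking table used in the study is not reproduced in the abstract. As an assumption, the sketch below uses Gardner-Medwin's standard UCL confidence-based marking scheme, in which confidence levels 1–3 earn 1/2/3 marks for a correct answer and 0/−2/−6 for a wrong one:

```python
# Gardner-Medwin's UCL confidence-based marking table (assumed here):
# confidence level -> (marks if correct, marks if wrong)
UCL_CBM = {1: (1, 0), 2: (2, -2), 3: (3, -6)}

def cbm_score(responses):
    """Total score for a list of (confidence_level, is_correct) pairs."""
    return sum(UCL_CBM[conf][0] if correct else UCL_CBM[conf][1]
               for conf, correct in responses)

# Two confident correct answers, one low-confidence miss, one confident miss:
print(cbm_score([(3, True), (2, True), (1, False), (3, False)]))  # -1
```

The steep penalties at high confidence are what make honest confidence reporting the score-maximizing strategy, rewarding partial knowledge expressed at low confidence.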
Results: The mean age of the volunteers was 27.3±5.47 years; 43.6% were men and 69.2% were senior medical students. Exam reliability was 0.977. The fit line of the MCQ scores without penalty had R²=0.9816 and an intercept of 18.125, i.e., approximately 0.2 deviation in the low scores. The fit line for MCQ scoring with penalty was approximately parallel to the 45-degree line but on or above it, while the CBA scoring fit line was closer to the 45-degree line, parallel to it, and slightly below it. The difference between these two sets of scores was significant (p=0.037). The response percentage was higher with the CBA (p=0.0001). The discriminating power of the MCQ and the CBA for the upper and lower thirds of the students did not differ significantly (p=0.34). The students' satisfaction with the CBA system was high and acceptable, and they expressed a positive view of using this system for their examinations.
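The upper/lower-third comparison reported above follows the classical item-discrimination index. A minimal sketch, where the grouping into thirds and the example counts are illustrative assumptions rather than the study's data:

```python
def discrimination_index(upper_correct, lower_correct, group_size):
    """Classical discrimination index: difference in the proportion of
    correct answers between the top and bottom thirds of examinees."""
    return (upper_correct - lower_correct) / group_size

# With 117 students, each third contains 39; suppose 35 of the top
# third and 14 of the bottom third answer a given item correctly:
print(round(discrimination_index(35, 14, 39), 3))  # 0.538
```

Values near zero indicate an item that fails to separate strong from weak students; comparing these indices across the MCQ and CBA scorings is one way to test whether the scoring scheme changes discrimination.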
Conclusions: The CBA method can increase the competencies of MCQ exams. It provided a fairer assessment and was an effective, authentic testing method, with more precise estimation and higher construct validity than the conventional MCQ exam. The CBA also stimulates reflection for deeper learning among students.

Keywords: STUDENT ASSESSMENT, PARTIAL KNOWLEDGE, MCQ, CONFIDENCE-BASED ASSESSMENT, EXAM SCORING SYSTEM








DOI: https://doi.org/10.22037/jme.v14i1.8027

This work is licensed under a Creative Commons Attribution 4.0 International License.