EVALUATING THE QUALITY OF ISLAMIC CIVILIZATION AND ASIAN CIVILIZATIONS EXAMINATION QUESTIONS

  • Ado Abdu Bichi, Faculty of Education, Northwest University, Kano, Nigeria
  • Rahimah Embong Research Institute for Islamic Products and Civilization, Universiti Sultan Zainal Abidin, Terengganu, Malaysia

Abstract

Assessment of learning involves administering quality tests to determine whether the content and objectives of education have been mastered. Thus, the quality of the test items used in evaluating students’ achievement should be a major area of concern in teaching and research in the field of education. This paper assesses the quality of the Tamadun Islam dan Tamadun Asia (TITAS), or Islamic Civilization and Asian Civilizations, examination questions by conducting item analysis. The one hundred (100) multiple-choice questions developed were administered to 36 (N = 36) degree students. The data obtained were analyzed through item analysis to determine the item difficulty indices, the item discrimination indices and the quality of the distractors. The results indicate that, based on the difficulty indices, 18 (18%) of the items were “problematic” or “faulty”, and that 66 (66%) of the items had poor discriminating power. Similarly, the distractor analysis showed that 59 (59%) of the distractors were flawed, having failed to meet the set minimum standards. Based on these findings, it may be assumed that the test used was not validated during the item development process. It is recommended that TITAS test items used in measuring students’ achievement be made to pass through all the processes of standardization and validation, including item analysis, to ensure their reliability and to minimize measurement error.
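
For readers unfamiliar with the procedure summarized above, the following sketch illustrates how classical item analysis is commonly computed for dichotomously scored multiple-choice data: item difficulty as the proportion of correct responses, and a discrimination index as the difference in proportion correct between upper and lower scoring groups. It is an illustrative example only, not the authors’ code; the 27% grouping convention, the `item_analysis` helper and the demo responses are assumptions for illustration, not details reported in the paper.

```python
# Illustrative classical item analysis for dichotomously scored (0/1) items.
# Computes item difficulty (p-value) and an upper-lower discrimination index.
import numpy as np

def item_analysis(scores, upper_lower_fraction=0.27):
    """scores: 2-D array (examinees x items) of 0/1 item scores."""
    scores = np.asarray(scores, dtype=float)
    n_examinees, _ = scores.shape

    # Item difficulty: proportion of examinees answering each item correctly.
    difficulty = scores.mean(axis=0)

    # Rank examinees by total score and form upper/lower groups
    # (27% each is a common convention; other splits are also used).
    totals = scores.sum(axis=1)
    n_group = max(1, int(round(upper_lower_fraction * n_examinees)))
    order = np.argsort(totals)
    lower = scores[order[:n_group]]
    upper = scores[order[-n_group:]]

    # Discrimination index: proportion correct in the upper group
    # minus proportion correct in the lower group, per item.
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

# Demo with fabricated responses: 6 examinees, 4 items.
demo = [
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
]
p, d = item_analysis(demo)
print("difficulty:", np.round(p, 2))
print("discrimination:", np.round(d, 2))
```

Distractor analysis, also used in the study, would additionally require the raw option choices (not just 0/1 scores) so that the selection frequency of each incorrect option can be examined.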

Author Biography

Ado Abdu Bichi, Faculty of Education, Northwest University, Kano, Nigeria

Lecturer

Measurement & Evaluation Unit,

Department of Arts & Social Science Education


Published
2018-06-20
How to Cite
Bichi, A. A., & Embong, R. (2018). EVALUATING THE QUALITY OF ISLAMIC CIVILIZATION AND ASIAN CIVILIZATIONS EXAMINATION QUESTIONS. Asian People Journal (APJ), 1(1), 93-109. Retrieved from https://journal.unisza.edu.my/apj/index.php/apj/article/view/26