Navigating the perception-practice dilemma: An inquiry into assessors’ implementation of oral presentation rubrics

Authors

  • Mahija Veetil Centre for Preparatory Studies, Dhofar University, Oman. https://orcid.org/0009-0000-8995-6399
  • Rashida Iqbal Centre for Preparatory Studies, Dhofar University, Oman.
  • Mohammed Abdulgalil Abugohar Centre for Preparatory Studies, Dhofar University, Oman.

DOI:

https://doi.org/10.24200/jonus.vol9iss2pp267-296

Abstract

Background and Purpose: Assessing oral presentations involves evaluating multiple interconnected skills, including fluency, coherence, syntax, grammar, diction, and task achievement, making it a complex and challenging task. An oral presentation rubric serves as a tool to promote consistency and objectivity in such assessments. This study investigates the application and perceptions of the in-house speaking assessment rubric used by the Foundation Program (FP) English Unit at Dhofar University (DU) across levels 1, 2, and 3. The aim is to identify potential weaknesses and ambiguities in the rubric and provide recommendations to ensure fair, objective, and uniform assessments.

Methodology: This study employed a mixed-methods approach within an exploratory sequential design. Data were collected over three phases: a pre-session task (n=18), hands-on focus group sessions (n=16) to gather qualitative insights into the rubric’s application, and a questionnaire (n=40) to obtain quantitative data on teachers’ perceptions, cross-referenced with implementation practices.

Findings: The findings revealed significant gaps in the current rubric, leading to inconsistencies in grading practices for both formative and summative assessments. Despite assessors expressing confidence in the rubric’s reliability and their level of training, notable discrepancies were observed between their perceptions and actual application of the rubric criteria. These inconsistencies highlight design flaws and ambiguities in the rubric, contributing to variability in assessment outcomes.

Contributions: This study underscores the necessity of regularly refining assessment rubrics to reduce ambiguity and subjective interpretation. It also emphasizes the importance of comprehensive user training to enhance assessment practices, support professional development, establish clearer learning expectations for students, and minimize grading disparities. These findings offer valuable insights for improving the assessment of oral presentations in educational contexts.

Keywords: Rubric, oral presentation, assessment, discrepancy, raters.

Author Biographies

  • Mahija Veetil, Centre for Preparatory Studies, Dhofar University, Oman.

Ms. Mahija Veetil is a Lecturer of English at Dhofar University, Oman. She has extensive expertise in ELT. Her areas of interest include language assessment and ELT pedagogy.

  • Rashida Iqbal, Centre for Preparatory Studies, Dhofar University, Oman.

Ms. Rashida Iqbal is a Lecturer of English at Dhofar University, Oman. She has extensive expertise in ELT. Her areas of interest include classroom management, differentiation and assessment, and action research.

  • Mohammed Abdulgalil Abugohar, Centre for Preparatory Studies, Dhofar University, Oman.

Dr. Mohammed Abdulgalil Abugohar holds a PhD in Applied Linguistics and is an Assistant Professor of Applied Linguistics. Currently, he is a Lecturer of English at Dhofar University, Oman. His research spans Applied Linguistics, Language Assessment, TESOL, TEFL, ESL, ESP (EMP), CALL, MALL, language technologies, and teaching and learning resources.

Published

2024-07-31

How to Cite

Navigating the perception-practice dilemma: An inquiry into assessors’ implementation of oral presentation rubrics. (2024). Journal of Nusantara Studies (JONUS), 9(2), 267-296. https://doi.org/10.24200/jonus.vol9iss2pp267-296