LEARNER-DRIVEN ORAL ASSESSMENT CRITERIA FOR ENGLISH PRESENTATION

Mardiana binti Idris, Abdul Halim bin Abdul Raof

Abstract

Learner-centred assessment has been widely promoted within the learner-centred approach. However, learners are rarely given the opportunity to engineer their own assessment. This study therefore attempted to gauge (1) the functionality of the scaling structure of learner-driven oral assessment criteria and (2) the reliability of learner-assessors in applying their own assessment criteria during oral presentations. Eleven participants from an electrical engineering group of one-year matriculation programme students took part in developing the assessment criteria. First, participants discussed suitable criteria and a scaling structure in small groups. Second, each group presented its oral assessment criteria for peer feedback. Third, participants discussed and finalised the oral assessment criteria for the class. Fourth, to test the learner-driven assessment criteria, three speakers from the group volunteered to present their speeches and were assessed by their peers while presenting. Participants’ ratings and scores were then analysed with Many-Facet Rasch Measurement (MFRM) software. Findings show that, despite the criteria being developed by learners, the scaling structure functioned usefully: Rasch threshold measures indicated gaps of more than 1.4 logits between assessment levels, and learner-assessor reliability exceeded 0.80. The significance of this study lies in raising awareness of improving learners’ oral presentation skills as well as developing learner autonomy.
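The two MFRM diagnostics reported above can be sketched in plain Python. This is an illustration only: the function names are ours, and the severity measures, standard errors, and Andrich thresholds below are invented placeholders, not values from the study’s output. It shows (a) Rasch separation reliability, computed as the ratio of "true" variance (observed variance minus mean error variance) to observed variance, and (b) a check that adjacent rating-scale thresholds advance by at least 1.4 logits, the conventional guideline for distinct category levels.

```python
def separation_reliability(measures, std_errors):
    """Rasch separation reliability:
    (observed variance - mean error variance) / observed variance."""
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / n
    error_var = sum(se ** 2 for se in std_errors) / n
    return (observed_var - error_var) / observed_var

def thresholds_advance(thresholds, min_gap=1.4):
    """True if adjacent Andrich thresholds are ordered and at least
    min_gap logits apart (guideline for usefully distinct categories)."""
    return all(b - a >= min_gap for a, b in zip(thresholds, thresholds[1:]))

# Hypothetical learner-assessor severity measures (logits) and standard errors
severities = [-1.2, -0.5, 0.0, 0.3, 0.9, 1.4]
errors = [0.30, 0.28, 0.25, 0.27, 0.31, 0.29]
print(round(separation_reliability(severities, errors), 2))  # 0.89, i.e. > 0.80

# Hypothetical Andrich thresholds for a four-level scale (gaps: 1.7, 2.7 logits)
print(thresholds_advance([-2.1, -0.4, 2.3]))  # True
```

With these made-up inputs, reliability comes out above the 0.80 benchmark and the threshold gaps exceed 1.4 logits, mirroring the pattern the abstract reports for the learner-driven criteria.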

 

Keywords: Learner autonomy, learner-centred, oral skills, Rasch measurement.

 

Cite as: Idris, M. & Abdul Raof, A. H. (2019). Learner-driven oral assessment criteria for English presentation. Journal of Nusantara Studies, 4(1), 365-383. http://dx.doi.org/10.24200/jonus.vol4iss1pp365-383


