Assessing Language Reasoning Skills of Indonesian Students Using Computerized Adaptive Testing
Abstract
Language, as the fundamental means of communication, is the symbolization of thoughts conveyed to others. Understanding the structure of a message depends on the speaker's language reasoning ability, which can be measured using stimuli ranging from the simplest to the most complex. Traditionally, language reasoning has been assessed through written tests, which require extensive preparation and are time-consuming. This study proposes a model for measuring language reasoning ability using a Computerized Adaptive Test (CAT). The CAT adjusts the difficulty of questions in real time based on the participant's responses: if a participant answers correctly, the system presents a more challenging question; if the participant answers incorrectly, the system selects an easier one. This adaptive approach provides a tailored and efficient assessment experience that accurately measures the participant's ability. The research began by developing a valid and reliable language reasoning test instrument and its quadrant class, then determined the starting, jumping, and stopping points, culminating in the CAT design. The proposed CAT can map basic language reasoning skills, from understanding the concept of facts, applying linguistic rules according to the conventions established in the clauses read, breaking down information into more specific forms, judging the value of ideas, and combining word choice, idea formation, and context, to analogical and comparative thinking. The analysis revealed that the participants' dominant ability was comparative thinking, which involves comparing language forms, conditions, settings, and messages in written discourse. Moreover, the proposed CAT system was shown to speed up the testing process while enabling students to complete the tests according to their abilities.
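The adaptive loop described in the abstract (a harder item after a correct response, an easier item after an incorrect one, governed by starting, jumping, and stopping points) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: it assumes a Rasch (1PL) item bank, a fixed-step ability update as the jumping rule, and a fixed maximum test length as the stopping rule; the names run_cat, probability_correct, and answer_fn are hypothetical.

import math
import random

def probability_correct(theta, difficulty):
    # Rasch (1PL) probability that an examinee with ability theta
    # answers an item of the given difficulty correctly (assumption,
    # not the model reported in the paper).
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def run_cat(item_bank, answer_fn, start_theta=0.0, step=0.5, max_items=20):
    # Minimal adaptive loop: move up in difficulty after a correct
    # response, down after an incorrect one; stop after max_items
    # or when the bank is exhausted.
    theta = start_theta          # starting point
    administered = []
    for _ in range(max_items):   # stopping point (fixed length here)
        remaining = [i for i in item_bank if i["id"] not in administered]
        if not remaining:
            break
        # Select the unused item whose difficulty is closest to the
        # current ability estimate (maximum-information rule for 1PL).
        item = min(remaining, key=lambda i: abs(i["difficulty"] - theta))
        administered.append(item["id"])
        correct = answer_fn(item)
        # Jumping rule: fixed step up on a correct answer, down otherwise.
        theta += step if correct else -step
    return theta, administered

# Usage with a simulated examinee whose true ability is 1.0
bank = [{"id": k, "difficulty": d} for k, d in enumerate(
    [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])]
simulated = lambda item: random.random() < probability_correct(1.0, item["difficulty"])
final_theta, order = run_cat(bank, simulated)
print(final_theta, order)

In an operational CAT, the fixed-step update would typically be replaced by a maximum-likelihood or Bayesian ability estimate and the stopping rule by a target standard error; the sketch only mirrors the correct-harder / incorrect-easier logic described above.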
DOI: https://doi.org/10.20961/ijpte.v8i1.89913
Copyright (c) 2024 Memet Sudaryanto, Muhammad Nur Yasir Utomo
This work is licensed under a Creative Commons Attribution 4.0 International License.