Harnessing Encoder-based Transformer Models for Multilingual Student Feedback Sentiment Analysis

Authors

  • Majdah Alvi, The Islamia University of Bahawalpur
  • Prof. Dr. Muhammad Ali Qureshi, The Islamia University of Bahawalpur
  • Dr. Muhammad Bux Alvi

Keywords

Encoder-based Transformer Models, Large Language Models, Multilingual Sentiment Analysis, Student Feedback, Textual Data

Abstract

Encoder-based Transformer models (ETMs) have demonstrated outstanding performance in sentiment analysis, comparable to human capability. ETMs can be applied to understand students' perspectives expressed through textual feedback gathered via semi-automated collection methods, including online social networks, university blogs, personal interviews, and Google Forms. However, analyzing multilingual student feedback is challenging, especially when it includes code-switched expressions, and the difficulty is compounded when the feedback contains resource-poor language scripts. Traditionally, student feedback was processed using rule-based approaches or traditional machine learning algorithms (MLAs), which proved inadequate. Deep neural networks and their variants performed better but required labeled training data. Pre-trained encoder-based transformer models have shown promising results; however, their performance on multilingual datasets (English, Urdu, and Roman Urdu) remains suboptimal. This study collected a multilingual dataset of 11,686 student feedback samples and explored the potential of ETMs for automating multilingual sentiment analysis of student feedback. The dataset comprised 33.2% English, 33.3% Urdu, and 33.5% Roman Urdu feedback sentences. The applied ETM classified 5,730 sentences as positive, 4,232 as negative, and 1,724 as neutral, achieving better performance than the generic ETM. Additionally, the study identifies challenges and limitations in multilingual feedback analysis and offers recommendations for model selection and fine-tuning.
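
For illustration, the following is a minimal sketch of how a pre-trained multilingual encoder-based transformer could be applied to such feedback using the Hugging Face transformers library. The model checkpoint, label set, and example sentences are assumptions for demonstration only and do not represent the exact model or data used in this study.

    # Minimal sketch: classifying multilingual student feedback with a
    # pre-trained encoder-based transformer (illustrative only; the study's
    # actual model, labels, and data may differ).
    from transformers import pipeline

    # A publicly available multilingual XLM-RoBERTa sentiment checkpoint is
    # assumed here; a generic or fine-tuned ETM could be swapped in.
    classifier = pipeline(
        "sentiment-analysis",
        model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
    )

    feedback = [
        "The lectures were well organised and easy to follow.",   # English
        "Course bohat acha tha lekin assignments mushkil thay.",  # Roman Urdu
        # Urdu script: "This course was very good."
        "یہ کورس بہت اچھا تھا",
    ]

    # Each prediction carries a sentiment label (positive/neutral/negative)
    # and a confidence score.
    for text, result in zip(feedback, classifier(feedback)):
        print(f"{result['label']:<8} {result['score']:.2f}  {text}")

As the abstract suggests, replacing the generic checkpoint with one fine-tuned on the collected English, Urdu, and Roman Urdu feedback is what would be expected to improve performance.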

Published

2026-03-31