2021 EDSIG Proceedings: Abstract Presentation
Do Leaders Articulate Differently? Identifying and Validating Project Team Leaders Through Text Analysis of Peer Evaluations of Team-Member Contributions
Dmytro Babik
James Madison University
Iryna Babik
Boise State University
Abstract
Identifying and rewarding team leaders stimulates students’ engagement and active contributions in team projects and enhances the learning experience. One important tool that helps identify team leaders is peer evaluation of team-member contributions. The quantitative components of such peer evaluations (such as ratings and rankings) may help quickly spot aspiring team leaders. However, richer evidence and important insights into the team dynamics and interactions that give rise to leaders can only be obtained by examining qualitative textual peer comments. The latter is challenging in large classes without technology aids. Moreover, the assessment of textual comments may be skewed by the instructor’s subjective judgments and biases (lenient interpretation of comments producing “false positives,” strict interpretation producing “false negatives”) and by students’ attempts to “game the system.”
The purpose of this study is to propose and validate a data-centric approach to identifying team leaders in the project-based learning context through automated semantic text analysis of student comments in peer evaluations of team-member contributions. We developed a statistical model based on quantitative semantic text analysis variables (Activity, Optimism, Certainty, Realism, and Commonality, along with 35 sub-variables) produced by the Diction software.

We also conducted an initial validation of the model using data gathered during six semesters of an undergraduate Systems Analysis course (300 students in total, four projects per semester, about 1,000 individual observations). Each semester, students completed four team projects; in each project, a student was placed on a new team; in some projects, teams were assigned by the instructor, while in others, teams were self-organized. At the end of each project, each student completed peer evaluations using the Mobius SLIP software, rating each team member (including oneself) and providing a comment to justify each rating (including a self-reflection on one’s own contribution). The instructor examined the aggregate quantitative indicators to detect candidate leaders and confirmed each candidate by examining the qualitative peer comments that the candidate received. Specifically, the instructor searched for repeated indicative keywords and phrases, such as “leader”, “lead”, “organized”, “motivated”, “managed”, and “contributed most”, while considering the meaning and emotional load of the phrases. Confirmed team leaders were given bonus credit and publicly praised in class, with the intent of fostering a drive toward project engagement and leadership.
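To illustrate how such a manual keyword scan might be automated, the following is a minimal sketch; the keyword list, data structures, and function names are illustrative assumptions rather than the actual instrument used in the course.

```python
import re
from collections import Counter

# Illustrative leader-indicative patterns (an assumption; the instructor's
# actual keyword list and its weighting may differ).
LEADER_PATTERNS = [
    r"\blead(?:er|ership)?\b",
    r"\borganiz\w*\b",
    r"\bmotivat\w*\b",
    r"\bmanag\w*\b",
    r"\bcontributed most\b",
]

def leader_signal(received_comments):
    """Count leader-indicative phrase matches in the peer comments a student received."""
    counts = Counter()
    for comment in received_comments:
        for pattern in LEADER_PATTERNS:
            counts[pattern] += len(re.findall(pattern, comment.lower()))
    return sum(counts.values()), counts

# Hypothetical usage: comments received by one student in one project.
comments = [
    "She took the lead and organized our weekly meetings.",
    "Contributed most of the design work and motivated everyone to finish early.",
]
total_matches, breakdown = leader_signal(comments)
print(total_matches, dict(breakdown))
```

Such a scan captures only surface keyword frequency; the study's semantic variables, by contrast, are intended to capture meaning and tone beyond literal keyword matches.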
Collected data include (a) a binary variable for leadership status determined by the instructor, and (b) quantitative semantic text analysis variables produced by the Diction software (five core semantic variables and 35 sub-variables) for peer comments (structured as comments given, comments received, and self-reflections). First, a discriminant analysis was conducted to investigate whether the semantic variables can predict a student’s leadership status (as determined by the instructor). Second, leaders and non-leaders were compared on the combination of semantic variables using MANOVA to determine whether the semantic structure and content of peer comments by recognized team leaders differ significantly from those of other team members.
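As a sketch of how these two analyses could be run, the snippet below uses scikit-learn for the discriminant analysis and statsmodels for the MANOVA; the file name, column names, and five-fold cross-validation setup are assumptions for illustration, not the study’s actual pipeline.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from statsmodels.multivariate.manova import MANOVA

# Assumed layout: one row per student-project observation, with the five core
# Diction variables and the instructor-assigned leadership flag (0 or 1).
df = pd.read_csv("peer_eval_semantics.csv")  # hypothetical file name
semantic_vars = ["Activity", "Optimism", "Certainty", "Realism", "Commonality"]

# (1) Discriminant analysis: can the semantic variables predict leadership status?
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, df[semantic_vars], df["leader"], cv=5).mean()
print(f"Cross-validated classification accuracy: {accuracy:.2f}")

# (2) MANOVA: do leaders and non-leaders differ on the combined semantic profile?
manova = MANOVA.from_formula(
    "Activity + Optimism + Certainty + Realism + Commonality ~ leader", data=df
)
print(manova.mv_test())
```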
Further investigation in this study will empirically validate the proposed model on the collected data and suggest a practical, actionable approach to implementing this analysis in the classroom in real time.