Nasir, Jauwairia
Role-playing activities offer opportunities for developing individuals’ creativity, communication, and problem-solving skills. Recent advances in large language models (LLMs) facilitate fluent conversations with machines. To investigate the benefits and pitfalls of LLMs in the relatively unexplored context of human-agent role-play as a culturally contextualised activity, a dataset of twelve human-agent interactions, produced by two researchers with two state-of-the-art LLMs, was annotated using a frame analysis scheme from the literature. The pilot study shows that human-agent play has a complexity similar to that of human-human play, in which players simultaneously maintain the identities of themselves, of external observers, and of play characters, going beyond the pretend-reality dualism. Results suggest that, while the LLMs can maintain and shift between roles, they play some roles better than others and display cultural and gender stereotypes. Additionally, the coding scheme shows potential to help identify LLM outputs that require embodied enactment, and to be used for benchmarking LLMs for role-play.
Transactive discussion during collaborative learning is crucial for building on each other's reasoning and developing problem-solving strategies. In a tabletop collaborative learning activity, student actions on the interface can drive their thinking and be used to ground discussions, thus affecting their problem-solving performance and learning. However, it is not clear how the interplay of actions and discussions, for instance, students performing actions or pausing actions while discussing, is related to their learning. In this paper, we seek to understand how the transactivity of actions and discussions is associated with learning. Specifically, we ask what the relationship between discussion and actions is, and how it differs between those who learn (gainers) and those who do not (non-gainers). We present a combined differential sequence mining and content analysis approach to examine this relationship, which we applied to data from 32 teams collaborating on a problem designed to help them learn concepts of minimum spanning trees. We found that discussion and action occur concurrently more frequently among gainers than non-gainers. Further, we find that gainers tend to perform more reflective actions along with discussion, such as looking at their previous solutions, than non-gainers. Finally, gainers' discussions consist more of goal clarification, reflection on past solutions, and agreement on future actions than those of non-gainers, who do not share their ideas and cannot agree on next steps. Thus, this approach helps us identify how the interplay of actions and discussion could lead to learning, and the findings offer guidelines to teachers and instructional designers regarding indicators of productive collaborative learning and when and how they should intervene to improve learning. Concretely, the results suggest that teachers should support elaborative, reflective, and planning discussions along with reflective actions.
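The differential sequence mining step described above can be sketched as comparing the support of short action/discussion sequence patterns between the two groups. The event codes and toy data below are illustrative assumptions, not the study's actual coded transcripts:

```python
from collections import Counter

# Hypothetical coded interaction streams per team:
# "A" = action only, "D" = discussion only, "AD" = concurrent action+discussion.
gainers = [["D", "AD", "AD", "A", "D"], ["AD", "D", "AD", "A"]]
non_gainers = [["A", "A", "D", "A"], ["A", "D", "A", "A"]]

def pattern_support(teams, n=2):
    """Fraction of teams in which each n-gram pattern occurs at least once."""
    counts = Counter()
    for seq in teams:
        seen = set()  # count each pattern at most once per team
        for i in range(len(seq) - n + 1):
            seen.add(tuple(seq[i:i + n]))
        counts.update(seen)
    return {p: c / len(teams) for p, c in counts.items()}

def differential_patterns(group_a, group_b, n=2):
    """Support difference per pattern; positive = over-represented in group_a."""
    sa, sb = pattern_support(group_a, n), pattern_support(group_b, n)
    return {p: sa.get(p, 0.0) - sb.get(p, 0.0) for p in set(sa) | set(sb)}

diff = differential_patterns(gainers, non_gainers)
# Concurrent action+discussion bigrams such as ("AD", "AD") come out positive,
# mirroring the finding that gainers act and discuss concurrently more often.
```

The content analysis step would then inspect the discussion segments inside the differentiating patterns (e.g. goal clarification, reflection) by hand.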
In educational HRI, it is generally believed that a robot's behavior has a direct effect on the engagement of a user with the robot, with the task at hand, and also with their partner in the case of a collaborative activity. Increasing this engagement is then held responsible for increased learning and productivity. The state of the art usually investigates the relationship between the behaviors of the robot and the engagement state of the user while assuming a linear relationship between engagement and the end goal: learning. However, is it correct to assume that to maximise learning, one needs to maximise engagement? Furthermore, conventional supervised models of engagement require human annotators to obtain labels. This is not only laborious but also introduces further subjectivity into an already subjective construct of engagement. Can we have machine-learning models for engagement detection whose annotations do not rely on human annotators? Looking deeper at the behavioral patterns, the learning outcomes, and a performance metric in a multi-modal dataset collected in an educational human-human-robot setup with 68 students, we observe a hidden link that we term Productive Engagement. We theorize that a robot incorporating this knowledge will 1) distinguish teams based on engagement that is conducive to learning; and 2) adopt behaviors that eventually lead the users to increased learning by means of being productively engaged. Furthermore, this seminal link paves the way for machine-learning models in educational HRI with automatic labeling based on the data.
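The idea of automatic labeling without human annotators can be sketched as deriving engagement labels directly from the outcome data. The function names, thresholds, and toy scores below are illustrative assumptions, not the paper's actual labeling procedure:

```python
# Minimal sketch: label a team "productively engaged" (1) when both its
# normalised learning gain and its task performance exceed thresholds.
# Thresholds gain_thr and perf_thr are hypothetical, chosen for illustration.

def learning_gain(pre, post):
    """Normalised learning gain from pre/post test scores in [0, 1]."""
    return (post - pre) / (1.0 - pre) if pre < 1.0 else 0.0

def label_team(pre, post, performance, gain_thr=0.3, perf_thr=0.5):
    """Automatic label: 1 = productively engaged, 0 = not."""
    return int(learning_gain(pre, post) >= gain_thr and performance >= perf_thr)

# Toy teams: (pre-test, post-test, task performance metric).
teams = [(0.2, 0.7, 0.8), (0.4, 0.45, 0.3)]
labels = [label_team(*t) for t in teams]  # first team labeled 1, second 0
```

Such outcome-derived labels could then supervise a classifier over the multi-modal behavioral features, sidestepping manual annotation.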