Simulation-based learning is increasingly being implemented across different domains of higher education to facilitate essential skills and competences (e.g., diagnostic skills and problem-solving). However, the lack of research that assesses and compares simulations used in different contexts (e.g., from a design perspective) makes it challenging to transfer good practices effectively or to establish guidelines for effective simulations across domains. This study suggests some initial steps to address this issue by investigating the relations between learners' experience in simulation-based learning environments and their diagnostic accuracy across several domains and types of simulations, with the goal of facilitating cross-domain research and generalizability. The findings demonstrate that the learners' experience ratings used are correlated with objective performance measures and can be used for meaningful comparisons across domains. Measures of perceived extraneous cognitive load were found to be specific to the simulation and situation, while perceived involvement and authenticity were not. Further, the negative correlation between perceived extraneous cognitive load and perceived authenticity was more pronounced in interaction-based simulations. These results provide supporting evidence for theoretical models that highlight the connection between learners' experience in simulated learning environments and their performance. Overall, this research contributes to the understanding of the relationship between learners' experience in simulation-based learning environments and their diagnostic accuracy, paving the way for the dissemination of best practices across different domains within higher education.
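To make the kind of analysis reported here concrete, the following is a minimal sketch (not the study's actual analysis) of correlating self-reported experience ratings with an objective accuracy measure; the variable names and data are invented for illustration.

```python
# Minimal sketch: relate learners' self-reported experience ratings to an
# objective diagnostic-accuracy measure. Data and column names are
# illustrative assumptions, not the study's data or analysis.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-learner data: perceived extraneous cognitive load,
# perceived authenticity, and diagnostic accuracy (proportion correct).
df = pd.DataFrame({
    "extraneous_load": [2.1, 3.4, 1.8, 4.0, 2.7, 3.1],
    "authenticity":    [4.2, 3.1, 4.5, 2.6, 3.8, 3.3],
    "accuracy":        [0.80, 0.55, 0.90, 0.40, 0.70, 0.60],
})

# Rank-based correlation is a common choice for Likert-type ratings.
for rating in ("extraneous_load", "authenticity"):
    rho, p = spearmanr(df[rating], df["accuracy"])
    print(f"{rating} vs. accuracy: rho={rho:.2f}, p={p:.3f}")
```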
Collaborations between researchers and practitioners have recently become increasingly popular in education, and educational design research (EDR) may benefit greatly from investigating such partnerships. One important domain in which EDR on collaborations between researchers and practitioners can be applied is research on simulation-based learning. However, frameworks describing both the research and design processes in research programs on simulation-based learning are currently lacking. The framework proposed in this paper addresses this research gap. It is derived from theory and delineates the levels, phases, activities, roles, and products of research programs that develop simulations as complex scientific artifacts for research purposes. This dual-level framework applies to research programs with a research committee and multiple subordinate research projects. The proposed framework is illustrated by examples from the actual research and design process of an interdisciplinary research program investigating the facilitation of diagnostic competences through instructional support in simulations. On a theoretical level, the framework contributes primarily to the EDR literature by offering a unique dual-level perspective. On a practical level, it may help by providing recommendations to guide the research and design process in research programs.
Recently, the use of large language models as middleware connecting various AI tools and other large language models has led to the development of so-called large multimodal foundation models, which can process spoken text, music, images and videos. In this overview, we explain a new set of opportunities and challenges that arise from the integration of large multimodal foundation models in education.
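To illustrate the middleware idea in the loosest possible terms, here is a sketch of a language model routing requests to modality-specific tools; every function and tool name is a hypothetical placeholder rather than any real system's API.

```python
# Minimal sketch of the "LLM as middleware" pattern: a language model
# routes a request to a specialized tool for the relevant modality.
# Everything here (query_llm, the TOOLS table) is a hypothetical placeholder.

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model; simple keyword
    matching keeps the sketch runnable without a real model."""
    return "transcribe_audio" if "audio" in prompt else "describe_image"

TOOLS = {
    "transcribe_audio": lambda request: f"<transcript for: {request}>",
    "describe_image":   lambda request: f"<description for: {request}>",
}

def route(user_request: str) -> str:
    # The model only chooses the tool; the tool handles the modality.
    tool_name = query_llm(f"Pick a tool from {list(TOOLS)} for: {user_request}")
    return TOOLS[tool_name](user_request)

print(route("Please transcribe this audio recording of a lecture."))
```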
Background
Artificial intelligence, particularly natural language processing (NLP), makes it possible to automate the formative assessment of written task solutions and thereby provide adaptive feedback automatically. A laboratory study found that, compared with static feedback (an expert solution), adaptive feedback automated through artificial neural networks enhanced preservice teachers' diagnostic reasoning in a digital case-based simulation (a minimal sketch of such a pipeline follows this abstract). However, the effectiveness of the simulation with the different feedback types and the generalizability to field settings remained unclear.
Objectives
We tested the generalizability of the previous findings and the effectiveness of a single simulation session with either feedback type in an experimental field study.
Methods
In regular online courses, 332 preservice teachers at five German universities were randomly assigned to one of three groups: (1) a simulation group with NLP-based adaptive feedback, (2) a simulation group with static feedback and (3) a no-simulation control group. We analysed the effect of the simulation with the two feedback types on participants' judgement accuracy and justification quality.
Results and Conclusions
Compared with static feedback, adaptive feedback significantly enhanced justification quality but not judgement accuracy. Only the simulation with adaptive feedback significantly benefited learners' justification quality over the no-simulation control group, while no significant differences in judgement accuracy were found.
Our field experiment replicated the findings of the laboratory study. Only a simulation session with adaptive feedback, unlike static feedback, seems to enhance learners' justification quality but not judgement accuracy. Under field conditions, learners require adaptive support in simulations and can benefit from NLP-based adaptive feedback using artificial neural networks.
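The sketch referenced in the Background above outlines how NLP-based formative assessment of written solutions can, in principle, drive adaptive feedback. It is illustrative only: the diagnostic categories, example texts, and feedback messages are invented, and the TF-IDF-plus-neural-network pipeline stands in for whatever model the study actually used.

```python
# Minimal sketch of NLP-based adaptive feedback: classify a written task
# solution and return a feedback message for the predicted category.
# Labels, texts, and feedback messages are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Expert-labelled training examples (in practice: many annotated solutions).
texts = [
    "The student struggles with word problems but reads fluently.",
    "Reading fluency is low; decoding errors dominate.",
    "Motivation seems fine; errors concentrate in arithmetic.",
    "Slow, halting reading with frequent decoding mistakes.",
]
labels = ["math_difficulty", "reading_difficulty",
          "math_difficulty", "reading_difficulty"]

FEEDBACK = {
    "math_difficulty": "Consider evidence on arithmetic strategies, too.",
    "reading_difficulty": "Your justification matches the decoding evidence.",
}

model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(texts, labels)

def adaptive_feedback(solution_text: str) -> str:
    """Return the feedback message for the predicted diagnostic category."""
    return FEEDBACK[model.predict([solution_text])[0]]

print(adaptive_feedback("Frequent decoding errors; reading is very slow."))
```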
As digitalization progresses and technologies advance rapidly, digital simulations offer great potential for learning professional practices in higher-education contexts such as medical or teacher education. Technological advancements increasingly facilitate the personalization of learning support to meet the individual needs of learners, whose diverse prerequisites influence their learning processes, activities, and outcomes. However, systematic approaches to combining technologies with educational theories and evidence are scarce. In this article, we propose using data on relevant learning prerequisites and learning processes as a basis for personalizing feedback and scaffolding to facilitate learning with simulated practice representations. We connect theoretical concepts with methodological and technical approaches (e.g., using artificial intelligence) for modeling important learner variables as a basis for personalized learning support. The interplay between the learner and the simulation environment is outlined in a conceptual framework that may guide systematic research on personalized learning support in digital simulations.
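As a loose illustration of using learner data to personalize support (a sketch under assumed feature names, not the framework's prescribed method), a simple learner model could map measured prerequisites and process data to a scaffolding decision.

```python
# Sketch: a simple learner model that personalizes scaffolding.
# Feature names, thresholds, and the logistic-regression setup are
# illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [prior_knowledge (0-1), time_on_task_minutes];
# label 1 = learner benefited from additional scaffolding.
X = np.array([[0.2, 35], [0.8, 12], [0.4, 28], [0.9, 10], [0.3, 40], [0.7, 15]])
y = np.array([1, 0, 1, 0, 1, 0])

learner_model = LogisticRegression().fit(X, y)

def choose_support(prior_knowledge: float, time_on_task: float) -> str:
    """Offer scaffolding when the model predicts the learner needs it."""
    p_needs_support = learner_model.predict_proba(
        [[prior_knowledge, time_on_task]])[0, 1]
    return "offer scaffolding" if p_needs_support > 0.5 else "feedback only"

print(choose_support(0.35, 30))
```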
Scientific reasoning and argumentation (SRA) skills are crucial in higher education, yet comparing studies on these skills remains challenging due to the scarcity of well-developed SRA tests with robust psychometric properties. In this paper, the case-based ASSESSRA approach is proposed to evaluate university students' SRA skills, focusing specifically on the skills of evidence evaluation and drawing conclusions. A prototype constructed using this approach in an educational context demonstrated reliability within an expert panel (n = 9; ICC = .81). In a subsequent study, the validity of the ASSESSRA approach was examined with 207 students; a partial-credit model exhibited an acceptable fit, showing no significant outfit statistics and an excellent distribution of ability parameters and Thurstonian thresholds. The ASSESSRA prototype, coupled with the provided guidelines, offers a versatile framework for developing comparable SRA tests across diverse domains. This approach not only addresses the current gap in SRA assessment instruments but also holds promise for enhancing the understanding and promotion of SRA skills in higher education.
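For readers wanting to compute a panel reliability figure of the kind reported above (ICC = .81), here is a minimal sketch using the pingouin package; the ratings and column names are invented for illustration.

```python
# Minimal sketch: intraclass correlation for an expert panel.
# Data and column names are invented; pingouin is one common choice.
import pandas as pd
import pingouin as pg

# Long-format ratings: each expert (rater) scores each test item (target).
ratings = pd.DataFrame({
    "item":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5],
    "expert": ["A", "B", "C"] * 5,
    "score":  [3, 4, 3, 1, 2, 1, 4, 4, 5, 2, 3, 2, 5, 4, 5],
})

icc = pg.intraclass_corr(data=ratings, targets="item",
                         raters="expert", ratings="score")
print(icc[["Type", "ICC"]])  # e.g., ICC2k for average-rating reliability
```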