Teachers' media-related competencies are indispensable for preparing students for participation and responsible co-creation in a culture of digitality. Specifically, teachers need both basic media-related competencies and media-related teaching competencies. Instruments for measuring these media competencies are helpful for promoting them in a targeted and successful way within teacher education. However, existing instruments mostly rely on simple, often very generally worded self-assessments, which impedes a valid measurement of competencies. This article presents an alternative measurement approach that contextualizes, and thereby sharpens, self-assessments of media competencies by means of scenarios. The scenario-based approach yields valid measurement results, but has the disadvantage for field use that the measurement becomes considerably more extensive. In the present article, we address this by developing a short scale in a multi-stage procedure. Based on a dataset of N = 552 (pre-service) teachers, we develop a scenario-based short scale for measuring instrumental and critical-reflexive media competencies as well as media-related teaching competencies. This short scale is validated again in a further study with N = 204 teachers. The results of both studies support the validity of the short scale. Teacher education and research are thus provided with an acceptance-promoting and resource-efficient instrument that allows a reliable and valid measurement of instrumental and critical-reflexive media competencies as well as media-related teaching competencies.
Teachers' technology-related skills are often measured with self-assessments. However, self-assessments are often criticised for being inaccurate and biased. Scenario-based self-assessment is a promising approach to make self-assessment more accurate and less biased. In this study with N = 552 in-service and student teachers, we validated the scenario-based self-assessment instrument IN.K19+ for teachers. The instrument enables scenario-based self-assessment of teachers' instrumental and critical digital skills and technology-related teaching skills. In a confirmatory factor analysis, we show that the instrument has sufficient factorial validity. To test the predictive validity of the instrument, we examined the instrument's relationships with the frequency of technology use during teaching and with teacher-initiated student learning activities involving digital technologies. Results from structural equation modelling show that instrumental digital skills and technology-related teaching skills are positively related to the frequency of digital technology use during teaching, while critical digital skills are not. With respect to the initiation of student learning activities, instrumental and critical digital skills are related to initiating learning activities that involve lower cognitive engagement, whereas technology-related teaching skills are related to initiating learning activities that indicate higher cognitive engagement. The results show that instrumental and critical digital skills play an important role in the basic use of digital technologies in the classroom, while technology-related teaching skills turn out to be crucial for more complex scenarios of digital technology use. This pattern of findings supports the predictive validity of the IN.K19+ instrument.
We propose a model of contextual facilitators for learning activities involving technology (in short: C♭-model) for both on-site and distance learning environments in higher education. The C♭-model aims at systematizing research on digital teaching and learning and offers a roadmap for future research to understand the complex dynamics of factors that lead to successful digital teaching and learning in higher education via suitable learning activities. First, we introduce students' learning outcomes as central benchmarks of teaching and learning with digital technologies in higher education. Second, we focus on a major proximal factor for students' learning outcomes and thus apply a learning activities perspective. Learning activities involving digital technologies reflect students' cognitive processes when using digital technologies and are causally connected with students' learning outcomes. Third, we highlight several contextual facilitators for learning activities involving technology in the C♭-model: learning opportunities that result from higher education teachers' instructional use of technology and students' self-arranged learning opportunities involving digital technologies. Apart from these proximal facilitators, we include more distal factors, namely, higher education teachers' knowledge, skills, and attitudes toward digital technology; higher education teachers' qualification; students' and teachers' digital technology equipment; and institutional, organizational, and administrative factors.
Instruments that assess teachers' skills and attitudes on the basis of a broad range of specific standards and demands for teaching with digital technologies have been lacking to date. Based on the K19 framework, we validated the scenario-based instrument IN.K19, which simultaneously assesses technology-related teaching skills and attitudes via self-assessment. In our study with N = 90 teachers and student teachers with teaching experience, our confirmatory factor analyses demonstrate that the instrument has satisfactory factorial validity. To investigate its predictive validity, we examined the instrument's relationships with teachers' frequency of technology use in class and teachers' initiation of different types of student learning activities involving technology. Results from structural equation modelling show relationships between self-assessed skills in different phases of teaching with technology and the self-reported initiation of student learning activities involving overt actions (active, constructive, and interactive learning activities), supporting the predictive validity of our instrument. Positive attitudes towards technology-related teaching also exhibit positive relationships with the initiation of learning activities involving digital technologies, but more specifically with learning activities that do not include observable actions by learners (passive learning activities). Thus, teachers' self-assessed technology-related skills, rather than their attitudes, might contribute to facilitating the learning activities crucial for students' learning.
This study investigated the initiation of digitally supported learning activities, and the personal and institutional factors associated with it, in different higher education courses, based on the C♭-model. The C♭-model is a theoretical framework that systematizes the contextual factors influencing students' learning activities as the most important facilitator of students' learning success. Using a self-assessment instrument with anchored scenarios in a sample of 1,625 higher education teachers, we identified three levels at which higher education teachers initiated digital learning activities: a low level (powerpointers), a moderate level (clickerers), and a high level (digital pros). The findings also support the relevance of the contextual factors specified in the C♭-model for initiating a high level of digital learning activities, namely digitalization policy and commitment of the university administration, institutional equipment, technical and educational support, self-assessed basic digital skills, and self-assessed technology-related teaching skills. All of these factors explain a substantial amount of variance in the level of initiated digital learning activities. We conclude that a comprehensive approach, rather than isolated measures, might contribute to successful teaching and learning in higher education.
Technology has been shown to be beneficial for initiating cognitive engagement. In the present study, cognitive engagement was conceptualized using the ICAP framework, which proposes four levels of cognitive engagement (interactive, constructive, active, passive) that can be determined from observable student activities. To initiate cognitive engagement, teachers require diagnostic skills. With this study, we aimed to foster those skills. We designed a simulation and investigated its validity with N = 213 pre-service teachers. Moreover, we evaluated the difficulty of diagnosing the levels of cognitive engagement when planning and implementing lessons. We used linear regressions for the validation and confusion matrices for insights into the diagnostic process. The study results show that the difficulty of diagnosing levels of cognitive engagement varies due to (a) challenges in inferring the involved cognitive processes and (b) different phases of teaching. Levels of cognitive engagement that require inferential processes to identify are more difficult to diagnose. This highlights the importance of adding scaffolds to our simulation to help pre-service teachers understand the processes of generating and co-generating knowledge. More importantly, the study reveals shortcomings of the ICAP framework and presents initial suggestions for its further development.
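The confusion-matrix analysis mentioned above can be illustrated with a minimal sketch. This is not the authors' analysis code; the function, level labels, and the diagnosis data below are hypothetical, shown only to clarify how cross-tabulating intended ICAP levels against diagnosed levels exposes which levels are mistaken for which:

```python
from collections import Counter

# ICAP levels of cognitive engagement (labels assumed for illustration).
LEVELS = ["interactive", "constructive", "active", "passive"]

def confusion_matrix(true_levels, diagnosed_levels, labels=LEVELS):
    """Rows: intended level of an item; columns: level diagnosed by a teacher."""
    counts = Counter(zip(true_levels, diagnosed_levels))
    return [[counts[(t, d)] for d in labels] for t in labels]

# Fabricated example diagnoses: constructive items are sometimes mistaken
# for active ones, mirroring the inference difficulty described above.
true = ["interactive", "constructive", "constructive", "active", "passive", "passive"]
diag = ["interactive", "active", "constructive", "active", "passive", "active"]

for label, row in zip(LEVELS, confusion_matrix(true, diag)):
    print(f"{label:>12}: {row}")
```

Off-diagonal counts in such a matrix indicate systematic misdiagnoses, e.g. a large count in the (constructive, active) cell would suggest that the covert knowledge-generating processes of constructive activities are hard to infer from observable behaviour.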