Show, don’t tell : developing and validating a role-play-based simulation (RobS) for the assessment of pre-service EFL teachers’ feedback competence on writing / Thomas Janzen. Berlin : Logos Verlag, [2025], © 2025
Contents
- 1 Introduction
- Part I - (EFL) Teacher Education in Germany: From Developing Competence to Assessing Performance
- 2 Structural Elements of Teacher Education in Germany
- 3 Competence of EFL Teachers
- 3.1 Approaching the Terminology of Competence
- 3.2 Models of Competence in Teacher Education
- 3.2.1 Model of Professional Competence
- 3.2.2 Competence as Knowledge
- 3.2.3 Competence as a Continuum
- 3.3 Frameworks for (EFL) Teacher Education in Germany
- 3.4 Empirical Findings on Professional Knowledge of Pre-Service EFL Teachers
- 3.5 Competence between Knowledge and Practice
- 4 A Performance-Oriented Perspective on Constructive Alignment in the First Phase of EFL Teacher Education
- 4.1 Constructive Alignment
- 4.2 Learning Objectives: The Core Practice Approach in EFL Teacher Education
- 4.3 Teaching and Learning Activities: Simulation-based Learning
- 4.3.1 Characteristics of Simulations
- 4.3.2 Typology of Simulations
- 4.3.3 Perception of Simulations and their Effectiveness for Learning
- 4.4 Assessment: RobS in EFL Teacher Education
- 4.4.1 Concepts of Assessment in Higher Education
- 4.4.2 Assessment in Higher Education: Students’ Perceptions and Impact on Learning
- 4.4.3 Simulation-based Assessment in Higher and Teacher Education
- 4.5 Addressing the Assessment Gap in Teacher Education
- 5 Summary
- Part II - Developing a Role-Play-Based Simulation (RobS) to Assess Feedback Competence on Writing in the EFL Context
- 6 The Relevance of Feedback on Writing in the EFL Context
- 6.1 Feedback
- 6.1.1 On the History and Terminology of Feedback
- 6.1.2 Feedback as Literacies
- 6.1.3 Relevance of Feedback in (EFL) Teacher Education
- 6.2 Writing in the EFL Classroom
- 6.2.1 Cognitive Process Models for (L1) Writing
- 6.2.2 Approaches to Writing in the EFL Classroom
- 6.2.3 Statutory Guidelines for Learning and Teaching Writing
- 6.3 Synthesis: Feedback on Writing
- 7 Developing a Framework for Feedback Competence on Writing in the EFL Context
- 7.1 Methodological Reflections on Theoretical Research
- 7.2 Instruments for the Evaluation of Feedback Performance
- 7.3 The Framework
- 7.4 Discussion and Limitations of the Framework
- 8 A RobS to Assess Feedback Competence on Writing in the EFL Context
- 9 Summary
- Part III - Methodology: Investigating the Validity of the RobS
- 10 Criteria for Test Evaluation
- 10.1 Overview of Test Criteria
- 10.1.1 Secondary Criteria: Practicality, Fairness and Authenticity
- 10.1.2 Primary Criteria: Objectivity and Reliability
- 10.2 From Validity to Validation
- 10.2.1 Traditional Approaches to Validity
- 10.2.2 Messick’s Approach to Construct Validity
- 10.2.3 The Argument-based Approach to Validation
- 10.3 Implications for the Validation Process
- 11 IUA for the RobS
- 11.1 A Classification System for Validation by Schreiber & Gut (2022)
- 11.2 Making the Claims: The IUA and Sources of Evidence for the Test
- 12 Summary
- Part IV - Empirical Findings: Results of the Investigations
- 13 Research Design and Overview
- 14 (Pre-)Pilot Phase and Survey: Actor Training and Material Development
- 14.1 Pre-Pilot Phase
- 14.2 Pilot Survey
- 14.2.1 Data Collection and Sample Characteristics
- 14.2.2 Investigating the Verbal Triggers
- 14.2.3 Deciding on the Test Documents
- 14.2.4 Deciding on the Research Procedure
- 14.3 Summary
- 15 Main Survey
- 15.1 Data Collection and Sample Characteristics
- 15.2 Study 1: Pre-service EFL Teachers’ Perspective on RobS as Assessment
- 15.3 Study 2: Expert Interviews
- 15.3.1 Method: Expert Interviews
- 15.3.2 Study Design and Data Processing
- 15.3.3 Results of the Qualitative Expert Interviews
- 15.3.4 Discussion of the Results and Modifications to the Scoring Instrument
- 15.3.5 Limitations
- 15.3.6 Summary
- 15.4 Study 3: Scoring Reliability
- 15.4.1 Scoring Reliability on the Item Level
- 15.4.2 Scoring Reliability on the Test Score Level
- 15.4.3 Limitations and Implications from the Reliability Investigations
- 15.4.4 Scoring Independence from Actors
- 15.4.5 Summary
- 15.5 Study 4: The Relationship between Test Score and Other Variables
- 16 Summary
- Part V - Discussion: Evaluating the Validity of the RobS
- 17 Evaluating the Claims: The Validity Argument
- 17.1 Claim 1: Model
- 17.2 Claim 2: Test
- 17.3 Claim 3: Performance
- 17.4 Claim 4: Item Score
- 17.5 Claim 5: Test Score
- 17.6 Claim 6: Latent Trait
- 17.7 Claim 7: Interpreted Trait
- 17.8 Claim 8: Use
- 17.9 Conclusion: To use, or not to use?
- 18 Looking Back – Looking Forward
- 19 References
- List of Abbreviations
- List of Figures
- List of Tables
- Appendix
- A. Test Material
- A.1. M0: Vignette
- A.2. M1: Learner Text
- A.3. M2: The Task for the Text
- A.4. M3: Excerpt from the Series of Lessons
- A.5. M4: Sample Text from the Series of Lessons
- A.6. M5: Empty Sheet for Notes
- A.7. Role Description of Mia (German)
- A.8. Text Sample A from the Pilot Survey
- B. Rating Manual
- C. Demographic and Academic Data Collected in the Written Survey
- D. Interview Guidelines
- D.1. Pre-Pilot Phase: Expert Interviews
- D.2. Pre-Pilot Phase: PSET Interviews
- D.3. Pilot Survey
- D.4. Main Survey
- D.5. Expert Interview
- E. Transcription Guidelines
- F. Coding Manuals
- G. Coding Samples
- G.1. Stimulated Recall
- G.2. Simulation
- G.3. Guideline Interview – Main Survey
- G.4. Expert Interview – Part 1
- G.5. Expert Interview – Part 2
- H. Statistical Information
- H.1. Intercoding of Sub-Codes: PSET Guideline Interview
- H.2. Discrepancies in Inter-Rating I
- H.3. Inter-Item Correlation Matrix for 27 Items
- H.4. Test Score Frequencies
- H.5. Normal Distribution in the Actor-specific Groups
- H.6. Further Statistical Information on SPK, PCK and CK/LP
- I. Digital Supplement
