Assessing the quality of feedback during clinical learning: Development of the Feedback Assessment for Clinical Education (FACE)
Abstract
Background: During a time of increased faculty shortages, nursing programs rely heavily on adjunct instructors to facilitate the clinical learning experiences essential for fostering clinical reasoning and multiple ways of thinking. A significant component of clinical learning is the reflective feedback conversation between instructor and student. Quality feedback conversations use specific techniques that explicitly encourage students to reflect, analyze, and extrapolate learning to other contexts. Comparing actual performance to desired performance through the exploration and analysis of learner thoughts and actions is an effective way to close learning gaps and shape future performance. However, many clinical instructors responsible for facilitating feedback conversations are inadequately prepared to do so, lacking the pedagogical training necessary to maximize learning. Current resources for clinical instructors fail to provide a clear framework that guides them in facilitating feedback conversations for quality learning.

Methods: To address the need for a tool that can guide the development of feedback skills and assess feedback behaviors among clinical instructors, the Feedback Assessment for Clinical Education (FACE) was developed. The FACE was designed to assess instructor behaviors associated with facilitating quality feedback conversations across clinical settings and disciplines. Using a multiphase approach, the FACE tool, comprising a rating form and rater handbook, was developed from Mezirow's Transformative Learning Theory, multidisciplinary research, and input from experts in education, organizational behavior, psychology, and the health sciences. An iterative comparative process using theory and research guided the identification and development of key constructs associated with effective feedback conversations.
Results: Qualitative content validity testing informed the operationalization of constructs into behavioral indicators and resulted in a six-element behaviorally anchored rating scale. Quantitative content validity testing using Lawshe's Content Validity Ratio and the Content Validity Index suggests strong content validity at each level of the tool.

Conclusions: This work offers a theory-based, research-driven tool for assessing the quality of feedback in clinical settings and presents opportunities across education, research, and practice to advance the current state of knowledge on best practices for feedback conversations. Future psychometric testing is needed to fully realize the potential of the FACE tool.

Description
University of Maryland, Baltimore. Nursing. Ph.D. 2015

Keyword
assessment/evaluation of clinical performance
clinical education
communication skills
faculty development
Faculty, Nursing--education
Feedback