Show simple item record

dc.contributor.author: Onello, Rachel
dc.date.accessioned: 2015-07-01T14:18:15Z
dc.date.available: 2016-01-29T19:25:04Z
dc.date.issued: 2015
dc.identifier.uri: http://hdl.handle.net/10713/4617
dc.description: University of Maryland, Baltimore. Nursing. Ph.D. 2015
dc.description.abstract: Background: During a time of increased faculty shortages, nursing programs rely heavily on adjunct instructors to facilitate the clinical learning experiences essential for fostering clinical reasoning and multiple ways of thinking. A significant component of clinical learning is the reflective feedback conversation between instructor and student. Quality feedback conversations use specific techniques that explicitly encourage students to reflect, analyze, and extrapolate learning to other contexts. Comparing actual performance to desired performance through the exploration and analysis of learner thoughts and actions is an effective way to help close learning gaps and improve future performance. However, many clinical instructors responsible for facilitating feedback conversations are inadequately prepared to do so, lacking the pedagogical training necessary to maximize learning. Current resources for clinical instructors fail to provide a clear framework for facilitating feedback conversations that produce quality learning. Methods: To address the need for a tool that can guide the development of feedback skills and assess feedback behaviors among clinical instructors, the Feedback Assessment for Clinical Education (FACE) was developed. The FACE was designed to assess instructor behaviors associated with facilitating quality feedback conversations across clinical settings and disciplines. Using a multiphase approach, the FACE tool, comprising a rating form and rater handbook, was developed from Mezirow's Transformative Learning Theory, multidisciplinary research, and input from experts in education, organizational behavior, psychology, and the health sciences. An iterative comparative process grounded in theory and research guided the identification and development of key constructs associated with effective feedback conversations. Results: Qualitative content validity testing informed the operationalization of constructs into behavioral indicators and resulted in a six-element behaviorally anchored rating scale. Quantitative content validity testing using Lawshe's Content Validity Ratio and the Content Validity Index suggests strong content validity at each level of the tool. Conclusions: This work offers a theory-based, research-driven tool to assess the quality of feedback in clinical settings and presents opportunities across education, research, and practice to enhance the current state of knowledge on best practices for feedback conversations. Future psychometric testing is needed to fully realize the potential of the FACE tool.
dc.language.iso: en_US
dc.subject: assessment/evaluation of clinical performance
dc.subject: clinical education
dc.subject: communication skills
dc.subject: faculty development
dc.subject.mesh: Faculty, Nursing--education
dc.subject.mesh: Feedback
dc.title: Assessing the quality of feedback during clinical learning: Development of the Feedback Assessment for Clinical Education (FACE)
dc.type: dissertation
dc.contributor.advisor: Regan, Mary J.
dc.contributor.advisor: Johantgen, Mary E.
dc.description.uriname: Full Text
refterms.dateFOA: 2019-02-19T18:07:43Z


Files in this item

Name: Onello_umaryland_0373D_10603.pdf
Size: 48.75Mb
Format: PDF
