Evaluation of Student Competence in Simulation Following a Prebriefing Activity: A Pilot Study
Doctor of Philosophy
Sarah Morgan, Teresa Duchateau, Barbara Daley
Clinical Judgement, Nursing, Nursing Education, Prebriefing, Simulation
Background: Simulation-based learning (SBL) shows promise for improving clinical competence in nursing education. However, the efficacy of evidence-based prebriefing activities, and valid and reliable systems for evaluating those strategies, remains a gap in the literature. Preliminary evidence shows that prebriefing can improve participant outcomes. The goal of this pilot study was to compare the outcome of clinical competence for prelicensure nursing students based on assignment to one of three prebriefing activities: standard, care plan, or concept mapping. Methods: This was a quasi-experimental, double-blind, posttest-only, comparison-group pilot study. The participants were from an associate degree professional nursing program; of a potential 30 students, 28 agreed to participate. Data collection occurred during two laboratory sessions of their medical-surgical course. The students completed their assigned prebriefing activity and then engaged in a simulation scenario. Two faculty simulation evaluators (FSEs) watched the video-recorded performances and evaluated the students' clinical competence using the Creighton Competency Evaluation Instrument (C-CEI). Demographic data were used to analyze the homogeneity of the groups and to determine whether other factors affected clinical competence. An ANOVA was used to answer the research questions. Results: Based on analysis of gender, age, course grade, and race and ethnicity, the groups were similar. The FSEs' overall (Kappa = 0.096, p = 0.02) and communication (Kappa = 0.349, p = 0.01) C-CEI scores differed significantly between raters, indicating poor interrater reliability. Based on a Cronbach's alpha of 0.74, FSE Two's ratings were used for analysis. There were no significant differences in C-CEI scores based on the students' assigned prebriefing activity.
There were, however, significant differences between participant scores based on their assigned scenario (communication: 4.3(26), p < 0.001; clinical judgement: 2.7(26), p = 0.011; overall: 2.8(26), p = 0.01). Conclusions: Issues with the faculty simulation facilitators (FSFs) and the FSEs revealed ways to improve future simulation-based research. Ensuring that scenarios are of equivalent complexity helps assure comparable participant performance, and measures to enhance FSE interrater reliability must be implemented. Limitations: The sample size was inadequate to detect statistically significant differences. The lack of randomized assignment to groups is also a limitation, and an FSF provided additional cueing, which could have affected some students' C-CEI scores.
Beman, Sarah Black, "Evaluation of Student Competence in Simulation Following a Prebriefing Activity: A Pilot Study" (2017). Theses and Dissertations. 1585.