Table 3

Summary of studies included in the narrative synthesis

Each entry lists: reference and country; aim; study design; participants and sample; findings.
1. Boet et al,20 Canada
Aim: To compare the effectiveness of an interprofessional within-team debriefing with instructor-led debriefing on team performance during a simulated crisis.
Study design: Randomised, controlled, repeated measures design. Teams were randomised to within-team or instructor-led debriefing groups. After debriefing, teams managed a different post-test crisis scenario. Sessions were videotaped, and blinded expert examiners used the TEAM scale to assess performance.
Participants and sample: n=120 (40 teams, each comprising 1 anaesthesia trainee, 1 surgical trainee and 1 staff circulating operating room nurse).
Findings: Team performance improved significantly from pretest to post-test, regardless of the type of debriefing (F1,38=7.93, p=0.008). There was no significant difference in improvement between within-team and instructor-led debriefing.

2. Bond et al,14 USA
Aim: To assess learner perception of high-fidelity mannequin-based simulation and debriefing to improve understanding of 'cognitive dispositions to respond' (CDRs).
Study design: Emergency medicine (EM) residents were exposed to two simulations and block-randomised to technical/knowledge debriefing before completing a written survey and an interview with an ethnographer. Four investigators reviewed the interview transcripts and qualitatively analysed comments.
Participants and sample: n=62 EM residents.
Findings: Technical debriefing was better received than cognitive debriefing. The authors theorise that an understanding of CDRs can be facilitated through simulation training.

3. Brett-Fleegler et al,26 USA
Aim: To examine the reliability of Debriefing Assessment for Simulation in Healthcare (DASH) scores in evaluating the quality of healthcare simulation debriefings, and whether the scores demonstrate evidence of validity.
Study design: Rater trainees were familiarised with the DASH before watching, rating and then discussing three separate course introductions and subsequent debriefings. Inter-rater reliability, intraclass correlations and internal consistency were calculated.
Participants and sample: n=114 international healthcare educators (nurses, physicians, other health professionals, and masters- and PhD-level educators) participated in 4.5-hour web-based interactive DASH rater training sessions.
Findings: Differences between the ratings of the three standardised debriefings were statistically significant (p<0.001). DASH scores showed evidence of good reliability and preliminary evidence of validity.

4. Forneris et al,24 USA
Aim: To investigate the impact of Debriefing for Meaningful Learning (DML) on clinical reasoning.
Study design: Quasiexperimental pretest and post-test repeated measures design. Teams were randomly assigned to DML or usual debriefing. Clinical reasoning was evaluated using the Health Sciences Reasoning Test (HSRT).
Participants and sample: n=153 undergraduate (UG) nursing students (teams of 4).
Findings: Significant improvement in HSRT mean scores for the intervention group (p=0.03); the change in the control group was non-significant. The difference in HSRT mean score change between the intervention and control groups was not significant (p=0.09).

5. Freeth et al,16 UK
Aim: To examine participants' perceptions of the Multidisciplinary Obstetric Simulated Emergency Scenarios (MOSES) course, designed to enhance non-technical skills (NTS) among obstetric teams and improve patient safety.
Study design: Telephone (n=47) or email (n=8) interviews with MOSES course participants and facilitators, and analysis of video-recorded debriefings.
Participants and sample: n=93 (senior midwives n=57, obstetricians n=21, obstetric anaesthetists n=15).
Findings: Many participants improved their knowledge and understanding of interprofessional team working, especially communication and leadership in obstetric crisis situations. Participants with some insight into their non-technical skills showed the greatest benefit in learning. Interprofessional simulation is a valuable approach to enhancing non-technical skills.

6. Geis et al,22 USA
Aim: To define optimal healthcare team roles and responsibilities, identify latent safety threats within the new environment and screen for unintended consequences of proposed solutions.
Study design: Prospective pilot investigation using laboratory and in situ simulations totalling 24 critical patient scenarios, conducted over four sessions (over 3 months).
Participants and sample: n=81 healthcare providers (predominantly nurses, paramedics and physicians).
Findings: Mayo High Performing Team Scale (MHPTS) means were calculated for each phase of training. Simulation laboratory teamwork scores had a mean of 18.1 for the first session and 18.9 for the second (p=0.68); in situ teamwork scores had a mean of 12.3 for the first session and 15 for the second (p=0.25). The overall laboratory mean was 18.5 (SD 2.31) compared with an overall in situ mean of 13.7 (SD 4.40), indicating worse teamwork during in situ simulation (p=0.008).

7. Grant et al,30 USA
Aim: To compare the effectiveness of video-assisted oral debriefing (VAOD) and oral debriefing alone (ODA) on participant behaviour.
Study design: Quasiexperimental pretest and post-test design. Teams were randomised to intervention (VAOD) or control (ODA). Behaviours were assessed using an adapted Clinical Simulation Tool.
Participants and sample: n=48 UG nursing students: 24 intervention and 24 control (teams of 4 or 5 students).
Findings: The VAOD group had a higher mean score (6.62, SD 6.07) than the control group (4.23, SD 4.02), but the difference did not reach significance (p=0.11).

8. Hull et al,17 UK
Aim: To explore the value of 360° evaluation of debriefing by examining expert debriefing evaluators', debriefers' and learners' perceptions of the quality of interdisciplinary debriefings.
Study design: Cross-sectional observational study. The quality of debriefing was assessed using the validated Objective Structured Assessment of Debriefing framework.
Participants and sample: n=278 students, in 41 teams.
Findings: Expert debriefing evaluators' and debriefers' perceptions of debriefing quality differed significantly; debriefers perceived the quality of the debriefing they provided more favourably than expert debriefing evaluators did. Learners' perceptions of debriefing quality differed from those of both expert evaluators and debriefers.

9. Kim et al,19 Korea
Aim: To compare the educational impact of two postsimulation debriefing methods, focused and corrective feedback (FCF) versus structured and supported debriefing (SSD), on team dynamics in simulation-based cardiac arrest team training.
Study design: Pilot randomised controlled study. Primary outcome: improvement in team dynamics scores between baseline and test simulation. Secondary outcomes: improvements in team clinical performance scores, and self-assessed comprehension of and confidence in cardiac arrest management and team dynamics.
Participants and sample: n=95 4th-year UG medical students randomly assigned to FCF or SSD (teams of 6).
Findings: The SSD team dynamics score at post-test was higher than at baseline (baseline: 74.5 (65.9–80.9), post-test: 85.0 (71.9–87.6), p=0.035). Scores for the FCF group did not improve from baseline to post-test. There were no differences between the two groups in improvement in team dynamics or team clinical performance scores (p=0.328, respectively).

10. Kolbe et al,18 (2013), Switzerland
Aim: To describe the development of an integrated debriefing approach and demonstrate how trainees perceive this approach.
Study design: Post-test-only (debriefing quality) and pretest and post-test (psychological safety and leader inclusiveness) design, with no control group. Debriefing was administered during a simulation-based combined clinical and behavioural skills training day for anaesthesia staff (doctors and nurses). Each trainee participated in and observed four scenarios, and completed a self-report debriefing quality scale.
Participants and sample: n=61 (4 senior anaesthetists, 29 residents, 28 nurses) from a teaching hospital in Switzerland participated in 40 debriefings, resulting in 235 evaluations. All attended voluntarily and participated in exchange for credits.
Findings: The utility of the debriefings was evaluated as highly positive, and pre–post comparisons revealed that psychological safety and leader inclusiveness increased significantly after debriefings.

11. Lammers et al,15 USA
Aim: To identify causes of errors during a simulated, prehospital paediatric emergency.
Study design: Quantitative (cross-sectional, observational) and qualitative research. Crews participated in a simulation using their own equipment and drugs. A scoring protocol was used to identify errors. Debriefing conducted by a trained facilitator immediately after the simulated event elicited root causes of active and latent errors.
Participants and sample: n=90 (male 67%, female 33%). Two-person crews (45 in total) made up of: emergency medical technician (EMT)/paramedic, paramedic/paramedic or paramedic/specialist.
Findings: Simulation, followed immediately by facilitated debriefing, uncovered underlying causes of active cognitive, procedural, affective and teamwork errors, latent errors and error-producing conditions in EMS paediatric care.

12. LeFlore and Anderson,23 USA
Aim: To determine whether self-directed learning with facilitated debriefing during team-simulated clinical scenarios has better outcomes compared with instructor-modelled learning with modified debriefing.
Study design: Participants were randomised to either the self-directed learning with facilitated debriefing group (group A: seven teams) or the instructor-modelled learning with modified debriefing group (group B: six teams). Tools assessed students' pre/post knowledge (discipline-specific), satisfaction (5-point Likert scale and open-ended questions), and technical and team behaviours.
Participants and sample: Convenience sample of students (nurse practitioner, registered nurse, social work and respiratory therapy). Thirteen interdisciplinary teams participated, with one student from each discipline per team.
Findings: Group B was significantly more satisfied than group A (p=0.01). Group B registered nurse and social work students were significantly more satisfied than those in group A (30.0±0.50 vs 26.2±3.0, p=0.03 and 28.0±2.0 vs 24.0±3.3, p=0.04, respectively). Group B had significantly better scores than group A on 8 of the 11 components of the Technical Evaluation Tool, and intervened more quickly. Group B also had significantly higher scores on 8 of 10 components of the Behavioral Assessment Tool and in overall team scores.

13. Oikawa et al,32 USA
Aim: To determine whether learner self-performance assessment (SPA) and team-performance assessment (TPA) differed when simulation-based education (SBE) was supported by self-debriefing (S-DB) compared with traditional facilitator-led debriefing (F-DB).
Study design: Prospective, controlled cohort intervention study. Primary outcome measures: SPA and TPA, assessed using bespoke global rating scales with subdomains of patient assessment, patient treatment and teamwork.
Participants and sample: n=57 postgraduate year 1 medical interns randomised to 9 F-DB and 10 S-DB teams. Teams completed four sequential scenarios.
Findings: Learner SPA and TPA scores improved overall from the first to the fourth scenario (p<0.05). The F-DB and S-DB cohorts did not differ in overall SPA scores.

14. Reed,28 USA
Aim: To explore the impact on the debriefing experience of three types of debriefing: discussion only, discussion plus blogging, and discussion plus journalling.
Study design: Experimental design with random assignment. Primary outcome measure: Debriefing Experience Scale (DES).
Participants and sample: n=48 UG nursing students randomly assigned to 'discussion', 'blogging' or 'journalling'.
Findings: DES scores were highest for discussion only, followed by journalling and then blogging. Differences reached statistical significance for only 3 of the 20 DES items.

15. Savoldelli et al,21
Aim: To investigate the value of the debriefing process during simulation and to compare the educational efficacy of oral and videotape-assisted oral feedback against no debriefing (control).
Study design: Prospective, randomised, controlled, three-arm, repeated measures study design. After completing a pretest scenario, participants were randomly assigned to the control, oral feedback or videotape-assisted oral feedback condition. The debriefing focused on non-technical skills performance and was followed by a post-test scenario. Trained evaluators scored participants using the Anaesthesia Non-Technical Skills scoring system. Videotapes were reviewed by two blinded independent assessors to rate non-technical skills.
Participants and sample: n=42 anaesthesia residents in postgraduate years 1, 2 and 4.
Findings: Statistically significant improvement in non-technical skills for both the oral and videotape-assisted oral feedback groups (p<0.005), but no difference between the groups and no improvement in the control group. The addition of video review did not provide any advantage over oral feedback alone.

16. Smith-Jentsch et al,25 USA
Aim: To investigate the effects of guided team self-correction using an expert model of teamwork as the organising framework.
Study design: Study 1: cohort design with data collected over 2 years. In year 1, data on 15 teams were collected using the existing Navy method of prebriefing and debriefing; instructors were then trained in the guided team self-correction method. In year 2, data were collected on 10 teams briefed and debriefed by the instructors trained in year 1. Study 2: teams were randomly assigned to the experimental or control condition.
Participants and sample: Study 1: n=385 male members of 25 US Navy submarine attack centre teams; teams ranged from 7 to 21 in size. Study 2: n=65 male lieutenants in the US Navy, randomly assigned to five-person teams.
Findings: Teams debriefed using the expert model-driven guided team self-correction approach developed more accurate mental models of teamwork (study 1) and demonstrated better teamwork processes and more effective outcomes (study 2).

17. Van Heukelom et al,27 USA
Aim: To compare two styles of managing a simulation session: postsimulation debriefing versus in-simulation debriefing.
Study design: Observational study with a retrospective pre–post survey (7-point Likert scale) of student confidence levels, teaching effectiveness of the facilitator, effectiveness of the debriefing strategy and realism of the simulation. Participants were randomly assigned to either the postsimulation or the in-simulation debriefing condition.
Participants and sample: n=160 third-year medical students enrolled in the 'Clinical Procedures Rotation'.
Findings: Statistically significant differences between groups: students in the postsimulation debriefing group gave higher ratings for effective learning, better understanding of their actions and effectiveness of the debriefing.

18. Zinns et al,29 USA
Aim: To create and assess the feasibility of a postresuscitation debriefing framework (Review the event, Encourage team participation, Focused feedback, Listen to each other, Emphasize key points, Communicate clearly, Transform the future: REFLECT).
Study design: Feasibility pretest and post-test study. Outcome measure: presence of REFLECT components, as assessed by paediatric emergency medicine (PEM) fellows, team members and blinded reviewers.
Participants and sample: n=9 PEM fellows completed the REFLECT training (intervention) and led teams of 4.
Findings: Significant improvement in overall use of REFLECT was reported by PEM fellows (63% to 83%, p<0.01) and team members (63% to 82%, p<0.001). Blinded reviewers found no statistically significant improvement (60% to 76%, p=0.09).