Teamwork evaluation during emergency medicine residents’ high-fidelity simulation
Francesca Innocenti, Elena Angeli, Andrea Alesi, Margherita Scorpiniti, Riccardo Pini

High-Dependency Unit, Department of Clinical and Experimental Medicine, Azienda Ospedaliero-Universitaria Careggi, Firenze, Italy

Correspondence to Francesca Innocenti, High-Dependency Unit, Department of Clinical and Experimental Medicine, Azienda Ospedaliero-Universitaria Careggi, Lg Brambilla 3, Firenze 50134, Italy; francescaeluigi{at}libero.it

Abstract

Background Teamwork training has been included in several emergency medicine (EM) curricula; the aim of this study was to compare the performance of different teamwork assessment scales during high-fidelity simulation with EM residents.

Methods In the period October 2013–June 2014, we held bimonthly high-fidelity simulation sessions with novice (years I–III, group 1 (G1)) and senior (years IV–V, group 2 (G2)) EM residents; scenarios were designed to simulate the management of critically ill patients. Videos of the sessions were assessed by three independent raters using the following scales: Emergency Team Dynamics (ETD), Clinical Teamwork Scale (CTS) and Team Emergency Assessment Measure (TEAM). In the period March–June 2014, participants also completed the CTS and ETD as a self-assessment after each scenario.

Results The analysis, based on 18 sessions, showed good internal consistency and good to fair inter-rater reliability for the three scales (TEAM, CTS, ETD: Cronbach's α 0.954, 0.954, 0.921; intraclass correlation coefficients (ICC) 0.921, 0.917, 0.608). Individual CTS items achieved highly significant ICCs, with 12 of 13 comparisons reaching an ICC ≥0.70; the same threshold was reached by 4 of 11 TEAM items and 1 of 8 ETD items. Spearman's r was 0.585 between ETD and CTS, 0.694 between ETD and TEAM, and 0.634 between TEAM and CTS (scales converted to percentages, all p<0.0001). Participants rated themselves higher than the external raters did (CTS: 101±9 vs 90±9; ETD: 25±3 vs 20±5, all p<0.0001); an illustrative sketch of these reliability measures follows the abstract.

Conclusions All of the examined scales demonstrated good internal consistency, with slightly better inter-rater reliability for the CTS than for the other tools.
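
To make the reliability figures in the Results concrete, below is a minimal, hypothetical Python sketch of the kind of statistics reported: Cronbach's α computed from an item-score matrix and Spearman's r between percentage-converted scale totals. The data are randomly generated stand-ins (18 sessions, 13 CTS items on an assumed 0–10 item scale), not the study data, and the helper function is illustrative; the software actually used by the authors is not stated in the abstract.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Stand-in scores: 18 video-reviewed sessions x 13 CTS items (0-10 scale assumed)
rng = np.random.default_rng(2014)
cts_items = pd.DataFrame(rng.integers(0, 11, size=(18, 13)))
print("Cronbach's alpha (simulated CTS data):", round(cronbach_alpha(cts_items), 3))

# Between-scale agreement: Spearman's r on totals converted to percentages
cts_pct = cts_items.sum(axis=1) / (13 * 10) * 100
team_pct = cts_pct + rng.normal(0, 8, size=18)   # stand-in TEAM percentages
rho, p = spearmanr(cts_pct, team_pct)
print(f"Spearman's r = {rho:.3f} (p = {p:.4f})")

# Inter-rater reliability (the ICCs above) could be computed from the three
# raters' total scores in long format, for example with
# pingouin.intraclass_corr(data=long_df, targets='session',
#                          raters='rater', ratings='score').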
