Repetitive simulation is an effective instructional design within a pediatric resident simulation curriculum
Selin Tuysuzoglu Sagalowsky,1 Kimball A Prentiss,2 Robert J Vinci3

1 Departments of Emergency Medicine and Pediatrics, Columbia University College of Physicians and Surgeons, NewYork-Presbyterian Morgan Stanley Children’s Hospital, New York, New York, USA
2 Department of Emergency Medicine, Baystate Medical Center at Tufts University School of Medicine, Springfield, Massachusetts, USA
3 Department of Pediatrics, Boston University School of Medicine, Boston, Massachusetts, USA

Correspondence to Dr Selin Tuysuzoglu Sagalowsky, Departments of Emergency Medicine and Pediatrics, Columbia University College of Physicians and Surgeons, NewYork-Presbyterian Morgan Stanley Children’s Hospital, New York, NY 10032, USA; ss4588@cumc.columbia.edu

Abstract

Introduction Repetitive paediatric simulation (scenario-debrief-scenario; RPS) is an instructional design that allows immediate application of learner-directed feedback, in contrast to standard simulation (scenario-debrief; STN). Our aim was to examine the impact of RPS embedded within a paediatric resident simulation curriculum, comparing it to STN.

Methods In this prospective educational cohort study, paediatric residents were enrolled in STN (n=18) or RPS (n=15) groups from August 2012 through June 2013. Each group performed an initial high-fidelity simulation and another after 1–2 weeks. Attitudes, confidence and knowledge were assessed using anonymous surveys with each scenario and at 4–6 months. Skills were assessed in real time with a modified Tool for Resuscitation Assessment Using Computerised Simulation (TRACS). Two blinded reviewers assessed a subset of videotaped scenarios for TRACS inter-rater reliability.

Results Both STN and RPS designs were rated highly. The curriculum led to significant short-term and long-term improvements in confidence, knowledge and performance, with no significant differences between groups. All final respondents (n=10: six STN, four RPS) reported that they would prefer RPS to STN. TRACS intraclass correlation was 0.87 among all reviewers.

Conclusions Paediatric residents reported preference for RPS over STN, with comparable impacts on confidence, knowledge and performance. The modified TRACS was a reliable tool to assess individual resident performance. Further research is needed to determine whether RPS is a more effective instructional design for teaching resuscitation skills to paediatric residents.

  • resident education
  • repetitive simulation
  • assessment tools
  • instructional design

Introduction

During core paediatric training, resuscitation experience is limited by case rarity, medical education trends restricting house staff involvement and work hour requirements that limit clinical time with patients.1 The vast majority of graduating paediatric residents lack experience leading resuscitations and are deficient in advanced life support skills.2 3 Simulation may bridge this educational gap by allowing for the deliberate practice of rare events and has been credited for an ‘educational revolution’ in paediatrics.4 5 However, best practices in simulation education remain largely unknown.

A meta-analysis by Cheng et al found that compared with no intervention, paediatric simulation-based education produced significant improvements in knowledge, performance in simulated settings, behaviours with patients and time to task completion.4 However, the authors highlight a paucity of studies comparing simulation-based instructional designs and call for future work to compare different forms of simulation-based education. In a recent expert review of simulation-based research design in paediatrics, Cheng et al similarly emphasise the need for comparative research of optimal instructional designs for specific learners, learning objectives and environments.6 The authors underscore the need to develop clear outcome measures and validated paediatric assessment tools for simulation-based educational interventions.

Repetitive practice is an instructional design rooted in adult educational theory, but with limited data in simulation-based education. A single study of repetitive paediatric simulation (RPS), which involves performing a simulation scenario, debriefing and then immediately repeating the scenario, found that it improved paediatric residents’ immediate, self-assessed confidence, knowledge and skills over standard simulation (STN), though it did not assess these effects over time or rate performance with objective measures.7 STN typically terminates with debriefing, thereby delaying the application of knowledge and corrected behaviour until the next simulation or clinical encounter. In contrast, RPS allows learners to immediately incorporate feedback, a model consistent with the adult learning theories of error management and Kolb’s Experiential Learning Cycle.7–9

The primary objective of this study was to compare RPS and STN designs with respect to paediatric resident confidence, knowledge and objective performance over time. Secondarily, we sought to determine the reliability of a modified paediatric simulation-based assessment tool in evaluating individual resident performance. To our knowledge, this is only the second study of RPS. It builds on prior RPS research by examining whether the effects on confidence and knowledge decay over time and by assessing objective performance with a validated instrument.

Methods

Participants

We conducted a prospective educational cohort study comparing simulation designs, nested within a simulation curriculum at the Boston Combined Residency Programme.10 Paediatric residents rotating through Boston Medical Center and participating in educational simulations from August 2012 through June 2013 were offered voluntary enrolment. All participants signed informed consent prior to participation.

Participants were informed that the study compared debriefing methodologies but were blinded to the hypothesis being tested. Study investigators were not blinded to group assignment or study design; two study investigators (STS and KAP), along with a third paediatric emergency physician and residency Associate Programme Director, served as simulation instructors.

Study design

Residents were assigned to either study (RPS) or standard (STN) groups. We alternated months in which RPS and STN were performed. We randomly selected whether to begin the study with an RPS or STN month; however, subsequent group assignment was not randomised, as study participants were recruited from a convenience sample of residents rotating at Boston Medical Center in any given month.

Both groups received a 20 min standardised introduction with survey administration, followed by a 10 min case. The STN group received 20 min of debriefing, whereas the RPS group received 15 min of debriefing followed by a 5 min repeat of the original case, without a second debriefing.

A triad of residents (Team Leader, Airway Manager and Circulation Manager) performed high-fidelity simulations (SimBaby; Laerdal Medical, Stavanger, Norway) at baseline and again after 1–2 weeks. Roles were randomly assigned by investigators to ensure a senior-level Team Leader in each group by stratifying according to training level and using a list randomiser to assign roles in the pattern of ‘Team Leader, Airway Manager and Circulation Manager’. Additional residents and medical students participated in supporting roles for educational purposes but were not enrolled in the study.
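
As a concrete illustration only, the stratified assignment described above might look like the following sketch; the resident identifiers and the helper function are hypothetical, as the study itself used a list randomiser rather than this code.

```python
import random

def assign_roles(seniors, juniors, seed=None):
    """Form triads with a senior-level Team Leader and shuffled junior roles."""
    rng = random.Random(seed)
    seniors, juniors = list(seniors), list(juniors)  # copy so callers' lists are untouched
    rng.shuffle(seniors)
    rng.shuffle(juniors)
    triads = []
    while seniors and len(juniors) >= 2:
        triads.append({
            "Team Leader": seniors.pop(),       # stratification guarantees a senior leader
            "Airway Manager": juniors.pop(),
            "Circulation Manager": juniors.pop(),
        })
    return triads

# Hypothetical residents stratified by training level.
print(assign_roles(["PGY3-A", "PGY3-B"], ["PGY1-A", "PGY1-B", "PGY2-A", "PGY2-B"], seed=1))
```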

Debriefing was standardised and emphasised advocacy inquiry, following the ‘Debriefing with Good Judgement’ model developed by Rudolph et al 11 in which all instructors received prior training.12 Throughout the study, we utilised ‘Case Four’ (Pulseless Electrical Activity) of our simulation curriculum, with variable lead-in case narratives.10 For the first half of the study, simulations were performed in situ in the paediatric intensive care unit and subsequently moved to the hospital’s dedicated simulation centre, in which rooms and equipment mirror inpatient design.

Study instruments

Participants completed anonymous surveys and knowledge assessments before and after each simulation and again at 4–6 month follow-up. Residents were incentivised to return the follow-up survey with US$20 Amazon gift cards. Data collected included demographic information, training year, prior resuscitation training and experience, resuscitation-based confidence, knowledge and skills, and attitudes towards simulation.

To assess the acceptability of each instructional design, we queried self-reported confidence, knowledge, skills and attitudes with five-point Likert Scales adapted from the prior RPS study.7 Skills-specific confidence was further assessed with eight questions anchored on five-point Likert Scales and adapted from the objective performance tool (Tool for Resuscitation Assessment Using Computerised Simulation (TRACS)) described below.13 Investigators modelled the eight-question knowledge assessment on the Paediatric Advanced Life Support (PALS) certification examination, though one question was discarded from analysis due to changes in resuscitation guidelines.

Simulation instructors rated individual residents’ real-time performance in simulated resuscitations (initial scenario only for the RPS group) with modified versions of the validated TRACS.13 The TRACS comprises four domains (Basics, Airway, Circulation and Arrhythmias and Behaviour), with intraclass correlation ranging from 0.7 to 0.76 for each domain. The original tool lists 72 items, and resident performance in each domain is scored as the percentage of items marked ‘yes’. We utilised the TRACS for its intended purpose of real-time assessment, as performed by our simulation instructors.13 We modified the tool to suit the learning objectives of our scenario, eliminating 32 inapplicable items related to intubation and defibrillation skills. Because each resident performed only one role, they were scored only in the applicable TRACS domain(s). We awarded points for a given skill if it was directed by the designated team member, even if not performed by that resident directly. Subsequently, two blinded reviewers (paediatric emergency medicine fellows) reviewed a randomly selected subset of eight videos (four STN and four RPS) using our modified TRACS to score individual resident performance. They were compensated with US$250 each for their video reviews.
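
As an illustration of how such a checklist reduces to domain scores, the sketch below computes the percentage of applicable items marked ‘yes’. The item names are hypothetical placeholders, not the actual TRACS items.

```python
def score_domain(items):
    """items: dict mapping checklist item -> True if performed (or directed)."""
    if not items:
        return None                          # domain not applicable to this role
    return 100.0 * sum(items.values()) / len(items)

# Example: a Circulation Manager is scored only in the Circulation domain.
# Credit is given if the resident directed the skill, even when another
# team member physically performed it.
circulation = {
    "initiates chest compressions": True,
    "checks for a pulse": True,
    "requests epinephrine": False,
}
print(f"Circulation score: {score_domain(circulation):.0f}%")   # -> 67%
```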

Statistical analysis

Data were examined for normality, and subsequent analyses were performed with both parametric and non-parametric tests. We used the Mann-Whitney U test to analyse residents’ self-assessed improvements and attitudes towards simulation, paired t-tests to analyse improvements in confidence, knowledge and performance over time, independent sample t-tests to compare improvements between groups and intraclass correlation coefficients for TRACS reliability. All statistical analyses were performed using SPSS V.16.
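
For readers wishing to reproduce the approach, the following is a minimal sketch of these analyses in Python with fabricated placeholder data. The study’s analyses were run in SPSS V.16, and the ICC(2,1) formula shown is one common variant; the paper does not specify which ICC form was computed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data only -- shapes mirror the study (n=18 STN, n=15 RPS).
stn_att = rng.normal(3.8, 0.6, 18)        # attitude ratings, STN group
rps_att = rng.normal(4.0, 0.6, 15)        # attitude ratings, RPS group
pre = rng.normal(3.0, 0.7, 33)            # confidence before a scenario
post = pre + rng.normal(0.5, 0.4, 33)     # confidence after the scenario

print(stats.mannwhitneyu(stn_att, rps_att))          # attitudes, between groups
print(stats.ttest_rel(pre, post))                    # paired change over time
print(stats.ttest_ind(post[:18] - pre[:18],          # improvements, STN vs RPS
                      post[18:] - pre[18:]))

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, for an
    (n subjects x k raters) matrix (Shrout & Fleiss)."""
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = rng.uniform(50, 100, (8, 3))    # 8 videos x 3 raters (instructor + 2 reviewers)
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```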

Results

A total of 33 residents participated in the study (table 1), providing power to detect a 40% difference between RPS and STN groups, consistent with the effect sizes reported in existing RPS research.

Table 1

Resident characteristics

Residents’ self-assessed improvements and attitudes about the simulation experience were rated highly for both STN and RPS designs, with no significant differences between groups (tables 2 and 3).

Table 2

Self-assessed improvements

Table 3

Resident attitudes towards simulation

Residents demonstrated significant improvements in confidence and knowledge after each scenario (table 4). These improvements were retained after 4–6 months, although only 11 participants responded to the follow-up assessment. Similarly, objective performance of resuscitation skills, as measured by the TRACS, improved significantly between the first simulation and 2 week follow-up.

Table 4

Confidence, knowledge, and performance improvements among all residents

Subgroup analysis comparing STN and RPS designs revealed some differences between the two groups’ improvements in confidence, knowledge and performance over time (table 5). Specifically, the STN group did not demonstrate significant gains in medical knowledge with the second simulation, and the RPS group’s gains in long-term confidence and knowledge were non-significant. However, there were no significant differences between groups in the magnitude of improvement on any measure.

Table 5

Confidence, knowledge, and performance improvements comparing groups

In order to maintain participant blinding to our hypothesis, we did not query instructional design preference until the final follow-up survey. Although only 10 participants responded to this question (six STN and four RPS), all reported that it would be ‘helpful’ or ‘very helpful’ to perform RPS instead of STN.

Independent video review revealed that the intraclass correlation of our modified TRACS was 0.87 among all reviewers and 0.74–0.75 between the instructor and each video reviewer.

Discussion

Comparing RPS to STN

Whether they participated in RPS or STN designs, all final respondents (n=10) reported that they would prefer to perform simulations with an RPS design. This response, though answered hypothetically by the six respondents exposed only to the STN design, nevertheless reflects the appeal of RPS among learners. In all other respects, we found RPS to be effective and comparable to STN. Residents in both groups felt strongly that simulation improved their confidence, knowledge and skills in acute resuscitation. Furthermore, both simulation as an instructional methodology and the quality of the simulation experience were rated comparably highly in the RPS and STN groups. With regard to objective performance, minor differences emerged when comparing RPS and STN, but there were no differences between groups in the magnitude of improvement at any time point.

Our findings corroborate prior research demonstrating paediatric resident preference for RPS. In contrast to our study, however, Auerbach et al 7 reported benefits of RPS over STN: paediatric residents performing RPS in their investigation reported higher overall debriefing quality and greater improvements in self-assessed knowledge and skills (but not confidence or performance), and were more likely than those performing STN to report that the simulation session was an excellent method of teaching.7

These differences between our findings and those of Auerbach et al may reflect true parity between RPS and STN but are more likely attributable to our small sample size. We originally targeted a sample size of 60 participants for 80% power to detect a 20% difference between RPS and STN groups. However, logistical constraints limited our sample size, leaving power to detect differences of 40% or greater between groups. While this effect size reflects the findings of Auerbach et al, smaller differences between RPS and STN groups were likely not captured in this educational cohort study.
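
As a rough illustration, the minimally detectable difference can be back-calculated from the achieved group sizes. The sketch below assumes a 50% baseline proportion, an assumption for illustration only, as the paper does not report the inputs of its power calculation.

```python
import numpy as np
from statsmodels.stats.power import NormalIndPower

analysis = NormalIndPower()

# Smallest detectable effect (Cohen's h) with the achieved groups
# (n=18 STN, n=15 RPS) at 80% power, two-sided alpha = 0.05.
h_min = analysis.solve_power(nobs1=18, ratio=15 / 18, power=0.8, alpha=0.05)

# Convert h back to a proportion difference, assuming a 50% baseline:
# h = 2 * (asin(sqrt(p2)) - asin(sqrt(p1)))
p1 = 0.5
p2 = np.sin(np.arcsin(np.sqrt(p1)) + h_min / 2) ** 2
print(f"h = {h_min:.2f}; detectable difference ~ {100 * (p2 - p1):.0f} points")
# -> roughly a 40-point difference, consistent with the text above
```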

Impact of the simulation curriculum

Considering the impact of the simulation curriculum as a whole (both RPS and STN groups), residents demonstrated significant improvements in their confidence and medical knowledge with each simulation session. Compared with baseline, these improvements were retained after 1–2 weeks and after 4–6 months. In addition to the expected retention of confidence and knowledge, we found that residents performed significantly better on their second simulation (after 1–2 weeks), demonstrating short-term retention of objective performance gains.

These findings are consistent with prior research on paediatric high-fidelity simulation demonstrating significant short-term retention of simulation-based knowledge and skills. In their meta-analysis, Cheng et al 4 found significant effect sizes for medical knowledge and simulation performance when comparing studies of simulation-based training with no intervention. The positive effects on performance were retained across varying study features, including trial designs, patient age groups, simulation settings and participant characteristics.

In assessing retention of knowledge and skills, Mills et al found that paediatric residents performing 16 hours of high-fidelity simulation training retained a net 19% gain in cognitive knowledge and procedural competency and a 16% gain in group resuscitation performance at a median follow-up of 1 year.14 Similar studies of intensive simulation-based mastery learning have demonstrated skills retention up to 12–14 months among Internal Medicine residents,15 16 whereas shorter (1–2 hours) sessions among paediatric residents resulted in linear decline of performance, with 92% retention at 2 months and 56% at 6 months.17 While this curriculum was not designed as mastery training, we noted overall retention of improvements through the 4–6 month follow-up assessment, suggesting that shorter sessions may hold greater long-term efficacy for paediatric residents than previously reported.

To assess resident performance, we utilised a modified, brief version of the TRACS comprising 40 skills pertinent to our learning scenario. By using our modified TRACS for the tool’s intended purpose of real-time assessment, we were able to demonstrate its reliability as a measure of objective performance. The high intraclass correlation between the instructor and each video reviewer also suggests that, despite the investigators’ dual roles as simulation instructors, their TRACS ratings did not suffer substantially from bias, and that this may be an effective tool for use in simulation-based medical education.

Strengths and limitations

Our study’s internal validity was strengthened by standardising the simulation scenario and debriefing and by ensuring that all instructors were trained in advocacy inquiry. Study outcomes (including all measures of attitudes, confidence, knowledge and skills) were identical throughout the study. Finally, survey instruments were adapted from prior research, PALS certification exams and validated assessment tools.

Study limitations include variability in the location of performed simulations, as we changed from in situ simulations to a dedicated simulation space. Additionally, after switching locations, we allowed instructors to use video review at their discretion. The location change was identical in timing for both groups, whereas discretionary video review may have introduced differential bias into the study design. Our study investigators also served as simulation instructors and thus were not blinded to group allocation. We were limited in our long-term follow-up assessment and could not reassemble simulation teams at 4–6 months to perform a third simulation as originally intended. Despite incentives, only one-third of residents completed the final survey. Lastly, we were not powered to detect small but potentially meaningful differences between STN and RPS groups. Similarly, owing to our small sample size, we did not control for potential confounders of long-term outcomes, such as PALS training or differential clinical exposures among trainees.

Future educational studies comparing STN and RPS may consider enrolling across multiple sites to increase power. Long-term assessments may maximise follow-up by capitalising on times when learners assemble, such as training-wide orientations or working retreats. Additionally, given the literature on simulation decay, long-term follow-up should extend beyond 1 year.

Conclusions

Simulation for paediatric residents leads to lasting improvements in confidence, knowledge and skills, with similar impacts of RPS and STN methodologies. The modified TRACS was a reliable instrument to measure objective individual resident performance in simulated resuscitations. Although larger studies are needed to determine whether RPS is a more effective format for adult learners as has been previously suggested, paediatric residents in our study reported they would prefer RPS to STN designs. Educators may consider incorporating RPS into paediatric simulation curricula, in keeping with emerging evidence and theories of adult learning.

Acknowledgments

We extend our deepest gratitude to Pamela Corey, Andrew Camerato and Elena Cotto at the Boston Medical Center Solomont Center for Clinical Simulation and Nursing Education for their assistance. We thank Dr Atsuko Koyama and Dr Megan Mickley for their review of the TRACS videos. We are indebted to Dr Daniel Tsze and Dr David Kessler for their statistical guidance and critical appraisal of this work.

Footnotes

  • Contributors The authors certify that STS is responsible for the original conception and design of this work, acquisition and analysis of data, writing the primary draft of this work and leading critical revisions to its content. KAP and RJV made substantial contributions to the design of the work, the acquisition and interpretation of data and critical revisions of its drafted content. All authors approve the final version submitted and are accountable for all aspects of the work and its integrity.

  • Funding This research was supported by an American Academy of Pediatrics Resident Research Grant.

  • Competing interests None declared.

  • Ethics approval Boston Medical Center Institutional Review Board.

  • Provenance and peer review Not commissioned; externally peer reviewed.
