Background Good communication in healthcare, both between professionals and between professionals and patients, is important in delivering high-quality care. Translation of technical skills taught through simulation into the clinical environment has been demonstrated, but the evidence for the translation of communication skills is less well established.
Objectives To identify and critically appraise the evidence for the impact of communication taught through simulation-based education (SBE) and use this evidence to suggest a model for future SBE interventions for communication skills.
Study selection MEDLINE, CINAHL, EMBASE and PsycINFO were searched for articles pertaining to communication skills taught through simulation. A content expert was consulted to suggest additional studies. A total of 1754 studies were initially screened for eligibility, with 274 abstracts screened further. 147 full-text articles were then assessed for eligibility, with 79 of these excluded. The remaining 68 studies were reviewed, and 18 were included in the qualitative synthesis as studies designed to show benefits beyond the simulation centre.
Findings The 18 identified studies with an impact at a Kirkpatrick level of ≥3 are analysed: 4 looking specifically at communication between healthcare professionals and 14 at communication between health professionals and patients or relatives.
Conclusions There is some evidence that the improvements in communication taught through simulation can be translated into benefits measurable beyond the simulation centre, but this evidence is limited due to the way that most of the studies are designed. We suggest a model for SBE aimed at teaching communication skills that is informed by the current evidence and takes into account the need to collect higher-level outcome data.
- simulation training
- education medical
- education nursing
- patient simulation
Good communication between healthcare professionals and between healthcare professionals and patients is important in delivering high-quality care. Effective communication between healthcare professionals and patients improves patients’ outcomes.1 Communication failure among healthcare professionals is reported to be the major contributing factor in >50% of ‘never-events’ involving patients.2 3
There is evidence that team training in human factors, including communication, can have a positive impact on patient outcomes.4 Simulation is an effective tool for team training and is now formally embedded in medical school curricula worldwide.5
Kirkpatrick’s model6 has been used widely to evaluate the impact of educational programmes. The model describes four levels of training evaluation: reactions, learning, behaviours and results, with the fourth level corresponding to patient-oriented outcomes in the context of healthcare. Although other evaluation models exist in the current literature, Kirkpatrick’s model was chosen for its long-tested validity and its application to training programmes in medical education over several decades. In simulation-based education (SBE), benefits have mainly been demonstrated at levels 1 and 2, while levels 3 and above have mostly been demonstrated for the acquisition of technical skills.7 8 As well as training in technical skills, McGaghie et al suggest that simulation is equally effective for teaching communication skills, not only between healthcare professionals but also with families, because ‘the ability to engage a family in a difficult conversation about end-of-life issues is a clinical skill amenable to SBE just like inserting a chest tube’.9
There is limited research focusing on the impact of SBE on improving non-technical skills, and on communication skills in particular. A few systematic reviews have looked at simulation training aimed at improving communication skills for healthcare professionals.10–12 The systematic reviews conducted so far have summarised the outcomes of simulation training in general and in skill acquisition for healthcare professionals, including communication. Despite the existence of guidelines and toolkits13 and thousands of SBE courses for communication across the world, we believe there is a need to search the literature for models of best practice within simulation-based courses for enhancing communication skills. These models can then be applied and replicated in different centres interested in using simulation to improve communication among healthcare professionals and between healthcare professionals and patients.

We conducted a literature review to identify what impact SBE has on improving communication skills, using Kirkpatrick’s hierarchy. We wanted to identify best practice in this field to inform further research and to guide educators currently using simulation to deliver communication skills learning, so that they obtain maximum impact from their SBE programmes. The primary objective of this study was to identify SBE models that have demonstrated a positive impact on the communication skills of healthcare professionals, whether interprofessional or when interacting with patients, at a high level of Kirkpatrick’s hierarchy. The secondary objective was to identify good practice within the studies reviewed in order to formulate common good practice characteristics that simulation educators can incorporate into their courses to achieve positive educational outcomes. Key messages from this review are summarised in box 1.
Key messages from this review
The majority of simulation research currently appears to explore benefit only at Kirkpatrick levels 1–2a.
Future studies should be designed to look for impact beyond the simulation centre.
The effect of repeat ‘dosing’ of simulation education observed in the technical skills literature is hinted at in the communication skills literature.
A well-designed communication skills simulation intervention should
consider the effects of repetition of a shorter intervention rather than a single longer intervention
collect outcome data using validated assessment tools
measure outcomes that are patient-oriented or in the clinical environment
measure outcomes after a ‘decay phase’ once the intervention has finished.
Ethical approval for this work was not sought as it did not directly involve any participants.
A systematic literature search was conducted with the help of three librarians in our institution. The search was performed in four databases, MEDLINE, EMBASE, CINAHL and PsycINFO, from first records to November 2016. The terms used were ‘simulation’ or ‘simulation training’ and ‘communication’. The strategy was to perform an individual search of each of the above databases, using a combination of these terms in the abstract or title. The detailed search strategy is provided in box 2. Limits were then applied in the MEDLINE database, as specified in box 2. The strategy aimed to provide the best combination of specificity and sensitivity to identify relevant papers. The identified abstracts were manually screened by two researchers and, after exclusion of ‘irrelevant’ abstracts (such as those relating to the computer science literature), those identified as ‘relevant’ were read by two independent researchers. Inclusion criteria for this review allowed all study designs and any healthcare professionals at postgraduate or undergraduate level. Outcomes at any level of Kirkpatrick’s hierarchy were considered, but the stated aim of the review was to synthesise data from the higher levels of Kirkpatrick’s hierarchy. Higher Kirkpatrick levels relate to behavioural change, changes in organisational practice or direct benefits to patients.6
Search strategy used within databases
(communication).ti AND (simu*).ti,ab 812 results
Limits applied [Document type Clinical Trial OR Comparative Study OR Conference Paper OR Consensus Development Conference OR Consensus Development Conference, Nih OR Controlled Clinical Trial OR Editorial OR Evaluation Studies OR Introductory Journal Article OR Journal Article OR Meta-analysis OR Multicenter Study OR Observational Study OR Pragmatic Clinical Trial OR Randomized Controlled Trial OR Report OR Validation Studies] 807 results
A further search using exploded MeSH headings yielded 460 further results, none of which met the inclusion criteria:
(COMMUNICATION/ AND exp "SIMULATION TRAINING"/) NOT ((communication).ti AND (simul*).ti,ab) 460 results
exp "SIMULATION TRAINING"/ AND exp COMMUNICATION/ 131 results
*COMMUNICATION/ AND exp SIMULATIONS/ 267 results
(simul*).ti,ab AND (communication).ti 549 results
Educational interventions were included if they related to the teaching of communication skills, either in the context of interprofessional communication or communication between professionals and patients/carers. It has been argued that interprofessional communication and communication between healthcare professionals and patients belong to different communication skill domains.13 For the purpose of this study, where the aim was to identify models that can work and be replicated by simulation educators teaching communication as a skill regardless of the domain, both types of communication were studied, and neither skill was excluded from the literature search. The two domains were, however, considered separately during the qualitative synthesis. The search did not exclude papers based on country, language of publication or date of study. Papers related to simulation outside the healthcare setting were excluded. Studies in which communication skills were not the focus, or in which simulation was used as a validation tool rather than as part of the educational intervention, were also excluded. For example, numerous identified studies that taught communication skills in small-group seminars and assessed candidates in a simulated setting before and after were excluded, as they did not demonstrate outcomes beyond the simulated assessment. Many of these studies also did not inform the reader of the impact of SBE on enhancing candidates’ communication skills.
Citations were reviewed by AB and EK, with papers unrelated to the review topic excluded. A list of potentially relevant papers was exported and the abstracts were divided between AB, EK and MP. The abstracts were then reviewed for possible inclusion, as described above. Those which met the inclusion criteria were obtained in full-text form and assessed. If clarification was needed during the review of abstracts, arising from a possible disagreement between the reviewers, MP was consulted for a final decision on inclusion or exclusion. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model was used throughout this process and a flow chart of the results is provided in figure 1. The quality of each of the studies included in the final synthesis was assessed and coded using the BEME coding sheet.14 Each study was coded according to its Kirkpatrick level and BEME strength of association. Formal training in the use of the BEME coding sheet was not undertaken by AB and EK, but MP had used the process during previous reviews and guided the other reviewers. The group met to discuss the first five papers they had coded to improve inter-rater reliability, and MP acted as an arbiter in the event of doubt or disagreement. Regular meetings were scheduled to discuss any issues in coding.
Studies identified following the search were divided into two categories according to the communication skills taught: interprofessional communication and communication between healthcare professionals and patients. Outcomes from the papers included in the coding process were synthesised by the researchers going through each column of the BEME coding sheet, trying to identify common themes in the studies achieving higher levels of Kirkpatrick’s model. The synthesis took place in two parts. An initial process of individual search for common themes, trends and outcomes with note keeping was followed by a group meeting with discussion on the individual observations and conclusions. After this stage, a common decision guide was established for the three researchers, recorded in the discussion minutes and distributed among the members of the team. Throughout the process, when differences in interpretation of results occurred between the researchers, a meeting was called with MP who acted as a final adjudicator. The papers in question were then re-reviewed and reanalysed in order to minimise heterogeneity in outcome interpretation and resolve inconsistencies. After finalising the manuscript’s results and conclusions, a meeting was arranged where the results and conclusions were audited against the initial aims set by the authors when conducting the initial literature search. During the synthesis process, we incorporated models of systematic review writing suggested by experts in the educational field and mapped our steps on the STORIES statement for systematic reviews in education.15
The initial search identified 1754 citations, which were read by AB and EK. Many were from a non-healthcare setting, and after screening for possible relevance, 274 articles remained. The abstracts for the 274 articles were exported and divided between AB, EK and MP. The earliest exported paper was from 1977. A further 127 abstracts were excluded at this stage and 147 full-text manuscripts were examined.
The 147 full-text articles were reviewed and a further 79 were excluded as not relevant. The remaining 68 articles were categorised using the BEME coding sheet and the summary of the characteristics is included in online supplementary table S1. The earliest included article was from 1997.
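The screening flow above amounts to simple arithmetic; as an illustrative check only (the stage names and the small helper below are our own, not part of the review methodology), it can be tallied as:

```python
# Illustrative sanity check of the screening flow reported in the text.
# Each tuple: (stage, records remaining at that stage).
flow = [
    ("citations identified", 1754),
    ("abstracts screened", 274),
    ("full texts assessed", 147),         # 274 abstracts - 127 excluded
    ("articles coded (BEME)", 68),        # 147 full texts - 79 excluded
    ("studies in qualitative synthesis", 18),
]

def check_flow(stages):
    """Confirm counts decrease monotonically, as a PRISMA flow requires."""
    counts = [n for _, n in stages]
    return all(a >= b for a, b in zip(counts, counts[1:]))

assert check_flow(flow)
assert 274 - 127 == 147 and 147 - 79 == 68
```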
Supplementary file 1
There was significant heterogeneity between the studies both in terms of participants and methodology. Of the 68 articles, 45 described educational interventions aimed at postgraduate healthcare professionals and 28 were for undergraduates. In total, 35 were for candidates of a medical background, 14 for nursing and 19 were for multidisciplinary candidates. Of the articles focusing on postgraduates, 27 involved doctors, 5 involved nurses and 13 were multidisciplinary.
Of the 68 articles, 39 had educational interventions that used simulation as part of a wider programme of lectures, didactic sessions and small group teaching. The remaining 29 articles had interventions that used simulation with debrief alone.
Of the papers included in this review, the majority (50) of the studies were, by design, only able to demonstrate impact at a Kirkpatrick level of 1–2b.
There were 18 papers looking for an impact at a Kirkpatrick level of ≥3, with 4 looking specifically at communication between healthcare professionals and 14 looking at communication between health professionals and patients or relatives. Although communication skills would appear transferable, previous published works13 have delineated the different competencies required for interprofessional communication compared with communication between health professionals and patients. With this in mind, we will divide the papers along these lines.
Health professional–patient communication
The 14 papers looking at communication between health professionals and patients or relatives had varying BEME scores for strength of conclusions, which are summarised in online supplementary table S2. One paper16 achieved a BEME score of 1 (‘no clear conclusions can be drawn’) while five17–21 achieved a score of 2 (‘results ambiguous but there appears to be a trend’). Six papers22–28 achieved a score of 3 (‘conclusions can probably be based on the results’) and two29 30 achieved a score of 4 (‘results are clear, likely to be true’).
Supplementary file 2
One of the papers with a BEME score of 4 showed significantly improved communication scores on directly observed procedural skills in real patients following a virtual reality simulation-based intervention for endoscopists. The other paper with a BEME score of 4 was a large randomised trial of doctors and nurses, which was negative for its primary endpoint of patient-reported quality of communication and actually showed a small worsening in depressive symptoms in the intervention group. This difference, however, could be accounted for by a difference in the seniority of participants between the intervention and control groups: the most senior clinicians had patients with the lowest depression scores and were over-represented in the control group.
The use of patient and family member questionnaires was a recurring theme throughout the health professional–patient communication articles, with seven articles in total17 24–29 using this approach to assess participants’ postintervention communication skills. Sullivan et al showed that their SBE intervention for internal medicine residents looking after intensive care unit (ICU) patients had a significant benefit for the relatives of those patients, thus reaching Kirkpatrick level 4b; this effect was only demonstrated when the candidate had attended more than one session. Mitchell et al measured patients’ satisfaction with anaesthetic residents’ communication before and after an SBE intervention and found a modest but statistically significant improvement. Shaw et al describe an improvement in communication between ICU staff and families following a simulation and classroom-based intervention; this improvement, however, only reached statistical significance in one subcategory of the family questionnaire. Kruijver et al describe a case-controlled trial investigating whether a simulation-based intervention improved ‘affective and instrumental’ communication between nurses and patients using a patient questionnaire; their study found no difference between the groups.
Another approach to assessing effect postintervention was observation of participants by independent raters using checklists or standardised scales. Five papers described this approach in interactions with real patients16 19 23 24 30 and three described independent observation in the simulated environment.18 22 28 Fujimori described the use of blinded observers with standardised checklists watching videotaped real consultations to demonstrate an improvement in the intervention group following a simulation-based communication skills course. Hsu showed an improvement in OSCE scores among qualified nurses following their simulation-based intervention, but the improvement became non-significant at subsequent follow-up. Interestingly, this was because of an improvement in the control group rather than deterioration in the intervention group.
One paper in the review used multiple measures of effect, explicitly linked to Kirkpatrick’s hierarchy.28 Zabar et al collected level 1 and 2b data with participant questionnaires recording attitudes towards their intervention, which was a combination of simulation using simulated patients and small group teaching sessions on taking drug, alcohol and sexual histories. Kirkpatrick level 3 data were collected using independent raters in OSCE stations assessing communication and a chart review of real patients to see if the intervention resulted in more detailed screening for sexual histories and screening for drug and alcohol use. They attempted to collect Kirkpatrick level 4 data using a standard patient satisfaction survey, but there was no improvement from an already high baseline.
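Zabar et al’s multi-measure design can be summarised as a mapping from Kirkpatrick level to outcome measure. The sketch below is an illustrative paraphrase only; the level labels and wording are ours, drawn from the description above:

```python
# Illustrative only: each Kirkpatrick level in Zabar et al's design
# mapped to the outcome measure the text describes for it.
kirkpatrick_measures = {
    "1/2b": "participant questionnaires on attitudes to the intervention",
    "3": "independent OSCE raters plus chart review of real patients",
    "4": "patient satisfaction survey (no change from a high baseline)",
}

# The study is notable for collecting data beyond levels 1-2a:
higher_levels = [lvl for lvl in kirkpatrick_measures if lvl in ("3", "4")]
assert higher_levels == ["3", "4"]
```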
Interprofessional communication

There were four papers looking at interventions to improve communication between health professionals that were designed to assess impact at Kirkpatrick level 3 and above;31–34 the characteristics of these papers are summarised in online supplementary table S3. Three of the papers achieved a BEME score of 3. Dadiz et al demonstrated improved communication between paediatric and obstetric residents, measured on a standardised scale by independent observers in the simulation centre setting; they also showed an improvement in self-reported communication skills in the clinical environment. Paull et al described improved scores for communication descriptors on standardised scales following a simulation-based CRM intervention; these effects were assessed within the simulation centre. Weller et al used a behavioural marker risk index score, which included communication elements, to demonstrate improved communication skills following their simulation interventions; these observations were performed in the clinical environment and were maintained after the simulation programme finished.
Supplementary file 3
One of the interprofessional communication papers achieved a BEME score of 4. It described improvements in communication between members of real trauma teams, measured using standardised scales. The intervention consisted of regular in situ trauma team simulation with debriefs focusing on team communication. The study demonstrated a statistically significant improvement from baseline in team communication on a validated scale once regular in situ simulation was running, but the effect was sustained for only 5 weeks once the in situ exercises ceased, after which team communication returned to baseline.
One of the themes that emerged from the four papers looking at interprofessional communication was the use of simulators as opposed to simulated patients, in contrast to the health professional–patient communication programmes which almost all used simulated patients.
Three of the four interprofessional communication programmes involved repeat sessions and the one that showed the biggest real-world impact32 also showed a decline to baseline after the regular sessions had finished. That programme was also the only one in our review to deliver the training in situ instead of training in the simulation centre then looking for an impact in the clinical setting.
Our review demonstrated the wide body of research into SBE interventions aimed at improving communication skills. This review found that the majority of research in this area continues to be at Kirkpatrick levels 1–2a (see online supplementary table S1), with many of the studies reporting qualitative data only. The studies in this review that did attempt to look for transfer of skills (Kirkpatrick 3–4) varied considerably in their approach.
Generally, studies that looked at interprofessional communication31–34 used checklists and independent raters, while those addressing communication with patients used either independent observation16 18 19 22–24 28 30 or questionnaires completed by patients or relatives.17 20 24–29 Where checklists were used, there were clearly stated efforts to reduce measurement error and assess inter-rater reliability, in keeping with best practice.35 The questionnaires in these studies were generally of good quality and the description of their use was clear and transparent, in line with good practice.36
Many of the studies22 23 25 27 31–34 used a before and after design rather than a study and control group. This was a necessary concession for the studies looking at interprofessional communication in the clinical setting. It may be more reasonable to use a comparative study design when studying the communication between a single clinician and one patient/carer.
One of only three studies to show an objectively measured significant improvement in real-world communication skills following a simulation intervention also showed a regression to baseline 5 weeks after the regular simulation interventions ceased.32 Another study that showed an improvement in relatives’ perception of participants’ communication skills following a simulation intervention only did so after the participants had attended multiple sessions.27 This highlights a potential benefit of repeat ‘dosing’ of simulation interventions and the limitations of what a single session can achieve.37–39 Disappointingly, two studies20 29 that included pre- and postintervention depression and anxiety scores among their patient-focused secondary outcomes demonstrated a worsening of symptoms in the groups exposed to the simulation intervention. These worsened outcomes, however, can be accounted for by baseline differences between the groups.
A number of studies of technical skills taught through simulation37–40 have shown that repeat ‘dosing’ of the educational intervention is required to achieve a sustained effect. Some of the studies in this review27 32 have hinted at the existence of this phenomenon in communication skills taught through simulation. Future research should consider reviewing the effectiveness of the intervention after allowing for a ‘decay phase’.32 SBE programmes designed to teach communication skills should consider modelling their programmes on multiple shorter sessions at discrete intervals rather than one long session.
It was interesting to note that only one of the studies in this review employed in situ simulation in developing communication skills and this study was one of the few that showed a significant improvement at Kirkpatrick level 3. There is growing evidence that in situ simulation is a powerful tool to embed learning in the workplace,41 and we believe that this tool will see further growth.
There is a paucity of information on how to design a simulation intervention that can produce benefits at Kirkpatrick level 3 and above. Despite this paucity of data in our review of the literature, some key findings emerged to help guide simulation-based communication skills training towards such higher-level benefits (see box 1). Consideration should be given to designing and delivering shorter sessions at a higher frequency. The design should include careful consideration of how the outcomes of the intervention are to be measured, using validated and reliable assessment tools applied by independent raters. The measured outcomes should be patient-oriented and extend beyond the simulation centre into the clinical environment. Outcomes should be reassessed after a period of time to allow for degradation of acquired skills following a ‘decay phase’.
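The design principles in box 1 can be gathered into a single checklist. The sketch below is purely hypothetical; the field names and default values are illustrative assumptions, not recommendations drawn from the evidence:

```python
# Hypothetical sketch of a communication-skills SBE study design embodying
# the principles in box 1: repeated short sessions, validated assessment
# tools, patient-oriented outcomes and a post-intervention "decay phase".
from dataclasses import dataclass

@dataclass
class StudyDesign:
    session_length_min: int = 60        # shorter, repeated sessions...
    sessions: int = 4                   # ...rather than one long session
    assessment_tool: str = "validated scale, independent raters"
    outcome_setting: str = "clinical environment (patient-oriented)"
    decay_phase_weeks: int = 6          # reassess after the decay phase

design = StudyDesign()
assert design.sessions > 1 and design.decay_phase_weeks > 0
```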
Our review raises the challenging question of whether journals should continue to publish simulation research that is designed to capture benefit only at Kirkpatrick levels 1–2a. These descriptive, low-impact studies gather data that are important for refining programmes locally but add little to the wider body of evidence. Published works10 42–44 have suggested that future simulation research should aim for higher-level evidence. Future studies should be designed to look for impact beyond the simulation centre and a benefit in the clinical setting. This will become more important as squeezed budgets require more proof of a return on investment.42 44 A significant initial investment may be needed to power these studies adequately to arrive at valid and reliable conclusions.
Limitations of this review
The topics of communication and simulation when used as search terms returned a large number of non-healthcare-related articles. Attempting to refine these further resulted in a large number of relevant studies being omitted from the search results. This meant that the search terms had to be kept broad and the citations reviewed manually. This may have resulted in some studies being erroneously excluded by our reviewers at the first stage due to the laborious nature of the task. The degree of heterogeneity in both study design and outcome measure precluded any attempt at meta-analysis. The decision to include studies involving communication between clinicians and patients as well as interprofessional communication further increased this heterogeneity. Even within these two distinct areas, there was significant heterogeneity between interventions that prevented pooling of data.
More robust research into simulation-based communication skills training is required, with particular effort to focus attention on the effects of this training beyond Kirkpatrick level 2a. There is some evidence that a well-designed simulation intervention to teach communication skills can demonstrate benefit at Kirkpatrick level 3, although more such studies are needed to confirm this benefit.
Contributors MP suggested the theme of the review as well as acting as a subject expert in matters of arbitration between reviewers. She also suggested additional papers for consideration and edited the manuscript prior to submission. AB undertook the literature search and reviewed and coded individual articles along with EK. He wrote the majority of the manuscript and made changes suggested by the coauthors at each stage of the draft process. EK undertook the literature search and reviewed and coded individual articles. She also wrote the introduction section of the manuscript.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.