Background Simulated learning environments are increasingly common in interprofessional education (IPE). While reflection is key to simulated learning, little is known about the nature of these conversations during simulation. The aim of this exploratory paper was to quantify communicative features of conversations during interprofessional simulation scenarios between dietetics students, speech-language therapy students and their educators.
Methods Conversations between students and educators during the pauses between simulated scenario phases were recorded and transcribed. Student and educator utterances were quantitatively analysed for speech acts, question types and elements of IPE (clinical reasoning, roles and responsibilities, client and family centred care, interprofessional collaboration, clinical procedural tasks).
Results Across 1340 utterances from six scenarios, analyses of conversational speech acts and question types highlighted similar patterns of usage between the two educators despite different clinical scenarios and professional backgrounds. Educators used a marginally higher proportion of open than closed questions, and higher-level problem-solving questions predominated over simple factual questioning. Educators used more requests for action and attention, and students displayed more performative and responsive acts (p<0.05). Students were exposed to all elements of IPE through conversations in all scenarios.
Conclusions Conversations during pauses in immersive simulated scenarios between educators and students enable rich IPE opportunities and higher-level problem-solving. Educators encouraged students to problem solve within and across disciplines with open questions. Educators provided few factual responses themselves, instead diverting questions back to the students. This approach to the analysis of conversation can support educators to evaluate their own communication during interprofessional simulations.
- interprofessional education
- simulation based learning
- allied health
The importance of debrief in simulated learning activities is well established in the literature, providing opportunities for enhanced student learning.1–4 The literature describes different types of debrief: in-simulation, postsimulation, self-facilitated and video-assisted simulation.1 5 The essence of how to structure a debrief is outlined in the literature, and several researchers have proposed different frameworks.6–8 Despite this wealth of simulation research, there has been little focus on the conversations during debrief phases of simulation.9 While reflective strategies used by educators during classroom teaching, clinical practicum and simulation are well described in the literature,2 10 surprisingly little specific detail has been provided on simulation debriefing. One exception is a recent paper that provides examples of utterances educators might use during postsimulation debriefing.11 The authors warn of the potential harm to learners of difficult debriefing situations and offer a toolbox of strategies to guide the educator.
The philosophy behind debriefing is engagement of students’ critical reasoning. Snyder and Snyder highlight the role an educator plays in supporting the development of students’ critical thinking skills by using comments and questions.12 They propose that educator strategies need to support students to analyse, synthesise, evaluate and reflect on their knowledge and decision-making process.12 13 These strategies have been characterised as speech acts: acts of language used to produce an effect in a listener. Astute use of these acts enables the educator to engage the student’s higher-order thinking.14 15 O’Connor and Michaels expand on this educational concept by describing the nature of the interactions between educator and student.16 They describe an optimal conversational exchange as ‘ideologically dialogic’, occurring when there are equal social relationships, intellectual openness and opportunities for creative thought.16 This pattern of discourse is detailed by Mortimer and Scott, who describe the initiation, response, feedback (IRF) framework.17 Responses are chained together to form initiation, response, feedback, response, feedback loops, in which the educator repeats the student’s comment, prompting the student to elaborate. Deeper analysis of these discourse acts has the potential to enable educators to explore students’ thinking processes and understand the effect of educator facilitation on students’ thinking.18
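The chained I-R-F pattern can be sketched with a deliberately naive labelling rule. The speaker codes and the rule itself (first educator turn is an initiation, later educator turns are feedback, every student turn is a response) are illustrative assumptions for this sketch, not part of Mortimer and Scott's framework:

```python
def label_irf(turns):
    """Assign I-R-F labels to a simple educator-student exchange.

    turns: list of speaker codes, "E" (educator) or "S" (student).
    Naive rule for illustration only: the first educator turn is an
    initiation (I), every later educator turn is feedback (F), and
    every student turn is a response (R).
    """
    labels = []
    seen_initiation = False
    for speaker in turns:
        if speaker == "E":
            labels.append("I" if not seen_initiation else "F")
            seen_initiation = True
        else:
            labels.append("R")
    return labels

# An I-R-F-R-F chain, as in the loops described by Mortimer and Scott.
print(label_irf(["E", "S", "E", "S", "E"]))  # ['I', 'R', 'F', 'R', 'F']
```

A real analysis would of course need the utterance content, not just the speaker, to distinguish an initiation from feedback; the sketch only illustrates the shape of the chained loop.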
The aim of this exploratory paper was to examine quantitatively the conversations during two interprofessional simulated case scenarios in a cohort of 30 students. Dietetics and speech-language therapy students participated in small group debriefing sessions at intervals throughout immersive simulated scenarios. We tallied the speech acts of the clinical educators during the in-simulation phases in order to explore what happens during this learning activity. The wider research programme intends to eventually answer the question: how do clinical educators, involved in interprofessional simulation, optimally facilitate students to develop their clinical reasoning and other skills expected of interprofessional collaborative practitioners? We trialled a method of providing a detailed quantitative description of educator–student interactions in interprofessional simulated healthcare scenarios and explored the ways in which they may contribute to learning. Additionally, we deconstructed types of speech acts within simulation learning conversations to explore further the possibilities of this type of analysis.
This prospective observational study received appropriate national ethics approval (UAHPEC 012794). All students provided written consent for use of their recorded scenarios.
Thirty students (10 dietetics (DT); 20 speech and language therapy (SLT)), predominantly female (87%), participated in the study. All students were in the first year of their 2-year master’s qualifying programmes. No student had been on a hospital clinical placement. Seven female faculty members (aged 34–44 years) participated, three SLT and four DT.
Simulation-based learning environment
As part of the students’ overall clinical programme, they completed a series of interprofessional sessions involving case-based learning, part-task procedural skill training and finally, simulated clinical scenarios. In this last session, students worked through two simulated hospital ward-based scenarios with low-fidelity manikins, high-fidelity manikins and confederates (trained educators who perform the role of hospital staff or family members) (table 1). Part-task training and scenarios were designed to provide learning opportunities that aligned with the IP framework devised previously by the research team.19 Students were divided into three interprofessional groups. Two experienced educators led one scenario each on three consecutive occasions on the same day with the three student groups rotating through each scenario. Five additional faculty members were involved as educators and confederates. The two educators (one SLT, one DT) selected to facilitate the simulation session had been teaching dysphagia (swallowing disorders) assessment and management in an interprofessional context for over 3 years. All educators had participated in a briefing on educational principles of simulation-based learning, including how to brief and debrief learners throughout the simulation (table 1).
An in-simulation ‘pause and discuss’ approach to debriefing was used where the educator or student could pause the scenario at any time in order to discuss clinical decisions and receive feedback from peers and educator.1 20 Scenarios were split into phases at natural pauses in each assessment (table 1). The scenario was ‘paused’ by the educator at the end of each phase. This allowed all students to rotate from observer role to active role in the simulation room and allowed time for reflective discussion. Each scenario was allocated 60 min, before the students rotated to the alternative scenario. Each educator audio recorded the discussions during the ‘pauses’ in the simulated scenario resulting in a total of 5 hours of recorded material for analysis.
Transcription and coding
Two independent research assistants (ABy and YC) transcribed the audio recordings of each ‘pause’ section of the scenarios verbatim. Utterances were coded during transcription as either educator (E) or student (S) with individual student speakers not identified. Transcription began from the first recorded words of each ‘pause’ section and stopped when the educator indicated that the students should return to role in the scenario. Occasionally, students were talking to each other in smaller groups without the educator, or multiple discussions were happening and accurate transcription was not possible. These sections were transcribed as timed student conversation (s). The educators checked the transcripts for accuracy. The research team read the transcribed conversations between 2 and 6 times each to ensure correctness and understanding.
For the purpose of analysis, the data were quantified. All utterances were imported into a spreadsheet and then coded for speech acts, question types and interprofessional education (IPE) content. Speech act analysis has a long history in the field of linguistics. Each utterance is considered as an action, particularly with regard to its intention, purpose or effect. Fey devised his interpretation of speech acts for assessing conversational assertiveness and responsiveness in young children,14 but it is equally relevant in a tertiary education, adult-learning context. Codes are detailed in table 2. Each utterance was given its own code, even where requests and responses were spread over more than one utterance. If the intention of an utterance was not clear in the transcript, the audio recording was referred to for more information. Different question types used by educators were coded according to The Question Category System for Science21 (table 3). Utterances were also coded for interprofessional content (clinical reasoning, roles and responsibilities, client and family centred care, interprofessional collaboration, clinical procedural tasks) (table 4).19 Only the educators’ utterances were coded for question type, in view of the research question’s focus on educator behaviour. All transcripts were double coded independently by AM, and ABs or BJ, and any differences in opinion were discussed before a final agreement on coding was reached.
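The tallying step can be sketched minimally as follows. The speaker labels (E = educator, S = student) and speech-act codes such as RQAC and RSIN come from the coding scheme described above, but the example rows are invented for illustration:

```python
from collections import Counter

# Hypothetical coded transcript rows: (speaker, speech-act code).
# E = educator, S = student; codes follow Fey's scheme as used in the
# paper (e.g. RQAC = request for action, RQAT = request for attention,
# RSIN = response to a request for information, Perf = performative).
utterances = [
    ("E", "RQAC"),
    ("E", "RQAT"),
    ("S", "RSIN"),
    ("S", "RSIN"),
    ("E", "RQAC"),
    ("S", "Perf"),
]

def tally_speech_acts(rows):
    """Count each speech-act code separately for educators and students."""
    counts = {"E": Counter(), "S": Counter()}
    for speaker, code in rows:
        counts[speaker][code] += 1
    return counts

counts = tally_speech_acts(utterances)
print(counts["E"]["RQAC"])  # 2
```

In practice the same per-speaker tallies would simply be computed over the full spreadsheet of coded utterances rather than a hand-typed list.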
Statistical analyses were completed using SPSS V.22. Patterns within the data were established through descriptive statistics and graphing. Differences in frequency of speech acts between students and educators, and differences in frequency of question types between scenarios and within the three repetitions of the same scenarios, were calculated using one-way repeated-measures analysis of variance (ANOVA). A p<0.05 was considered statistically significant for all analyses. Illustrative examples of conversations are used to demonstrate speech acts and question types, as well as how interprofessional learning opportunities arose.
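The Fr statistics reported in the results are consistent with a rank-based comparison of related samples. As a dependency-free illustration (not the authors' SPSS procedure), a Friedman chi-square statistic can be computed over invented frequency counts; all numbers below are made up and ties are ignored for simplicity:

```python
def friedman_statistic(groups):
    """Friedman chi-square statistic for k related samples (ties ignored).

    groups: list of k lists, each of length n, one measurement per block
    (here, hypothetical question-type counts per scenario phase).
    """
    k = len(groups)
    n = len(groups[0])
    rank_sums = [0.0] * k
    for block in range(n):
        # Rank the k groups within this block (1 = smallest value).
        ordered = sorted(range(k), key=lambda j: groups[j][block])
        for rank, j in enumerate(ordered, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Invented counts for three student groups across four phases: near-identical
# rank patterns give a statistic of zero (no detectable group effect).
print(friedman_statistic([[12, 9, 15, 7], [11, 10, 14, 8], [13, 8, 16, 6]]))  # 0.0
```

The statistic is then compared against a chi-square distribution with k−1 degrees of freedom; a statistical package (e.g. `scipy.stats.friedmanchisquare`) would normally handle ranking with ties and the p-value in one call.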
Frequencies of speech acts for educators and students are displayed in table 5, with illustrative examples of each act displayed in table 2. Educators and students differed significantly in speech acts (assertive acts: Request for Action (RQAC) (Fr 54.52, p<0.001) and Request for Attention (RQAT) (Fr 54.52, p<0.05); as well as responsive acts: Performative acts, for example, reading aloud (Perf) (Fr 8.00, p<0.05), Appropriate Response to Request for Information (RSIN) (Fr 22.22, p<0.001), Appropriate Response to a Statement (RSAS) (Fr 5.71, p<0.05) and Imitation (IMI) (Fr 5.71, p<0.05)). Educators used more requests for action and attention, and students used more performative and responsive acts.
Figure 1 shows the frequency of educator question types, with illustrative examples displayed in table 3. Both educators used a slightly higher proportion of open versus closed questions, and higher-level problem-solving questions predominated in comparison to simple factual questioning. Affective dialogues started by the educator were infrequent; all examples occurred immediately after a pause in the scenario and were directed at the students who had been active in the scenario (table 6). An affective question did not always generate an affective response, as seen in table 6, child 2, phase IV. Likewise, students spontaneously commented on their feelings without being directly asked on two occasions. There were no significant differences in frequency of any individual speech act or question type between the three student groups with the same educator/scenario (p>0.05) (figure 1). There was only one significant difference between the two educators: the overall balance of assertive to responsive acts, with educator 1 producing 249 assertive and 91 responsive acts compared with 157 and 44 for educator 2 (t=2.84, p<0.05).
Table 4 displays descriptions of IPE opportunities that were identifiable from the conversations. This includes an example of an educator-led IRF dialogue used to determine the role of the SLT and DT in child scenario 1 and a conversation between two students to establish the next procedural steps in adult scenario 3.
This novel exploratory paper examined ‘pause and reflect’ conversations during interprofessional simulated case scenarios using several methods for analysing communication. Conversations between educators and students enabled rich IPE opportunities and higher-level problem-solving. Educators encouraged students to discuss, question and problem solve among themselves with open questions and few factual responses. Educators regularly diverted questions back to the students and encouraged the two disciplines to learn from each other. Despite the structured nature of the educational setting, interprofessional student conversations without educator contributions were common.
Educator conversation style
Not surprisingly for the setting, educators made more requests and students provided more responses. Where students asked factual questions of educators, educators frequently redirected the question back to the students to answer. This ‘reflective toss’ has been described as a successful strategy for moving responsibility back to students to encourage collaboration or independent decision-making.22 The strategy develops clinical reasoning by stimulating students’ ability to reflect-in-action for future action.15 23
Mortimer and Scott’s IRF structure was prevalent throughout the conversations in this setting.17 It has been argued that IRF patterns support the co-construction of cultural knowledge because the educator has the ultimate responsibility to repair any misunderstandings.24 This strategy supports the development of the students’ clinical reasoning and decision-making in line with the expectations and frameworks of the educator. Interestingly, students did not ask a large number of questions, and educators often read aloud the written information (performative) rather than giving this task to the students. It could perhaps be concluded that educators took an assertive role with these novice students during these time-pressured education sessions. However, despite the general prevalence of this IRF pattern, students’ assertive speech acts were more frequent than their responsive acts, highlighting that the students were willing to offer ideas and ask questions beyond what would be expected in simple response to educator questions. This observation aligns with a student-centric model of learning, in which students are encouraged to question and comment rather than only respond to the questions asked by the educators.13 Questions were predominantly open and focused on higher-level problem-solving rather than solely fact finding, allowing students to reflect on interprofessional practice and clinical decision-making rather than demonstrate theoretical knowledge. Educators should take all opportunities to encourage student participation and control. Ensuring that all reading aloud is conducted by students, to students, is an easy way to ensure students talk more than their educators.
Between scenario/educator differences
The frequency of individual speech acts and questions was not significantly different between student groups, educators/scenarios or across scenarios. The only difference found was in the overall number of assertive acts. Educator 1, facilitating the child scenario, provided ~100 more assertive acts than educator 2, facilitating the adult scenario. There is no way to conclude whether this represents a difference in educator style or relates to differences in the complexity of the scenarios. The transcripts, however, suggest different levels of student knowledge between scenarios, with a need for the educator to provide more factual information during the child scenario. The educator in the adult scenario provided less information, as the students were more able to respond adequately to the educator’s reflective toss. An illustrative example was the educator’s direction of dietetic students to describe to the SLT students the clinical decision-making for designing a feeding plan, and of the SLT students to explain their reasoning behind the need for nil-by-mouth in the clinical context. This may reflect the theoretical content of the students’ clinical programmes prior to the interprofessional series. This may be a useful observation for educators. It suggests that educators need to consider the impact of scenario complexity when designing simulation education, with more complex or unfamiliar content leading to greater reliance on educator knowledge-sharing and directive facilitation, and more basic or familiar content leading to more independent interprofessional learning and a coaching approach.10 25
Reflection within simulation
Reflection is a cornerstone feature of learning and has been described as a critical element of any simulation activity.5 7 26–28 Educators in these within-simulation debriefs were directive in their questioning, particularly seeking decisions in relation to planning what to do next, rather than asking rhetorical, affective or philosophical questions. The conversations focused heavily on preparing for the next phase of the time-limited simulated scenario. It could be assumed that this type of debrief would be representative of a reflection-for-action model.29 The predominance of requests for information and responses to requests, as well as higher-level problem-solving questions, supports this. This fast-paced within-simulation reflection time would perhaps not be expected to offer the time or emotional space required to stimulate more affective responses. Interestingly, educators clearly did not see this as a primary focus of the conversations, with minimal affective questions from educators in any of the six scenarios. Despite this, students did offer affective comments immediately on exiting their immersive roles, suggesting that they felt comfortable discussing their feelings irrespective of the lack of requests for them from educators.
Postsimulation reflection in a safe environment allows students to express their reactions to the immersive experience more freely. This is known to enhance student learning and overall clinical skill development.8 In an analysis of 24 videoed debrief sessions from resuscitation simulations with nurses, researchers found students did not progress in their critical thinking to the final stage of reflection, that is, ‘What will I do differently next time?’, without the right questions from educators.2 The authors recommended that simulation educators would benefit from training in reflection in order to support students to achieve a complete learning experience. With this in mind, data in the current study suggest that both within-simulation reflection-for-action and postsimulation reflection-for-learning are needed to maximise student learning. Simulated education should be designed to give time for postsimulation debriefing in addition to the pause and reflect model within simulation.
In this preliminary study, it was not our intention to evaluate how ‘good’ conversations were, for example, whether a specific educator speech act or question led to better student reasoning. The analysis of the interaction between educator question type and student response would be an interesting and useful next step in the research programme. It would be interesting to further consider the types of questions asked by students and how the environment (physical, structural and linguistic) encouraged assertive communication from the students.
There are many ways to analyse conversations and language use in a classroom. We have focused our study on specific details of spoken language, with an emphasis on educator questions. In doing so, we have, to an extent, simplified the relationship between the context, the whole discourse and parts of the discourse. Non-verbal communication, such as gestures and facial expressions, was not included in our analysis. Unspoken clinical reasoning and IPE opportunities cannot be quantified.
The educator held the microphone, and occasionally students were talking to each other in smaller groups without the educator, or multiple discussions were happening at once, making accurate transcription impossible. The actual number of speech acts by the students is therefore greater than that recorded in the transcripts, and counts of speech acts are provided only as a guide for further benchmarking.
Originally, the intention was to code speech acts against established principles of IPE (clinical reasoning, roles and responsibilities, client and family centred care, interprofessional collaboration, clinical procedural tasks) in order to quantify the interprofessional experience of students.19 However, during analysis this proved difficult, with many conversations considered by the authors to cover more than one IPE principle, and with difficulty determining where conversations started and ended with regard to each principle. The authors therefore chose to provide illustrative examples of the rich IPE opportunities gained during the simulations rather than quantifying or tallying them. Further research on the IPE opportunities during simulation reflections is needed.
Conversations during pauses in immersive simulated scenarios between educators and students enable rich IPE opportunities and higher-level problem-solving. Educators encouraged students to problem solve among themselves with open questions. Educators rarely offered new knowledge themselves and regularly diverted questions back to the students, encouraging them to draw on their collective knowledge. This analytic approach offers educators a method of recognising and developing language and communication skills among their faculty, with the intention of enhancing the learning outcomes of simulation-based learning. These data can support educators designing interprofessional simulations in the future.
We acknowledge You Chunzi for her contribution to the transcription coding and analysis of this work. We would like to acknowledge the students and educators who participated in this training as well as the staff at The University of Auckland Clinical Skills Centre for their support.
Contributors BNJ, PF, AM, JS and AB planned, conducted and reported the work. AB also contributed substantially to the analysis and reporting of the work. All these contributors gave their final approval of the version to be published and agree to be accountable for the accuracy and integrity of the work. AB and You Chunzi transcribed data, coded and analysed data.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.