
Certification, accreditation and professional standards: striving to define competency, a response to ASPiH Standards for Simulation-Based Education: Process of Consultation, Design and Implementation
Carrie A Bohnert1,2, Karen L Lewis2,3

1 Office of Undergraduate Medical Education, Standardized Patient Program, University of Louisville School of Medicine, Louisville, Kentucky, USA
2 Standards of Practice Committee, Association of Standardized Patient Educators, Altamonte Springs, Florida, USA
3 Clinical Learning and Simulation Skills Center, George Washington University School of Medicine and Health Sciences, Washington, District of Columbia, USA

Correspondence to Carrie A Bohnert, Office of Undergraduate Medical Education, Standardized Patient Program, University of Louisville School of Medicine, Louisville, KY 40202, USA; carrie.bohnert@louisville.edu


Introduction

For the past decade, competency has been a consistent theme in the medical education literature. Authors have pondered its definition, questioned how it is best assessed and wrestled with how to certify it.1 Barriers to implementing competency-based education and assessment are well documented.2–9 Through their various lenses, these authors have grappled with the complexity of codifying and assessing competent practice.

Within the healthcare simulation community, similar work has emerged. Many of the world’s largest and most prominent simulation societies and associations have spent much of the past decade codifying competent practice in their own field and determining how to assess it. Associations have crafted professional standards, certification examinations and accreditation processes after careful and deliberate codification of competent practice in the simulation of patient care. The simultaneous rise of these efforts is fortuitous, as simulation can provide reliable, consistent, valid and predictable means of assessing competent practice.

Developing competency-based education and assessment programmes and defining professional standards share several practices: identifying desirable outcomes, analysing the observable behaviours and attributes that lead to those outcomes, and translating those ideas into clear and concise statements. At the heart of each of these initiatives lie our concern for patient safety, our desire for efficiency and our respect for evidence-based teaching and assessment practices.

Standards

In 2017, the Association for Simulated Practice in Healthcare (ASPiH) published Simulation-Based Education in Healthcare: Standards Framework and Guidance.10 With this publication, they join a list of eight other professional societies and networks identified by Nestel and colleagues as engaged in professionalising simulation practice11:

  • ASPIRE at the Association for Medical Education in Europe.12

  • Society in Europe for Simulation Applied to Medicine.13

  • Royal College of Physicians and Surgeons of Canada.14

  • American College of Surgeons.15

  • International Pediatric Simulation Society.16

  • International Nursing Association for Clinical Simulation and Learning (INACSL).17 18

  • Society for Simulation in Healthcare (SSH).19 20

  • Association of Standardized Patient Educators (ASPE).21 22

These groups are characterised by clinical orientation, professional discipline or simulation modality. Their intended audiences cluster into two groups: simulation facilities and simulation practitioners. ASPiH identifies its audience as ‘healthcare professionals involved in SBE’ (simulation-based education), and the standards are designed to help them provide quality assurance and improve the delivery of SBE.10 This differs from the standards of some other groups, which speak to the practices of a facility or administrator rather than a healthcare practitioner.

Methodologies

Following the release of ASPiH’s Standards Framework and Guidance, Makani Purva and Jane Nicklin published ASPiH Standards for Simulation-Based Education: Process of Consultation, Design and Implementation in this journal.23 In this article, they describe the process of developing the standards over a 2-year period. The transparency afforded by this article allows other organisations to study and learn from ASPiH’s experience of developing standards, which may result in improved methodology throughout the field of SBE.

These organisations used a wide range of methodologies to create their standards. ASPE, for instance, used a modified Delphi process, SSH relied on a practice analysis and INACSL focused heavily on evidence-based literature.18–20 22 ASPiH chose to follow an implementation science framework consisting of ‘exploration and adoption, program installation, initial implementation, full operation, innovation and sustainability.’23 The early stages of ASPiH’s process are similar to those of other organisations; their latter stages, however, set them apart. ASPiH recognised the need for political and financial support and developed relationships with stakeholders in education such as Health Education England (HEE). Partnering with HEE and others enabled ASPiH to pilot the standards at universities, colleges, National Health Service trusts and centres, as well as to conduct online surveys and engage with practitioners over the phone and at meetings, forums and exhibitions. By following an implementation science framework, ASPiH tested the usability of the standards before publishing them; other organisations tested their standards only after publication.

Specificity

In addition to using differing methodologies, organisations show differing levels of specificity within their standards. A standard that is overly broad may not provide enough guidance for users, while one that is overly specific may fall outside the range of practical application. A comparison of three societies illustrates these varying degrees of specificity (table 1).

Table 1: Standards on psychological safety

Each organisation addresses the same concept: psychological safety. The specificity, however, ranges from broad to granular. ASPiH’s broad approach states only that the learner’s psychological safety must be considered and supported. ASPE urges simulationists to develop their own policies and procedures for safety without prescribing the content of those policies and procedures. INACSL goes further, delineating some of those policies and procedures.

Purva and Nicklin acknowledge this dilemma. They report that ‘the overwhelming recurrent feedback [on early iterations] was to convert the document to a shorter, easier to read and less repetitive document that was more inclusive of the wider simulation community.’23 As a result, some standards were removed, reformatted or included as guidance within a larger standard.

ASPiH’s contribution

ASPiH addresses a dilemma common to each society: the conflict between what is considered best practice and what is supported by the literature. Validating best practice in simulation can be burdensome. Studies of effective simulation methodology are often overlooked in favour of studies of effective simulation content, resulting in a gap in the medical education literature. Not all simulation practices reach the level of relevance deemed necessary for rigorous study. As a result, educators often do what works anecdotally. By using a rigorous process to construct its standards, ASPiH has identified best practices despite this gap.

The use of an implementation science framework adds a dimension of rigour that may be instructive to societies and networks developing professional standards. ASPiH’s thorough collection of responses to its standards is a benchmark for other organisations to follow, as is its use of a matrix for managing the data obtained through those responses. Cross-referencing the importance responders ascribed to each statement with the medical literature resulted in the paring down of 71 original statements to a final 21. The result is ‘a framework of standards that are evidenced based’ and have ‘passed the test of utility and relevance for use by the simulation community.’23

ASPiH’s transparency may also be instructive for other organisations. Housing its surveys, participant lists, pilot sites and earlier drafts on the association’s website allows simulation educators to extend ASPiH’s standards-development approach to other content, practices and associations.

Conclusion

Professional competency in healthcare is rising in priority, driven in part by accrediting agencies. Because competency comprises knowledge, skills and attitudes, simulation is an appropriate method of assessment. Achieving reliable assessment in simulation requires evidence-based, useful and relevant practices. Therefore, the analysis of simulation practice and the development of standards are critical.

The validation process used by ASPiH, in particular, could be of use to the healthcare education community as the use of simulation in assessment moves from certifying educational attainment to certifying readiness for professional practice.

References


Footnotes

  • CAB and KLL contributed equally.

  • Contributors The comparison of competency-based medical education with professional standards of practice among simulation societies was CAB’s idea. KLL contributed a history of professional standards. The two authors created an outline for this response together. CAB wrote the introduction and conclusion. The two authors edited each other’s work.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
