A guide to simulation scenario design for clinicians involved in simulation-based education.
The ultimate goal of simulation is to create an authentic learning experience (P. Dieckmann et al., 2007) and this starts with scenario design. However, scenario design is an under-appreciated area in the field of health simulation. A scenario is an event to stimulate discussion (the debrief), generate research data, or provide an opportunity for assessment. Robust design of scenarios is crucial to ensure objectives are reached in a way that is meaningful and valid to the participants, instructors and researchers. Scenarios have been termed ‘Theatre with a purpose’ (P. Dieckmann, Gaba, & Rall, 2007) or ‘Trojan horses’ to stimulate discussion in health education, and as such are simply a means to an end. When developing an effective scenario, the purpose for running the scenario is the starting point. In other words, rather than the initial discussion being “Let’s design a scenario about disease X”, a more structured process begins with a rationale and outcome measure for a scenario’s success.
The simulation episode is the focus of scenario design, yet this phase is often brief, perhaps only ten minutes in duration. More time is spent on the debrief, when the ‘Trojan horse’ is revealed for what it is and reflective learning begins. However, for there to be a story to discuss, the story must first be told. Running the simulation tells the story.
In this article we will investigate how to define the purpose of a scenario and how to design a scenario to address that purpose. In most instances we will be addressing simulation as a teaching tool, with an emphasis on mannequin-based simulation, which is the area of the authors’ expertise and arguably the most common form of simulation in health. Nevertheless, these basic principles can be applied to other methods of simulation, such as simulated patients (Nestel, Fleishman, & Bearman, 2015) or virtual reality, and to other purposes of simulation, such as research and translational simulation (Brazil, 2017).
A standardised template is useful for scenario design. An example is that used by The Alfred Hospital Intensive Care Unit, available from this URL: http://intensiveblog.com/resources/icu-sim/. Templates act as a checklist to ensure all factors affecting simulation delivery are addressed, including the setup requirements.
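The checklist role of a template can be made concrete in code. Below is a minimal, hypothetical sketch in Python; the field names are illustrative inventions and are not taken from the Alfred ICU template.

```python
from dataclasses import dataclass, field

# Hypothetical scenario template: field names are illustrative only,
# not taken from the Alfred ICU template referenced above.
@dataclass
class ScenarioTemplate:
    title: str = ""
    purpose: str = ""                      # the rationale for running the scenario at all
    learning_objectives: list = field(default_factory=list)
    modality: str = ""                     # e.g. "manikin", "simulated patient"
    setup_requirements: list = field(default_factory=list)
    confederate_brief: str = ""
    technician_run_sheet: str = ""
    debrief_guide: str = ""

    def missing_fields(self):
        """Checklist behaviour: report any section left blank."""
        gaps = []
        for name, value in vars(self).items():
            if value in ("", []):
                gaps.append(name)
        return gaps

draft = ScenarioTemplate(title="Anaphylaxis under anaesthesia",
                         purpose="Team management of intraoperative anaphylaxis")
print(draft.missing_fields())
```

The point of the sketch is simply that a template, like a checklist, makes omissions visible before the scenario is delivered.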
The purpose of simulation is often evident. Commonly it is to address knowledge or skills gaps that have been identified in clinical practice, or to provide a practical element to an existing curriculum. These knowledge or skills gaps can be identified by assessing learner needs (see below) or from clinical events or incident reports. Occasionally there is a change in practice or process that needs to be communicated or investigated before it is accepted into the work environment (Brazil, 2017).
An important educational question to consider is the balance between the technical and non-technical skills to be addressed. Technical skills are often initially addressed outside of a realistic, high-immersion simulation by use of procedural trainers or by pre-reading and discussion of factual information. This allows learners to implement these skills, with progression to more complex and lifelike simulated situations, and ultimately to supported clinical environments. Non-technical skills, such as teamwork and communication, are increasingly recognised as important in clinical care.
Key Design Decisions
Once the learning objectives are identified, it is important to ask: ‘Is simulation the best way of achieving these objectives?’ For example, a mannequin-based simulation scenario is not the best way of teaching the details of renal physiology. In terms of learning outcomes for a simulation teaching session, simulation suits the higher levels of Bloom’s taxonomy, particularly those associated with application of knowledge and skills in the clinical environment (Bloom, 1956).
Key design decisions must be made once it is decided that a simulation-based learning experience is the best way to meet the chosen learning objectives. These decisions include choosing the simulation modality, the level of realism or authenticity required, and the feasibility.
There are always constraints in scenario design related to the equipment, the environment, and even the time required to design and run the scenario. For example, it may be difficult to create an in-situ scenario that uses key equipment needed for patient care while the scenario is being run, or to run a scenario that realistically evolves slowly in real time over a prolonged period, given the time constraints of the clinical area and staff availability.
Numerous other factors affect how a simulation is run and need to be considered during scenario design. These include the needs and background of the learners, the number of faculty available and their expertise, the complexity of the equipment used, the space available, and the preferred approach to debriefing. For example, at one extreme, a single faculty member with a hand-held monitor control device (e.g. iSimulate, SimMon) may stand in the room with participants and run a “pause-and-discuss” with novice learners. At the other extreme, advanced learners involved in team training may be engaged in a highly immersive, uninterrupted simulation aided by a confederate, while a technician and two debriefers in another room observe everything that happens using video cameras and a one-way mirror prior to a debrief performed after the simulation ends. The availability of resources and the feasibility of the scenario design should be checked with the intended faculty, simulation technicians and coordinators early in the design process.
Inspiration: Real clinical cases
Simulation scenarios can be created from real clinical cases. Advantages of real cases are relevance, motivation, peer review and teaching, and the availability of realistic props. Disadvantages are the risks to participants, potential breaches of patient or staff confidentiality, and lack of correlation with well-defined objectives, that is, the simulation is based on the opportunity provided by the clinical case rather than a fit with a planned objective or curriculum.
Case 1: Sam, an anaesthetist, was called to assist a senior trainee shortly after induction of anaesthesia for a patient about to undergo emergency craniotomy. The patient was hypotensive and anaphylaxis was quickly diagnosed. A cognitive aid was used to institute treatment and allow surgery to continue. Communication between team members was clear and effective, and a post-incident debrief revealed that the team were comfortable with the management of the case. The learning points from the case inspired a simulation session on deterioration of a patient under general anaesthesia the following week.
Case 1 has clear overlap with the objectives of the future, planned session on deterioration under anaesthesia. It has a mix of technical and non-technical factors around the theme and is likely to be psychologically safe, but it would be wise to check with the staff involved. Patient confidentiality is a potential issue, but patient demographics can be easily changed and patient data used as props (such as an electrocardiogram or radiological imaging) can be de-identified. Consideration needs to be given to how cues of anaphylaxis can be represented and how strongly the diagnosis or the required treatment is signalled to participants.
Case 2: In a case similar to Case 1, the diagnosis of anaphylaxis was delayed and adrenaline was withheld because of concerns about raised intracranial pressure. The patient had an adverse outcome from prolonged hypotension, and the clinicians were criticised by colleagues on the day and at a subsequent morbidity and mortality review. The trainee involved was distressed by the incident and felt he was to blame. Both the anaesthetist involved and the unit head requested that the simulation centre run the case to show others how it “should have been managed”.
Case 2 involves a recent, emotionally charged event in which harm was caused to the patient. The clinicians involved are ‘second victims’, and the incident may be recognised by team members present in the simulation. Themes from this event may be used, with or without reference to the original case (e.g. undifferentiated hypotension), and the suitability of the case will partly depend on the degree of trust between simulation staff and participants. It would be unwise to use this case in its present form.
Design considerations for time shifting
The flow of time in simulation scenarios can be deliberately manipulated in different ways. “Pause and discuss” debriefing during simulation delivery (rapid summary, reframe, redirect) is often used with relatively junior teams or for those facing sim challenges beyond their comfort zone. It allows redirection and sets up for success. “Pause and rewind” allows the learners to go back and “fix” mistakes. Gamification with level progression (“live, die, repeat”) has potential for learners to face a challenge with no satisfactory end solution but is higher risk and needs experienced facilitation (Sunga, Sandefur, Asirvatham, & Cabrera, 2016). In general, we should aim for a scenario that feels intrinsically fair and could be realistically encountered by the participant group in their clinical environment.
Inter-professional and inter-specialty simulation
These should be co-written, or at least vetted, by members of the different “tribes” (i.e. relevant sub-specialty and inter-disciplinary groups). This needs to include input and agreement on objectives. Avoid the trap of participant nurses being used as “props” for medical participants, and of the viewpoint of one tribe predominating. Avoid clichés, especially of poorly performing staff or professionalism issues belonging to a particular tribe. Write these roles for confederates rather than participants, and de-role explicitly during the debrief.
Learner Needs and Background
The needs of the learners, and their backgrounds, must be at the forefront of the mind of the scenario designer. Learner needs and abilities can usually be determined to some extent by observing performance in previous clinical and simulation-based interactions, an awareness of their level of training, and knowledge of their curriculum. ‘Learning needs analysis’ questionnaires can be helpful, but do not identify the learner’s ‘unknown unknown’ needs. Similarly, formal examinations (‘pre-tests’) still have limitations in the scope and depth of knowledge they assess. Both strategies demand greater logistical and time commitments from staff and learners.
The knowledge, skills, and attitudes of the learners can never be completely known to the scenario designer, and will vary between learners. This increases the challenge for simulationists of how to extend the learners just beyond what they were originally capable of doing themselves. According to Vygotsky, attaining this ‘Zone of Proximal Development’ is crucial to learning (Vygotsky, 1978). Similarly, Ericsson has emphasised that learners need to perform ‘outside of their comfort zones’ to progress toward expertise through deliberate practice (Ericsson, 2008).
One method of meeting this challenge is to create scenarios that can be ‘ramped’ up or down in terms of level of difficulty. Within the scenario design there can be different options that are triggered according to the learners’ progress. Progress can be judged by the completion of tasks, the time taken, and whether errors are made. Delivery of the simulation then becomes a game of ‘snakes and ladders’, with different challenges and supports provided to maintain balance and optimise the learning experience. An important principle is to vary the “signal-to-noise” ratio according to the learners’ level of performance. For instance, novices learning to recognise a tension pneumothorax will need obvious clinical signs with no distractors, whereas more proficient learners will be expected to diagnose more subtle presentations even in the midst of distractors and competing priorities (Dreyfus & Dreyfus, 1980). The scenario’s level of difficulty can be titrated by adjusting complexity (e.g. additional problems or patients), the standard of performance required, the circumstances (e.g. rare presentations or difficult settings such as entrapment in a bathroom), the degree of time pressure (e.g. rate of deterioration), and by adding distractors (e.g. environmental noise, or actors who divert the learners from their tasks) or stressors such as interpersonal conflict.
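The signal-to-noise titration described above can be sketched as a simple mapping from learner level to cue obviousness and distractor load. The numeric scale below is an invented assumption for illustration, not a validated measure of difficulty.

```python
# Illustrative 'ramping' sketch: the mapping from learner level to
# signal strength and distractor count is an assumption, not a validated scale.
def titrate_scenario(level):
    """Return (signal_strength, distractor_count) for a learner level of 1-5,
    loosely following the novice-to-expert progression: cues become subtler
    and distractors multiply as learners become more proficient."""
    level = max(1, min(5, level))                # clamp to the assumed 1-5 range
    signal_strength = 1.0 - 0.15 * (level - 1)   # novices get the most obvious cues
    distractor_count = level - 1                 # experts get the most 'noise'
    return signal_strength, distractor_count

print(titrate_scenario(1))  # novice: full signal, no distractors
print(titrate_scenario(5))  # proficient: subtle cues, several distractors
```

In practice this titration is a judgement made live by the delivery team, but making the intended ramp explicit in the scenario document helps different facilitators apply it consistently.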
Anticipating likely knowledge gaps allows the scenario designer to incorporate supports to help the learners meet the challenges provided. Most useful is the integration of a confederate into the simulation (see below). The scenario can also be designed to include resources for learner ‘pre-reading’ in preparation for the simulation episode, and additional resources for further learning and reflection after the debrief (‘post-reading’).
Resources: Staff, Simulators, Equipment, and the Environment
To create an authentic learning experience, simulationists must strive to help participants maintain a state of engagement using the available resources — staff, simulators, equipment, and the environment. These resources must be considered during scenario design.
Multiple staff roles are required to run a simulation and they must be considered during scenario design. The simulation technician operates the simulator and ensures that it responds to participant actions appropriately. The debriefer carefully observes the scenario to identify noteworthy events and behaviours and facilitates their discussion during the debrief. The confederate is an embedded faculty member who works with the participants, yet is aware of the scenario design and through team interactions can provide prompts, confirm findings (especially those that are difficult to simulate, such as a seizure), and help redirect the team if they do the unexpected (Nestel, Mobley, Hunt, & Eppich, 2014). Other simulated patients, confederates, and/or actors may also be required depending on the complexity of the scenario. It is possible to run simulations with a single faculty member performing all of these roles. However, having a separate confederate is much preferred as it allows the debriefer/technician to be completely external to the simulation. Separate, specific information can be provided to brief different faculty roles in different sections of the scenario design template (e.g. information to brief the confederate, a run sheet for the technician, and a debrief guide for the debriefer).
Surprisingly, the ‘realism’ of the simulator, and the equipment and props used (their ‘physical fidelity’), is usually less important than ensuring that, however basic they are, they are capable of performing the functions needed to meet the specific learning objectives (a concept termed ‘functional task alignment’) (Hamstra, Brydges, Hatala, Zendejas, & Cook, 2014). For instance, straws and a styrofoam cup may be used to teach endourological procedures just as well as an expensive simulator (Matsumoto, Hamstra, Radomski, & Cusimano, 2002). Nevertheless, a useful rule of thumb when simulating a clinical task is to aim for a high level of realism for the body part to be instrumented and the equipment to be used; less realism is needed to represent the wider environment, according to the participant’s ‘circle of focus’ (Kneebone, 2010). Similarly, if authentic patient interaction is required, then a scenario should be designed with a standardised patient in mind, or a ‘hybrid’ simulation (e.g. a patient wearing a limb prosthesis that allows safe IV cannulation), rather than a manikin. It also makes sense for learners to perform tasks with the same equipment that they will actually use in the workplace. An extension of this is to perform simulation in situ, so that participants learn to function in the most authentic workplace environment possible (Petrosoniak, Auerbach, Wong, & Hicks, 2017). However, in situ simulation poses additional challenges for scenario design, not least the need to be able to set up and take down the scenario quickly, the risk of contaminating the work environment with “fake” equipment and drugs, and the constraints of time, space and privacy for observation and debriefing (Raemer, 2014). Regardless of the equipment used, the faculty delivering the simulation must be completely at ease with its use and be able to trouble-shoot problems on the run.
Video can also be incorporated into the scenario design in different ways. For instance, playing a video of a patient in distress prior to interacting with a manikin can prime the participants and improve authenticity. Video can also be used by the staff for observation and debriefing. Although evidence confirming a benefit for video-assisted debriefing is lacking (Cheng et al., 2014), there are many practical advantages. These include allowing additional observers to watch the simulation without being in the room, being able to observe events that may be hard to see through a crowd (e.g. using an overhead camera), and showing participants exactly what happened as part of a debrief.
Case Vignette and Scenario Flow
Scenario design begins with a realistic case vignette. The details of the case are important, as they should facilitate the learning objectives, be able to be portrayed in simulation, and contribute to an authentic learning experience. Participants can be briefed with a case history during the simulation prebrief, otherwise they will need to gather this information as the scenario unfolds (e.g. during an unannounced scenario or “guerilla sim”).
The flow of the scenario stems from the case vignette and begins with initial parameters (e.g. pre-determined patient vital signs) that will evolve over time. The rate at which these parameters change should be as realistic as possible; however, there is license to alter the rate of patient deterioration and titrate the level of challenge according to learner expertise. Scenario design typically incorporates planned events and transitions in these parameters, together with realistic responses to anticipated interventions. This can be usefully represented as tables or flow diagrams in scenario templates. In this way an unfolding story is mapped out, and it is then up to the simulation delivery staff to follow it! Transitions and responses to specific actions can be pre-programmed with many types of simulator (e.g. apnoea following administration of a neuromuscular blocking drug). However, during simulation delivery, the staff need to remain flexible and ensure that simulator responses match participant actions even when the unanticipated occurs. For instance, if the participants administer rocuronium instead of suxamethonium, the preprogrammed response to neuromuscular blockade should not include fasciculations.
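The planned transitions and scripted responses described above behave like a simple state machine. The toy sketch below illustrates the idea, including the suxamethonium-versus-rocuronium example; the state names and vital signs are invented placeholders, not clinical guidance.

```python
# A toy state machine for scenario flow. States, vital signs and drug
# responses are illustrative placeholders only, not clinical guidance.
FLOW = {
    "induction": {"HR": 80, "BP": "120/80",
                  "on": {"suxamethonium": "paralysed_fasciculating",
                         "rocuronium": "paralysed"}},
    "paralysed_fasciculating": {"HR": 85, "BP": "118/78", "on": {}},
    "paralysed": {"HR": 82, "BP": "116/76", "on": {}},
}

def apply_intervention(state, intervention):
    """Follow a scripted transition if one exists; otherwise stay put,
    leaving the operator to improvise (the 'remain flexible' point above)."""
    return FLOW[state]["on"].get(intervention, state)

print(apply_intervention("induction", "rocuronium"))  # → paralysed
```

Mapping the flow this explicitly, whether as a diagram, a table, or a run sheet, is what lets the technician keep simulator responses matched to participant actions.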
“Scenario life savers” are a key component of successful scenario design. These are strategies for dealing with participant actions that are unanticipated and/or risk derailment of the scenario and failure to meet the learning objectives (P. Dieckmann, Lippert, Glavin, & Rall, 2010). Despite best laid plans and best intentions, participants can wander down blind alleys if they miss clinical cues, experience technical glitches, or behave unexpectedly because of, for example, simulation-induced hypervigilance (“simulitis”). By titrating ‘signal-to-noise’ in the scenario design and delivery (e.g. making clinical signs more obvious), participants can usually be corralled in the right direction. However, when such measures fail, scenario life savers are required. The best option is usually to use an embedded confederate to impart information that prompts or redirects the participants. Alternative options are to introduce other actors (e.g. the friendly anaesthetist who has come to assist) or to use a patient voice (either a standardised patient or a manikin with a speaker). Less ideal are directions from an instructor standing in the room or speaking over a loud speaker as “the Voice of God”. Such direct communication from staff external to the simulation milieu risks “de-immersing” the participants and a loss of authenticity.
Wherever possible, difficulties should be anticipated during scenario design and solutions provided. These should be part of the briefing for confederates as well as the technician’s ‘run sheet’. Iterative amendments can be made when new issues are discovered with subsequent use of the scenario.
Assessment of the participants’ performance can be formally structured as part of a formative or summative evaluation of the student, or it can be informal, feeding into the debrief process. In keeping with good assessment practice, the rubric must be clear and related to the learning outcomes. Increasingly, the rubric is made available to students before the simulation event, or to the observers watching their colleagues perform in the simulation. As with any summative evaluation, it is important to define the expected standard of performance using a standard-setting exercise. The design of simulation-based examinations is beyond the scope of this article.
Debriefing is arguably the most important phase of a simulation-based learning experience and should be carefully considered during scenario design. It is important to document a debriefing guide in the scenario plan. Brief notes about design objectives, suggested topics, and effective questions to ask during the debrief are useful. If the scenario is part of a course involving multiple scenarios, the debriefing guide can help align the topics covered in each scenario. If pause-and-discuss debriefing is to be used, then suggested scenario progression points for a pause should be indicated. However, flexibility and the ability to respond to the individual scenario outcomes and participants remain important. When applicable, scenario notes should also include the intent to use video recording and playback during debriefing.
A simulation scenario is rarely fit for purpose immediately after being written. The initial design should be considered merely a prototype to be iteratively improved. Review of the scenario by colleagues is essential to clarify the purpose and practical running of the event. Even more useful is piloting the scenario using clinician colleagues with a simulation education background. These trials give valuable feedback on plausibility, authenticity and alignment with the intended learning outcomes. It is common at this point to identify deviation of the participants’ actions away from the actions intended by the scenario designers. Even if educators can’t be found for test runs, merely walking through the scenario as a story with another clinician can provide crucial insight into its shortcomings.
Good scenarios commonly ‘drift’ into different interpretations over time. This is understandable if many educators are using the same scenario and placing a slightly different emphasis on the learning outcomes, or if clinical practice changes through new guidelines or technologies. For example, as educators, the authors have observed that in some areas the participants’ skill set has changed. Trainees of the same level who in earlier years were unable to perform an emergency cricothyroidotomy are now prepared to do so because of changes in the broader training curriculum. Correspondingly, there must be a shift in the simulation from an application of skills (positioning, preparation, equipment) to communication of the event and coordination of the team. This is just one example of appropriate scenario drift, but if such changes are not made explicit in the scenario documentation, a scenario can lose its connection with the intended purpose.
We recommend that every scenario is formally or informally evaluated at the end of each use: Did it meet the intended purpose? Did the participants require ‘life savers’ or additional challenges? Were there aspects of the authenticity that affected their engagement? Merely annotating with pen or pencil onto the printed, paper-based scenario is not sufficient as this generates multiple versions of the same scenario. Preferably, evaluations should be documented after every time the scenario is run. If this isn’t possible, a table of changes or ‘version control’ should be included along with the dates and reasons for any changes and old versions retained as an electronic archive.
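The ‘table of changes’ suggested above can be kept as a simple append-only log, so that old versions are retained rather than overwritten. A minimal sketch, assuming a hypothetical record structure:

```python
import datetime

# Minimal version log for a scenario; the record structure is an
# illustrative assumption, not a standard format.
def log_change(history, author, reason):
    """Append a dated entry; earlier entries are retained, never overwritten."""
    history.append({"version": len(history) + 1,
                    "date": datetime.date.today().isoformat(),
                    "author": author,
                    "reason": reason})
    return history

history = []
log_change(history, "SM", "Initial prototype after pilot walk-through")
log_change(history, "IS", "Added cricothyroidotomy equipment per new curriculum")
print(history[-1]["version"])  # → 2
```

The same discipline can of course be achieved with a table in the scenario document or a shared-drive version history; what matters is that the dates, reasons and authors of changes survive.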
Regardless of source, scenarios need to align with the objectives and the needs of the participants, and be feasible to run and debrief with the resources available, while ensuring appropriate challenge and psychological safety. A final test of fairness is worthwhile: the scenario should be one that could happen to the participants in their clinical environment, and the objectives and aims should carry over into learning relevant to that environment.
Scenario design is a frequently neglected area of training for simulation educators. It requires careful attention to detail, not least to match the purpose of the simulation and the intended learners. Even when the scenario has been created it needs re-evaluation and monitoring to ensure the content and debrief are aligned.
Using a standard template helps designers remember the key points and ensures sufficient information is provided for sharing within the institution and more widely.
Even with a standardised template, it is useful to perform a final check of the scenario, both throughout the design process and at its end:
- Is the scenario aligned with the objectives?
- Is it feasible?
- Is it appropriately challenging?
- Is it physically and psychologically safe?
- Wiesbauer, F. Teaching Masterclass: The Psychology of Learning. Medmastery.
- Bloom, B. (1956). Taxonomy of Educational Objectives, the classification of educational goals – Handbook I: Cognitive Domain. New York: McKay.
- Brazil, V. (2017). Translational simulation: not ‘where?’ but ‘why?’ A functional view of in situ simulation. Advances in Simulation, 2, 20.
- Cheng, A., Eppich, W. J., Grant, V., Sherbino, J., Zendejas, B., & Cook, D. A. (2014). Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Medical Education, 48(7), 657-666.
- Dieckmann, P., Gaba, D., & Rall, M. (2007). Deepening the theoretical foundations of patient simulation as social practice. Simulation in Healthcare, 2(3), 183-193.
- Dieckmann, P., Lippert, A., Glavin, R., & Rall, M. (2010). When things do not go as expected: scenario life savers. Simulation in Healthcare, 5(4), 219-225.
- Dreyfus, S. E., & Dreyfus, H. L. (1980). A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition. Retrieved from Washington, DC:
- Ericsson, K. A. (2008). Deliberate practice and acquisition of expert performance: A general overview. Academic Emergency Medicine, 15, 988-994.
- Hamstra, S. J., Brydges, R., Hatala, R., Zendejas, B., & Cook, D. A. (2014). Reconsidering fidelity in simulation-based training. Academic Medicine, 89(3), 387-392.
- Kneebone, R. (2010). Simulation, safety and surgery. Qual Saf Health Care, 19(Suppl 3), i47-52.
- Matsumoto, E. D., Hamstra, S. J., Radomski, S. B., & Cusimano, M. D. (2002). The effect of bench model fidelity on endourological skills: a randomized controlled study. J Urol., 167(3), 1243-1247.
- Nestel, D., Fleishman, C., & Bearman, M. (2015). Preparation: developing scenarios and training for role portrayal. In D. Nestel & M. Bearman (Eds.), Simulated Patient Methodology: Theory, Evidence and Practice. Wiley Blackwell.
- Nestel, D., Mobley, B. L., Hunt, E. A., & Eppich, W. J. (2014). Confederates in Health Care Simulations: Not as Simple as It Seems. Clinical Simulation in Nursing, 10(12), 611-616.
- Petrosoniak, A., Auerbach, M., Wong, A. H., & Hicks, C. M. (2017). In situ simulation in emergency medicine: Moving beyond the simulation lab. Emergency Medicine Australasia, 29(1), 83-88.
- Raemer, D. B. (2014). Ignaz semmelweis redux? Simulation in Healthcare, 9(3), 153-155.
- Sunga, K., Sandefur, B., Asirvatham, U., & Cabrera, D. (2016). LIVE. DIE. REPEAT: a novel instructional method incorporating recursive objective-based gameplay in an emergency medicine simulation curriculum. BMJ Simulation and Technology Enhanced Learning, 2(4), 124.
- Vygotsky, L. S. (1978). Mind and society. Cambridge, MA: Harvard University Press.
Chris is an Intensivist and ECMO specialist at the Alfred ICU in Melbourne. He is also a Clinical Adjunct Associate Professor at Monash University. He is a co-founder of the Australia and New Zealand Clinician Educator Network (ANZCEN) and is the Lead for the ANZCEN Clinician Educator Incubator programme. He is on the Board of Directors for the Intensive Care Foundation and is a First Part Examiner for the College of Intensive Care Medicine. He is an internationally recognised Clinician Educator with a passion for helping clinicians learn and for improving the clinical performance of individuals and collectives.
After finishing his medical degree at the University of Auckland, he continued post-graduate training in New Zealand as well as Australia’s Northern Territory, Perth and Melbourne. He has completed fellowship training in both intensive care medicine and emergency medicine, as well as post-graduate training in biochemistry, clinical toxicology, clinical epidemiology, and health professional education.
He is actively involved in using translational simulation to improve patient care and the design of processes and systems at Alfred Health. He coordinates the Alfred ICU’s education and simulation programmes and runs the unit’s education website, INTENSIVE. He created the ‘Critically Ill Airway’ course and teaches on numerous courses around the world. He is one of the founders of the FOAM movement (Free Open-Access Medical education) and is co-creator of litfl.com, the RAGE podcast, the Resuscitology course, and the SMACC conference.
His one great achievement is being the father of three amazing children.
On Twitter, he is @precordialthump.
Dr Ian Summers is an Emergency Physician and Educator and Director of Simulation at Monash Health. He has been Director of Emergency Training at St Vincent’s and helped recruit and set up the emergency team at Casey Hospital as its initial deputy director. You can find him out wandering through the bush with his family, taking photographs of sunsets or wildlife.
Stuart is a practising anaesthetist, simulation educator and researcher with interests in Patient Safety and Human Factors / Ergonomics. He is Honorary Clinical Associate Professor, Medical Education, at the University of Melbourne and Senior Research Fellow, Department of Anaesthesia and Perioperative Medicine, at Monash University.
His research includes investigation of the effects of cognitive aids (checklists and algorithms) on team functioning during emergencies, and of simulation as an educational technique to teach patient safety and improve patient and health worker outcomes. He has been involved in the development of several innovative patient safety courses for both undergraduate and postgraduate students and has been closely associated with the Masters of Perioperative Medicine, for which he co-supervises a unit on Human Factors and Patient Safety (POM5005). He also convenes the Clinical Human Factors Short Course (https://www.clinicalhumanfactors.org.au).
Stuart is also the convenor of the 9th International Clinical Skills Conference in Prato, Italy in 2021 and Founding director of the charity that runs the conference: The International Clinical Skills Foundation Inc.
He is a member of the International Advisory Panel of the ‘Anaesthesia’ journal editorial board and an Associate Editor for the European Society for Simulation (SESAM) journal ‘Advances in Simulation’.