15


SIMULATION AND OBJECTIVE STRUCTURED CLINICAL EXAMINATION FOR ASSESSMENT



Simulation is used widely for instruction in nursing. In a simulation, students can gain knowledge about patient care; develop competencies in communication and teamwork; use clinical judgment and reflect on actions taken in the scenario; and develop psychomotor and clinical skills. Simulation also is a strategy for assessment, including high-stakes evaluation, if nurse educators adhere to guidelines to ensure validity and reliability. Some simulations incorporate standardized patients: actors who portray a patient with a specific diagnosis or condition. With standardized patients, students can be evaluated on their history and physical examination skills, communication strategies, and other competencies. Another method for evaluating the skills and clinical competencies of nursing students is the objective structured clinical examination (OSCE). In an OSCE, students rotate through stations where they complete an activity or perform a skill, which then can be evaluated. This chapter examines these methods for assessing the clinical competencies of students.


Simulations


Simulation allows learners to experience a clinical situation without the risks. With simulations, students can develop their psychomotor and technological skills and practice those skills to maintain their competence. Simulations, particularly those involving high-fidelity simulators, enable students to think through clinical situations and make independent decisions. With high-fidelity simulation and complex scenarios, students can assess a patient and clinical situation, analyze data, make decisions about priority problems and actions to take, implement those interventions, and evaluate outcomes. High-fidelity simulation can be used to guide students’ development of clinical judgment skills, especially when combined with high-quality debriefing following the simulation (Chmil, Turk, Adamson, & Larew, 2015; Fey & Jenkins, 2015; Klenke-Borgmann, 2019; Lasater, 2007, 2011).


Another outcome of instruction with simulations is the opportunity for deliberate practice of skills. Simulations allow students to practice skills, both cognitive and motor, until competent and to receive immediate feedback on performance (Kardong-Edgren, Oermann, & Rizzolo, 2019; Oermann, Molloy, & Vaughn, 2014; Oermann, Muckler, & Morgan, 2016; Owen, Garbett, Coburn, & Amar, 2017; Reed, Pirotte, et al., 2016; Sullivan, 2015). Through simulations, students can develop their communication and teamwork skills and apply quality and safety guidelines to practice. Increasingly, simulations are used to provide experiences for students in working with other healthcare profession students and providers (Feather, Carr, Reising, & Garletts, 2016; Furseth, Taylor, & Kim, 2016; Horsley et al., 2016; Lee, Jang, & Park, 2016; Reed, Horsley, et al., 2016; Rutherford-Hemming & Lioce, 2018).


Given the limited time for clinical practice in many programs and the complexity of skills to be developed by students, simulations are important as a clinical teaching strategy. Simulations can ease the shortage of clinical experiences caused by clinical agency restrictions and fewer available practice hours in a curriculum. A study by the National Council of State Boards of Nursing suggested that simulation may be used as a replacement for some clinical experiences (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014).


Guidelines for Simulation-Based Assessment


Simulations not only are effective for instruction in nursing, but they also are useful for assessment. The availability of high-fidelity simulators has expanded opportunities for performance evaluation (Mitchell et al., 2018). A simulation can be developed for students to demonstrate procedures and technologies, conduct assessments, analyze data presented in a scenario, decide on priority actions to take in a situation, and evaluate the effects of their decisions. Each of these outcomes can be assessed for feedback to students or for verifying students’ competencies for high-stakes evaluations. In a high-stakes evaluation, the student needs to demonstrate competency in order to pass the course or graduate from the nursing program, or for some other decision with significant consequences.


In formative assessment using simulation, the teacher, referred to as the facilitator, shares observations about the student’s performance and other behaviors with the student. The goal of formative assessment is to provide feedback to students individually and to the team, if relevant, to guide further development of competencies. This feedback is an essential component of the facilitator’s role in the simulation. In contrast, the goal of summative assessment is to determine students’ competence. Summative assessment verifies that students can perform the required clinical competencies.


There are different types of simulations that can be used for assessment. Case scenarios that students analyze can be presented in paper-and-pencil format or through multimedia. Many computer simulations are available for use in assessment. Simulations can be developed with models and manikins for evaluating skills and procedures, and for evaluation with standardized patients. With high-fidelity simulation, teachers can identify outcomes and clinical competencies to be assessed, present various clinical events and scenarios for students to analyze and then act on, and evaluate students’ decisions and performance in these scenarios. Prebriefing, the introductory phase of a simulation, prepares students for learning in the simulation. Page-Cutrara and Turk (2017) found that a structured prebriefing (with concept-mapping activities and guided reflection) improved students’ competency performance, clinical judgment, and prebriefing experience. Following the simulation, in the debriefing session, the students as a group can analyze the scenario and critique their actions and decisions, with facilitators (and standardized patients) providing feedback. The debriefing also promotes students’ development of clinical judgment skills (Dube et al., 2019; Fey & Jenkins, 2015; Klenke-Borgmann, 2019; Lasater, 2011; Victor, 2017). Many nursing education programs have simulation laboratories with high-fidelity and other types of simulators, clinically equipped examination rooms, manikins and models for skill practice and assessment, areas for standardized patients, and a wide range of multimedia that facilitate performance evaluations. The rooms can be equipped with two-way mirrors, video cameras, microphones, and other media for observation and performance rating by faculty and others.


In simulation-based assessment, the first task is to identify the objectives of the assessment and the knowledge and competencies to be evaluated. This is important because these guide development of the simulation and writing of the scenario. If the competencies are skill oriented, a high-fidelity simulator may not be necessary; a model or partial task trainer, which allows students to perform a specific task such as venipuncture, can be used for assessment. In contrast, if the assessment is to determine students’ ability to analyze a complex clinical situation, arrive at clinical judgments about the best approaches to take, demonstrate a range of clinical competencies, and communicate effectively, assessment with a high-fidelity patient simulator would be appropriate.


Once the objectives of the assessment and the knowledge and skills to be evaluated are identified, the teacher can plan the specifics of the assessment. The assessment processes need to be defensible if high-stakes decisions will be made based on the assessment (Tavares et al., 2018). The simulation needs to focus on the intended purpose and require that students use the intended knowledge and competencies. This is a key principle for the simulation to be valid: Validity is the extent to which the simulation measures what it was intended to measure (Boulet & Murray, 2010; Oermann, Kardong-Edgren, & Rizzolo, 2016a; O’Leary, 2015). The teacher should have colleagues and other experts review the simulation to ensure that it is appropriate for the objectives, that students would need to apply their knowledge and skills in it, and that it represents a realistic clinical scenario. This review by others helps establish the validity of the simulation. For high-stakes decisions, the simulation can be piloted with different levels of students in the nursing program. The performance of senior-level or graduate nursing students in a simulation should differ from that of beginning students. Piloting the simulation with different levels of students also may reveal issues to be resolved before using it for an assessment.
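
For illustration only, the sketch below shows one way such a pilot comparison might be analyzed. The scores, group sizes, and choice of an independent-samples t-test are assumptions made for this example, not a procedure prescribed by the chapter.

```python
# Hypothetical known-groups check: if the simulation measures what it claims
# to, senior students should outperform beginners on the same scenario.
# Scores and group sizes below are invented for illustration.
from scipy import stats

beginner_scores = [14, 15, 13, 16, 15, 14, 12, 15]  # invented total scores
senior_scores = [19, 20, 18, 21, 19, 22, 20, 19]

t_stat, p_value = stats.ttest_ind(senior_scores, beginner_scores)
# A significant difference in the expected direction supports (but does not
# by itself prove) the validity of the simulation.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```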


Some simulations for assessment are short, lasting only a few minutes, for example, when evaluating performance of skills. However, when evaluating competencies such as communication ability and teamwork, the simulation may last 30 minutes (Mudumbai, Gaba, Boulet, Howard, & Davies, 2012). When the aim is to provide feedback on or verify students’ clinical judgment, their ability to manage a complex scenario, and other higher-level skills, longer simulations will likely be needed.


A key point in evaluating student performance in simulation for high-stakes and other summative decisions is the need for a tool that produces valid and reliable results. An example of a rating scale used for evaluating students in a simulation, with demonstrated validity and reliability, is the Creighton Competency Evaluation Instrument (C-CEI; Todd, Manz, Hawkins, Parsons, & Hercinger, 2008). The C-CEI includes 22 nursing behaviors that can be observed and evaluated in a simulation. These behaviors are grouped into four categories:



    Assessment (e.g., collection of pertinent information about the patient and the environment)


    Communication (e.g., with the simulated patient and team members, documentation, responses to abnormal findings)


    Clinical judgment (e.g., interpretation of vital signs and laboratory findings, performance and evaluation of interventions)


    Patient safety (e.g., patient identifiers, medications, technical performance)


The C-CEI is available at https://nursing.creighton.edu/academics/competency-evaluation-instrument.
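
To make this structure concrete, the following is a minimal sketch of how a category-based evaluation tool of this kind might be represented for scoring. The category names follow the C-CEI as described above, but the behaviors listed are only the parenthetical examples given earlier, and the scoring function is hypothetical; the full 22-item instrument should be obtained from the URL above.

```python
# Simplified, hypothetical representation of a category-based evaluation
# tool. Category names follow the C-CEI; the behaviors are only the
# chapter's examples, not the full 22-item instrument.
CATEGORIES = {
    "Assessment": [
        "Collects pertinent information about the patient and environment",
    ],
    "Communication": [
        "Communicates with the simulated patient and team members",
        "Documents care",
        "Responds to abnormal findings",
    ],
    "Clinical judgment": [
        "Interprets vital signs and laboratory findings",
        "Performs and evaluates interventions",
    ],
    "Patient safety": [
        "Uses patient identifiers",
        "Administers medications safely",
        "Demonstrates safe technical performance",
    ],
}

def score_by_category(ratings: dict[str, bool]) -> dict[str, float]:
    """Proportion of observed behaviors rated competent in each category."""
    summary = {}
    for category, behaviors in CATEGORIES.items():
        observed = [ratings[b] for b in behaviors if b in ratings]
        summary[category] = sum(observed) / len(observed) if observed else 0.0
    return summary
```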


Even with a validated tool, however, evaluators using it may not interpret the behaviors similarly or score them as intended. With high-stakes evaluation, the evaluators need a shared mental model of the performance expected in the simulation and need to agree on the specific behaviors that would represent successful performance of the competencies (Oermann, Kardong-Edgren, & Rizzolo, 2016b). In one study, evaluators had extensive training on using the C-CEI for observing and rating performance in a simulation for high-stakes evaluation (Kardong-Edgren, Oermann, Rizzolo, & Odom-Maryon, 2017). The training extended over a period of time and included refreshers. Nine of 11 raters developed a shared mental model for scoring and were consistent in their ratings of student performance. However, two raters, even with this extensive training, were outliers: They were inconsistent both with the other evaluators and in their own scoring (intrarater reliability). The findings emphasize the importance of training faculty members, or whoever is rating performance in the simulation, on the tool and the behaviors that would indicate successful performance of the competencies, and of ensuring that all evaluators are competent to judge performance.


Tools for high-stakes evaluation with simulation can provide for analytic or holistic scoring, similar to the scoring of essay items and written assignments discussed in earlier chapters. With analytic scoring, the evaluator observes student performance and typically rates each component of the performance. An example of analytic scoring is use of a skills checklist: The evaluator observes each step of the skill, verifying that it was performed correctly. Holistic scoring, in contrast, allows the evaluator to observe multiple behaviors in a simulation and rate the performance as a whole. Rating scales are examples of holistic scoring; these tools provide a means of rating a range of competencies, including some that are complex. The C-CEI is an example of a holistic tool used for assessing performance in a simulation. For some objectives, knowledge, and competencies to be assessed, multiple tools would be appropriate: some to rate skills in a simulation and others to provide a global rating of performance (Oermann et al., 2016a).
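
The difference between the two approaches can be illustrated with a brief sketch. The venipuncture checklist steps and the 4-point global scale below are invented for the example, not drawn from a published tool.

```python
# Analytic scoring: each step of a skill is rated individually (a checklist).
# The venipuncture steps below are invented for illustration.
venipuncture_checklist = {
    "Verifies patient identifiers": True,
    "Performs hand hygiene": True,
    "Applies tourniquet correctly": False,
    "Selects an appropriate site": True,
    "Maintains aseptic technique": True,
}
steps_passed = sum(venipuncture_checklist.values())
analytic_score = steps_passed / len(venipuncture_checklist)

# Holistic scoring: the evaluator rates the performance as a whole, here on
# a hypothetical 1 (not competent) to 4 (fully competent) global scale.
holistic_rating = 3

print(f"Analytic: {steps_passed}/{len(venipuncture_checklist)} steps "
      f"({analytic_score:.0%}); holistic: {holistic_rating}/4")
```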


If the assessment is for high-stakes decisions, more than one evaluator should observe and rate performance. As discussed in earlier chapters, teachers may focus on different aspects of a performance. With more than one evaluator, the ratings can be combined, providing a fairer assessment for the student. Assessment for high-stakes decisions should be done by teachers who have not worked previously with the students. This avoids the chance of bias, positive or negative, when observing the performance. If the performance is video recorded, the evaluators can rate the performance independently to avoid influencing each other’s scores.


Reliability is critical for a high-stakes assessment. With interrater reliability, different evaluators are consistent in their ratings of the performance and in their decisions about whether students are competent. There also should be intrarater reliability: If the evaluators observed the student a second time, those ratings would be similar to the first observation’s. It is generally easier to obtain reliability with an assessment of skills and procedures using a checklist than with a rating scale. The competencies on a rating scale are broader, for example, “communicates effectively with providers,” which allows for different interpretations and judgments (Oermann et al., 2016a).
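
As a hedged illustration, the sketch below quantifies both forms of reliability on invented pass/fail ratings, using percent agreement and Cohen’s kappa; kappa is one common chance-corrected statistic, named here as an example rather than a requirement of the chapter.

```python
# Two common reliability checks on hypothetical pass/fail ratings
# (1 = competent, 0 = not competent) for ten students.

def percent_agreement(a: list[int], b: list[int]) -> float:
    """Proportion of students on whom two sets of ratings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Chance-corrected agreement between two raters (binary ratings only)."""
    n = len(a)
    observed = percent_agreement(a, b)
    # Expected chance agreement, from each rater's marginal pass rate
    p_a, p_b = sum(a) / n, sum(b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

rater1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]        # interrater: rater 1 vs. rater 2
rater1_repeat = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # intrarater: rater 1, second viewing

print(f"Interrater agreement: {percent_agreement(rater1, rater2):.0%}, "
      f"kappa = {cohens_kappa(rater1, rater2):.2f}")
print(f"Intrarater agreement: {percent_agreement(rater1, rater1_repeat):.0%}")
```

Percent agreement alone can look high by chance when most students pass, which is why a chance-corrected statistic such as kappa is often reported alongside it.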


Evaluators must be trained in assessment and in use of the tool. This is critical to establishing reliability. Everyone involved in the assessment needs to be aware of the objectives and the knowledge and competencies to be evaluated, and they need a shared understanding of the meaning of each item on the tool and of what competent performance would “look like.” The observations and interpretations of performance by the evaluators must be accurate. Errors that can occur with rating scales were presented in Chapter 14, Clinical Evaluation Methods. These also apply to simulation-based assessments. One type of error occurs when the evaluator rates performance only at the midpoint of the rating scale; this is an error of central tendency. Or evaluators may be too lenient, rating student performance in the simulation at the high end of the scale, or too severe, rating it at the low end, regardless of the quality of the performance. If the rating form has too many specific competencies on it, or a checklist has too many discrete steps, a logical error might occur: The same rating is given for related items on the tool without the evaluator observing each one. As in clinical evaluation and grading of essay items and written assignments, if evaluators know the student being observed in the simulation, they may have an impression of the student that influences their current evaluation of the performance (a halo effect). This is why one recommendation for high-stakes evaluations using simulation is that evaluators should not know the student being observed.
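
For illustration, simple descriptive statistics computed during rater training can flag some of these errors. The raters, the 5-point scale, and the scores below are all invented for this example.

```python
# Score distributions per rater on a hypothetical 1-5 scale, ten students each.
from statistics import mean, stdev

ratings = {
    "Rater A": [3, 3, 3, 3, 2, 3, 3, 4, 3, 3],  # clusters at midpoint: central tendency
    "Rater B": [5, 5, 4, 5, 5, 5, 4, 5, 5, 5],  # clusters high: leniency
    "Rater C": [1, 2, 4, 3, 5, 2, 4, 1, 5, 3],  # uses the full scale
}

for rater, scores in ratings.items():
    print(f"{rater}: mean = {mean(scores):.1f}, SD = {stdev(scores):.2f}")
# A mean near the scale midpoint with a very small SD suggests central
# tendency; a high mean with a small SD suggests leniency. Such patterns
# warrant discussion during rater training, not automatic exclusion.
```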


During the training, evaluators should discuss the tool and come to consensus about the meaning of each competency and the performance expected in the simulation. This is critical for developing a shared interpretation (a mental model) of what competent performance would look like. Evaluators should practice using the tool to rate performance, for example, rating the performance of a student in a video recording or on YouTube, and discuss their ratings with each other. When only one evaluator is used for the assessment, the evaluator should practice using the tool and discuss ratings with colleagues to ensure similar interpretations of the competencies and observations. Exhibit 15.1 provides a summary of these key steps in using simulation for assessment.


 
