The evaluation process: an overview


Mary P. Bourke, PhD, RN, MSN and Barbara A. Ihrke, PhD, RN


Nursing faculty are responsible for evaluating student learning; course, curriculum, and program outcomes; and their own teaching practices. They are accountable to students, peers, administrators, employers, and society for the effectiveness of the nursing program. The purpose of this chapter is to present an overview of the process by which nursing faculty can evaluate instructional and program outcomes and report results to stakeholders. Terminology is often used interchangeably between the evaluation of classroom instruction and program evaluation. This chapter links to previous and subsequent chapters that cover specific evaluation activities and strategies. It delineates a step-by-step evaluation process, including the use of models; the selection of instruments; data collection procedures; and the means to interpret, report, and use findings. Results can be used to make decisions about improvement in student learning; faculty performance; and course, curriculum, and program quality.




Definition of terms


Many terms describe evaluation and evaluation activities, and they are often used interchangeably. The following definitions are used throughout this chapter.




Evaluation

Evaluation is a means of appraising data or placing a value on data gathered through one or more measurements. Evaluation also involves rendering judgment by pointing out strengths and weaknesses of different features or dimensions. Evaluation is the “implementation form for accountability as well as one of the basic ways of assuring quality” (Kai, 2009, p. 44). Rossi, Lipsey, and Freeman (2004) describe evaluation as judging a performance based on selected outcomes and standards. In education, evaluation assesses data collected through various methods to measure the outcome of the teaching–learning process.




Assessment

Assessment, in the broadest view, refers to processes that provide information about students, faculty, curricula, programs, and institutions to various stakeholders. More specifically, assessment refers to measures of student abilities and changes in knowledge, skills, and attitudes during and after participation in courses and programs (Angelo & Cross, 1993; Davis, 1993; Gates et al., 2002). Assessment data can be obtained to place students in courses, to provide information about learning needs (see Chapter 2), and to determine achievement in individual courses and programs (see Chapters 16 and 25 to 28). Findings are used to improve student learning and teaching (see Chapter 16) and to improve courses and programs.




Program evaluation

Educational program evaluation or program review “can be defined as a systematic operation of varying complexity involving data collection, observations and analyses, and culminating in a value judgement” (Mizikaci, 2006, p. 41). Program reviews are typically conducted by the faculty as a self-study (see Chapter 28) and are undertaken to respond to accreditation reviews by state, education, and professional accrediting bodies (see Chapter 29).



Accreditation

Accreditation is “a voluntary, peer review process that has been a hallmark of quality in American higher education and professional education for decades” (Grumet, 2002, p. 114). This process serves as a mechanism to ensure the quality of educational programs. Accreditation signifies that an institution, school, or program has defined appropriate outcomes, maintains conditions in which they can be met, and is producing acceptable outcomes (Millard, 1994). According to Alstete (2004), accreditation can be viewed as “a positive, active learning exercise” (p. 3). Accreditation occurs following a period of self-study, evaluation, and periodic review and is primarily focused on the mission of the institution and on student outcomes.


Schools of nursing may be accredited by state and regional agencies as well as national nursing organizations. Historically and currently, two organizations are of particular interest in accrediting nursing education programs: the National League for Nursing Accrediting Commission and the Commission on Collegiate Nursing Education. Effective accreditation programs must be simple, relevant, and cost-effective. Regardless of the agency or organization providing accreditation services, nursing faculty must be aware of the standards and participate in the process of evaluation and review. See Chapter 29 for further information about accreditation of nursing programs.



Philosophical approaches to evaluation


A philosophy of evaluation involves the evaluator’s beliefs about evaluation. The philosophy will influence how evaluations are conducted, when evaluations are conducted, what methods are used, and how results are interpreted. A philosophy is reflected in attitudes and behavior.


In nursing education, evaluations or judgments are made about performance (students), program effectiveness (a nursing curriculum or program), instructional media (a textbook, a computer-assisted instruction program), or instruction (course, faculty). Evaluation activities in nursing education are conducted from various perspectives, and these perspectives influence outcomes. Therefore evaluators should be aware of their perspective or orientation and how it relates to the evaluation process.


Several philosophical perspectives tend to influence evaluation. Educators who rely on goals, objectives, and outcomes to guide program, course, or lesson development will likely take an objectives approach to evaluation, in which the merit of an activity or program is indicated largely by how well students meet those objectives. A service orientation toward evaluation emphasizes student learning and includes self-evaluation, thus helping educators make decisions about learners and the teaching–learning process. Although all evaluation involves judgment, the evaluator with a judgment perspective focuses on establishing the worth or merit of the employee, student, product, or program. Others have a research orientation to evaluation and emphasize precision in measurement and statistical analysis to gain a general understanding of why students and programs do or do not succeed; the focus in this perspective is on tools, methods, and designs as they relate to the validity and reliability of instruments. Yet another orientation is the constructivist view, which emphasizes the values of the stakeholders and builds consensus about what needs to be changed. Faculty, in their role as evaluators, typically use a combination of these perspectives, but one is likely dominant. Faculty should therefore be aware of the perspective they bring to evaluation, because that philosophical orientation will guide the process and influence its outcomes.



The evaluation process


Evaluation is a process that involves the following systematic series of actions:


1. Identifying the purpose of the evaluation


2. Identifying a time frame and determining when to evaluate


3. Selecting the evaluator


4. Choosing an evaluation design, framework, or model


5. Selecting an evaluation instrument


6. Collecting data


7. Interpreting the data


8. Reporting and using the findings

The steps can be modified depending on the purpose of the evaluation, what is being evaluated (e.g., students, instruction, program, or system), and the complexity of the units being evaluated.




Identifying the purpose of the evaluation

As in the research process, the first step in the evaluation process is to pose various questions that can be answered by evaluation. These questions may be broad and encompassing, as in program evaluation, or focused and specific, as in classroom assessment (Box 24-1). Regardless of the scope of the evaluation, the purpose or reason for conducting an evaluation should be clear to all involved.




Identifying a time frame

The next step in the evaluation process is to consider when evaluation should occur. Time frames for evaluation can be described as formative or summative.



Formative evaluation

Formative evaluation (or assessment) refers to evaluation taking place during the program or learning activity (Kapborg & Fischbein, 2002). Formative evaluation is conducted while the event to be evaluated is occurring and focuses on identifying progress toward purposes, objectives, or outcomes to improve the activities, course, curriculum, program, or teaching and student learning. Formative evaluation emphasizes the parts instead of the entirety. The aim of formative evaluation “is to monitor learning progress and to provide corrective prescriptions to improve learning” (Gronlund & Waugh, 2009, p. 8).


One advantage of formative evaluation is that the events are recent, thus guarding accuracy and preventing distortion by time. Another major advantage is that the results can be used to improve student performance, the program of instruction, or learning outcomes before the program or course has concluded (Gronlund & Waugh, 2009; Sims, 1992). Disadvantages of formative evaluation include making judgments before the activity (classroom or clinical performance, nursing program) is completed and before its results can be seen. Formative evaluation can also be intrusive and interrupt the flow of the activity being evaluated, and positive formative results can create a false sense of security if the final results are less positive than predicted.

Many techniques are available for formative evaluation of the classroom and program. For example, in the classroom, student learning can be measured at a point in time using the one-minute paper method: students are asked to write about the most important points discussed in class and the concepts that need further clarification. This technique provides valuable insight into the teaching–learning process, and the instructor has an opportunity to clarify information during the next class. For formative evaluation of a program, many schools of nursing use national standardized testing systems such as those from ATI (Assessment Technologies Institute). Each semester, students take a test that identifies their competencies and their placement nationally, which helps determine student progression through key concepts within the curriculum. Weaknesses within the curriculum can be identified using content-specific testing as cohorts progress through the nursing program. Thus formative evaluation provides critical data for the ongoing changes necessary to improve student outcomes.
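To make the idea of screening content-specific results concrete, the following is a minimal sketch in Python. The content areas, cohort scores, and the benchmark of 75 are all hypothetical illustrations; none of these values comes from ATI or from this chapter.

# A minimal sketch of screening formative, content-specific test data for
# curriculum weaknesses. All content areas, scores, and the benchmark below
# are hypothetical.

def flag_weak_areas(cohort_means: dict[str, float], benchmark: float) -> list[str]:
    """Return the content areas whose cohort mean falls below the benchmark."""
    return [area for area, mean in cohort_means.items() if mean < benchmark]

# Hypothetical cohort means by content area for one semester's testing.
fall_cohort = {
    "pharmacology": 68.5,
    "medical-surgical nursing": 81.2,
    "maternal-child nursing": 77.9,
}

# Any area below the (hypothetical) benchmark of 75 warrants curricular review.
print(flag_weak_areas(fall_cohort, benchmark=75.0))  # ['pharmacology']

Run each semester, a screen of this kind surfaces the areas in which a cohort lags so that faculty can adjust instruction before the cohort advances.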



Summative evaluation

Summative evaluation (or assessment), on the other hand, refers to data collected at the end of the activity, instruction, course, or program (Gronlund & Waugh, 2009; Kapborg & Fischbein, 2002; Story et al., 2010). The focus is on the whole event; summative evaluation emphasizes what is or was and the extent to which objectives and outcomes were met for the purposes of accountability, resource allocation, assignment of grades (students) or merit pay or promotion (faculty), and certification (Davis, 1994). Summative evaluation is therefore most useful at the end of a learning module or course and for program or course revision. Summative evaluation of learning outcomes in a course usually results in the assignment of a final grade.


The advantages of performing an evaluation at the end of the activity are that all work has been completed and the findings reflect the full results of that work. The major disadvantage of summative evaluation is that nothing can be done to alter the results.



Determining when to evaluate

The evaluator must also weigh each evaluation event and determine when evaluation is most appropriate. Typically both formative and summative evaluations are appropriate, and each lends its respective strengths to the evaluation plan.


In determining when to evaluate, the evaluator must also consider the frequency of evaluation. Evaluation can be time consuming, but frequent evaluation is necessary in many situations. Frequent evaluations are important when the learning process is complex or unfamiliar, when anticipating potential problems is helpful, and when the risk of failure is high. Finally, important decisions require frequent evaluations (Box 24-2).




Selecting the evaluator

An important element in the evaluation process is the evaluator. Selection of an evaluator involves deciding who should be involved in the evaluation process and whether the evaluator should be chosen from the “inside” (internal evaluator) or from the “outside” (external evaluator). Both have merits.



Internal evaluators

Internal evaluators are those directly involved with the learning, course, or program to be evaluated, such as the students, faculty, or nursing staff. Many individuals (stakeholders) have a vested interest in the evaluation process and could be selected to participate. There are advantages and disadvantages associated with internal evaluators, and often several evaluators are helpful to obtain the most accurate data.


Advantages of using internal evaluators include their familiarity with the context of the evaluation, their experience with the standards, cost effectiveness, and the potential for less obtrusive evaluation. Additionally, the findings of evaluation can be acted on quickly because the results are known immediately. Disadvantages of using internal evaluators include potential bias, internal control of the evaluation, and reluctance to share controversial findings. When internal evaluators are chosen and employed, it is important to note their position in the organization and their responsibility and reporting lines.



External evaluators

External evaluators are those not directly involved in the events being evaluated. They are often employed as consultants. State, regional, and national accrediting bodies are other examples of external evaluators. The advantages of using external evaluators are that they bring no internal bias, are not involved in organizational politics, may be very experienced in a particular type of evaluation, and have no stake in the results. Disadvantages of using external evaluators include expense, unfamiliarity with the context, barriers of time, and potential travel constraints. Because evaluators are so critical to the evaluation process, faculty should select evaluators carefully. Box 24-3 lists questions to ask when selecting an evaluator.




Choosing an evaluation design, framework, or model

This step of the evaluation process involves selecting or developing an evaluation model. An evaluation model represents the ways the variables, items, or events to be evaluated are arranged, observed, or manipulated to answer the evaluation question. A model serves to clarify the relationship of the variables to be evaluated and provides a systematic plan or framework for the evaluation.


Evaluation models for nursing education may be found in the nursing literature or may be developed by nurse educators for a specific use. Although evaluation models have been adapted from those used in education (Guba & Lincoln, 1989; Madaus, Scriven, & Stufflebeam, 1988; Scriven, 1972; Stake, 1967) and business, nursing education evaluation models closely reflect the aspects of nursing education and practice that are being evaluated (Billings, Connors, & Skiba, 2001; Germain, Deatrick, Hagopian, & Whitney, 1994; Kapborg & Fischbein, 2002).


Using an evaluation model has several advantages. A model makes variables explicit and often reflects a priority about which variables should be evaluated first or most often. A model also gives structure that is visible to all concerned; the relationships of parts are evident. Using an evaluation model helps focus evaluation. It keeps the evaluation efforts on target: those elements that are to be evaluated are included; those not to be evaluated are excluded. Finally, a model can be tested and validated.


A model should be selected according to the demands of the evaluation question, the context, and the needs of the stakeholders. Several models are used in nursing evaluation activities; they are described briefly here. For detailed information and an example of the use of one model, see Chapter 28.



Theory-driven model

Chen (2004) supports the use of a theory-driven model of evaluation, which provides “information on not only the performance or merit of a program but on how and why the program achieves such a result” (p. 415). According to Rosas (2005), “a critical requirement of theory-driven evaluation is the development and articulation of a clear theory” (p. 390). Thus the evaluation process flows from a theory-based evaluation of program curricula or instructional methods. The theory directs the evaluation process, from identifying the variables to be measured through the final report (Stufflebeam, 2001). Various theories or models provide the structure for the evaluation process; theories can be grounded in the social sciences, in nursing, or in business.



Using logic models

McLaughlin and Jordan (as cited in Wholey, Hatry, & Newcomer, 2004) recommend using a “logic model” approach as an advance organizer before an evaluation is conducted. The logic model is a tool for conceptualizing, planning, and communicating with others about a program. McLaughlin and Jordan describe one of the first steps in the model as the development of a flowchart that clarifies and summarizes key elements of a program: resources and other program inputs, program activities, and the intermediate outcomes as well as the end outcomes that the program strives to achieve. The flowchart can also show assumed cause-and-effect linkages among elements in the model. Sanders and Sullins (2006) explain that, to clarify a program, the logic model describes inputs (the fiscal and human resources needed to run the program, as well as equipment, books, and materials), activities (what you are doing), outputs (student demographics, contact hours, assignments, tests), initial outcomes (changes in students from the activities), intermediate outcomes (longer-term student outcomes), and finally ultimate outcomes (the vision for students who have completed the program). This model is extremely helpful when designing a program evaluation.
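As an illustration only, the logic model categories that Sanders and Sullins describe can be laid out as a simple data structure. The sketch below is in Python; the example program and every entry in it are hypothetical and are not drawn from the cited authors.

# A minimal sketch of the logic model categories summarized above,
# expressed as a data structure. All example entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list[str] = field(default_factory=list)                 # resources: fiscal, human, equipment, books, materials
    activities: list[str] = field(default_factory=list)             # what the program is doing
    outputs: list[str] = field(default_factory=list)                # demographics, contact hours, assignments, tests
    initial_outcomes: list[str] = field(default_factory=list)       # changes in students from the activities
    intermediate_outcomes: list[str] = field(default_factory=list)  # longer-term student outcomes
    ultimate_outcomes: list[str] = field(default_factory=list)      # vision for students completing the program

# A hypothetical weekend nursing program expressed as a logic model.
weekend_program = LogicModel(
    inputs=["two faculty positions", "simulation laboratory", "course materials"],
    activities=["weekend lectures", "clinical rotations"],
    outputs=["40 enrolled students", "96 contact hours per course"],
    initial_outcomes=["improved physical assessment skills"],
    intermediate_outcomes=["achievement of clinical competencies"],
    ultimate_outcomes=["graduates practicing as safe, licensed nurses"],
)
print(weekend_program.ultimate_outcomes)

Writing the model down in this form makes the assumed chain from resources to end outcomes explicit, which is the point of the advance organizer.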



Decision-oriented models: CIPP

The concepts of the CIPP model (context, input, process, and product) facilitate delineating, obtaining, and providing useful information for judging decision alternatives (Stufflebeam, 1971; Stufflebeam & Webster, 1994). Context evaluation identifies the target population and assesses its needs; for example, the target population of first-year college students is identified and its needs are assessed through surveys, interviews, and focus groups. Input evaluation identifies and assesses system capabilities, alternative program strategies, and procedural designs for implementing the strategies. In this case, a college would identify and assess its capacity to start a weekend college program, and a plan of action would be designed to implement the new program. Process evaluation detects defects in the design or implementation of the procedure. Product evaluation collects descriptions and analyses of outcomes and relates them to the objectives and to the context, input, and process information, resulting in the interpretation of results.


The CIPP model measures the weaknesses and strengths of a program, identifies the needs of the target population, identifies options, and provides evidence of beneficial results or lack thereof. In this model, evaluation is performed in the service of decision making; for this reason it should provide information useful to decision makers. Evaluation is also a cyclic, continuing process and therefore must be implemented through a systematic program.
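One way to see the four CIPP components at a glance is to pair each with the kind of question it answers. The sketch below does this in Python, reusing the weekend college example from the text; the wording of the questions is illustrative rather than taken from Stufflebeam.

# Each CIPP component paired with the kind of question it answers, using the
# weekend college example from the text. Question wording is illustrative.
cipp_questions = {
    "context": "Who are the first-year students, and what needs do surveys, interviews, and focus groups reveal?",
    "input": "Does the college have the capacity to start a weekend program, and what plan of action would implement it?",
    "process": "Are there defects in the design or implementation of the program as it runs?",
    "product": "What outcomes occurred, and how do they relate to the objectives and to the context, input, and process information?",
}

for component, question in cipp_questions.items():
    print(f"{component.upper()}: {question}")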


Singh (2004) stated that there were “4 key factors in successfully conducting a program evaluation that is based on the CIPP model” (p. 2). The key factors are (1) “create an evaluation matrix,” (2) establish a group to direct the evaluation, (3) “determine who will conduct the evaluation,” and (4) make certain that the evaluators “understand and adhere to the program evaluation standards of utility, feasibility, propriety, and accuracy” (p. 2). The article included several useful tables of questions, data sources, and collection methodologies, thus providing clarity and a systematic design for the use of CIPP for evaluation of nursing programs.



Client-centered (or responsive evaluation) models

Stake’s (1967) countenance model focuses on the goals and observed effects of the program being evaluated in terms of antecedents, transactions, and outcomes (Stufflebeam & Webster, 1994). Antecedents are existing conditions that may affect the outcome; for example, students’ prior college experience will affect freshman scores. Transactions are all educational experiences and interactions, and outcomes are the abilities, achievements, attitudes, and aspirations of students that result from the educational experience. The purpose of this model is to promote an understanding of activities in a given setting. Case studies and responsive evaluations (Stake, 1967) elicit information about the program from those involved in this action-research approach to educational evaluation. In this model, evaluators “interact continuously with, and respond to, the evaluative needs of the various clients, as well as other stakeholders” (Stufflebeam, 2001, p. 63).


The Cybernetic Model by Veney and Kaluzny (1991; as cited in Jones & Beck, 1996) is a problem-oriented model focused on immediate feedback within a system: feedback results are used to make changes to the inputs, outputs, processes, or desired goals.




Naturalistic, constructivist, or fourth-generation evaluation models

Fourth-generation evaluation is a “sociopolitical process that is simultaneously diagnostic, change-oriented, and educative for all the parties involved” (Lincoln & Guba, 1985, p. 141). Fourth-generation evaluation takes into account the values, concerns, and issues of the stakeholders involved in the evaluation (students, faculty, clients, and administrators). The result is a construction of and consensus about needed improvements and changes (Clendon, 2003). Both the evaluator and the stakeholders are responsible for change.


Fourth-generation evaluation incorporates techniques of evaluator observation, interviews, and participant evaluations to elicit the views, meanings, and understandings of the stakeholders. As a result, the evaluation becomes not the opinion or judgment of the evaluator alone but a shared effort toward meaning, understanding, and consensus among all involved in the process. This responsive evaluation informs and empowers the stakeholders for reflection and change. Haleem et al. (2010) determined that a constructivist evaluation approach improved their NCLEX-RN pass rate by 40%. They created a working retreat that involved all faculty in the evaluation process; the goal was to evaluate the entire nursing program and then work on evidence-based solutions to identified problems, thus improving program outcomes. As a group, the faculty evaluated courses, objectives, instruction, curriculum, NCLEX scores, credit loads, sequencing of content, overlap in content, and so on. The entire process led to an informed, involved faculty. As a result, faculty were stakeholders in the evaluation process and thus collaborators in change. Lessons learned were expressed as “take the time to evaluate the program in a meaningful way, work as a team, and listen to other faculty and the students. These lessons made the difference in promoting student success” (p. 121).


Specific changes that resulted in improved scores involved a multifaceted approach. Changes instituted by Haleem et al. (2010) to improve student learning included the following:



1. Ninety percent of the course grade for each clinical course was based on objective testing.


2. Only application- and analysis-level test items were used on nursing exams.


3. Homework that included case studies was assigned in each clinical course.


4. All students were required to complete 700 NCLEX-RN practice questions.


5. A workshop was developed that focused on a “good thinking” approach to questions.


6. Comprehensive standardized examinations were used in each clinical course and at the end of the program, with the results computed as 10% of the student’s grade (see the sketch following this list).


7. Students were required to take an NCLEX-RN review course prior to the NCLEX-RN examination.


8. A new course entitled Boot Camp was developed: a 10-week course meeting 16 hours per week and focusing on content reviews, testing, and remediation, with a minimum of 3000 review questions required of each student.


9. Students were allowed to repeat only one nursing course; a student who received a C in two nursing or cognate courses was dismissed from the program.
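Items 1 and 6 together imply a weighted grading scheme. The following minimal sketch in Python shows that arithmetic, under the assumption that the two weights apply to the same clinical course; the student scores are hypothetical.

# A minimal sketch of the weighting implied by items 1 and 6 above: 90% of a
# clinical course grade from objective testing and 10% from the comprehensive
# standardized examination. The pairing of the two weights in one course, and
# the scores below, are assumptions for illustration.

def course_grade(objective_test_average: float, standardized_exam_score: float) -> float:
    """Weighted course grade: 90% objective testing, 10% standardized exam."""
    return 0.90 * objective_test_average + 0.10 * standardized_exam_score

print(round(course_grade(objective_test_average=84.0, standardized_exam_score=78.0), 1))  # 83.4

For example, a student averaging 84 on objective course tests with a standardized examination score of 78 would earn 0.90 × 84 + 0.10 × 78 = 83.4 for the course.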
