


CHAPTER 16






Program Evaluation


Sarah B. Keating







OBJECTIVES






Upon completion of Chapter 16, the reader will be able to:



1.  Analyze common definitions, concepts, and theories of quality assurance and program evaluation


2.  Analyze several models of evaluation for their utility in nursing education


3.  Compare research to program evaluation processes


4.  Justify the rationale for strategic planning and developing a master plan of evaluation for educational programs


5.  Compare the roles of administrators and faculty in program evaluation


6.  Apply the guidelines and major components of a master plan of evaluation to a nursing education program







OVERVIEW






Chapter 16 reviews definitions, concepts, and theories related to evaluation and quality assurance as they apply to nursing education. Conceptual models of evaluation; utilization of standards, criteria, and benchmarks; comparison of evaluation research and program evaluation; and types of program evaluation and their purposes are included. The chapter continues with a discussion of strategic planning, the roles of administrators and faculty in evaluation, and the development of master plans of evaluation. As discussed in the section overview, educational evaluation assesses the program for its quality, currency, relevance, projections into the future, and the need for possible revisions in light of these factors. While the administration usually assumes leadership for strategic planning, faculty becomes part of the process, especially in responding to the information provided by the evaluation of the curriculum and to the future plans for the institution. A master plan of evaluation provides the information necessary for curriculum evaluation and revision, if indicated, and for program or institutional strategic planning.


COMMON DEFINITIONS, CONCEPTS, AND THEORIES RELATED TO EVALUATION AND QUALITY ASSURANCE


While many of the terms, concepts, and theories of educational evaluation originated in business models, they have been adapted to education, especially in light of the emphasis on outcomes. The following definitions are commonly used terms in evaluation as they apply to nursing education. To initiate the discussion, the first term to consider is evaluation as compared to assessment. Evaluation is a process by which information about an entity is gathered to determine its worth. It differs from assessment in that the end product of evaluation is a judgment of worth, while assessment is a process that gathers information and results in a conclusion, such as a nursing diagnosis or problem identification; it ends not with a judgment but with a conclusion. Weiner (2009) provides an overview of assessment processes and purposes in academe with descriptions of the roles of faculty and administrators in the establishment of a “culture of assessment.” He lists some of the curricular and co-curricular activities that need to be part of assessment and notes that they must be included in the budgeting process. Student learning outcomes in courses and at the end of the program are essential elements in assessment, and he provides samples of measurable student outcomes.


Quality is a term that takes on many meanings depending upon the context in which it is used. For the purposes of this chapter and in the interest of simplicity, the third definition of quality in Merriam-Webster’s Online Dictionary (2014) is used: “a high level of value or excellence.” Examples of the measures of quality include the standards by which an entity is measured, comparison to other like entities, and consumer expectations. Quality control is “an aggregate of activities designed to insure adequate quality in … products.” Quality assurance is “the systematic monitoring and evaluation of the various aspects of a project, service, or facility to ensure that standards of quality are being met” (Merriam-Webster). For the purposes of the evaluation of nursing education and this textbook, total quality management is defined as an educational program’s commitment to and strategies for collecting comprehensive information on the program’s effectiveness in meeting its goals and the management of its findings to ensure the continued quality of the program.


Formative and summative evaluation are terms used frequently when evaluating educational programs. The classic and still-used definitions of the terms were developed by Scriven (1996). He describes formative evaluation as “intended–by the evaluator–as a basis for improvement” (p. 4). For example, in nursing, the faculty compares students’ grades in prerequisites to their grades in nursing courses to determine if certain levels of achievement in prerequisites influence grades in nursing. Scriven describes summative evaluation as a holistic approach to the assessment of a program, one that uses results from the formative evaluation. Continuing with the examples from nursing, faculty can evaluate the development of critical thinking skills in graduates as a product of the educational program. In this instance, these skills would need to be measured both before and after the program to determine an increase in the skills. Both formative and summative types of evaluation “involve efforts to determine merit or worth, etc.” (Scriven, 1996, p. 6). Scriven points out that summative evaluation can serve as formative evaluation. For example, if a nursing program finds that graduates’ clinical decision-making skills are weak (summative evaluation), it can use that information to analyze the program (formative evaluation) for the strategies utilized to promote these skills throughout the program and make improvements as necessary.


Additional terms commonly used in evaluation are goal-based evaluation and goal-free evaluation. Scriven (1974) described goal-based program evaluation as that which focuses only on the examination of program goals and intended outcomes, while an alternative method, goal-free evaluation, examines the actual effects of the program. These effects include not only the intended effects, but also unintended effects, side effects, or secondary effects. An unintended effect in nursing might be an increase in the applicant pool owing to the community’s interactions with students in a program-sponsored, nurse-managed clinic. While this was not a stated goal of the program, it was a positive, unintended outcome.


CONCEPTUAL MODELS OF EVALUATION


Conceptual Models


For years, nursing education programs used many of the models of evaluation developed in health care and education. Examples were Donabedian’s (1996) Structure, Process, and Outcome model for health care evaluation and Stufflebeam’s (1971) Context, Input, Process, and Product (CIPP) educational model. Some of these models continue to serve nursing well, but as nursing develops the uniqueness of the discipline, it is using its own models for evaluation. Most of the models are based on accreditation or professional organizations’ standards, essentials, or criteria to evaluate outcomes. For example, Kalb (2009) describes using the Three C’s Model, that is, context, content, and conduct, for a comprehensive evaluation of the curriculum and program. Kalb’s school of nursing used the model to integrate its three nursing programs (associate degree in nursing [ADN], bachelor of science in nursing [BSN], and master of science in nursing [MSN]) according to the National League for Nursing Accrediting Commission’s (NLNAC’s) (2008) accreditation standards, and there were plans to use it for developing the doctor of nursing practice (DNP) program as well. In addition to the accreditation standards, the evaluation processes included the organizational structure, the curriculum, courses, faculty, staff, students, and the American Nurses Association’s (ANA, 2004) professional standards.


DeSilets (2010) describes the use of a hierarchical model, the Roberta Straessle Abruzzese (RSA) model, which moves evaluation from the simple to the complex and assesses the processes, content, outcomes, impact, and, finally, total program evaluation. The model is comprehensive, and the author describes ways in which data are collected, analyzed, and used for programmatic decisions.


Horne and Sandmann (2012) conducted an integrative review of the literature to ascertain if graduate nursing programs evaluate their effectiveness, including achievement of outcomes, cost-effectiveness, student and faculty satisfaction, decision making based on evaluation outcomes, and measurement of program quality. Prior to reporting their findings, they define evaluation and its various processes and describe some of the models of evaluation reported in the articles they reviewed. They found a paucity of articles related to program evaluation but identified a few helpful ideas for measuring program quality and effectiveness in nursing.


Pross (2010) describes the Promoting Excellence in Nursing Education (PENE) model for program evaluation, which centers on the achievement of excellence through three major factors: (a) visionary, caring leadership; (b) an expert faculty; and (c) a dynamic curriculum. She goes on to explain the factors in detail, describing the dynamic curriculum as one that not only meets but exceeds standards, criteria, and regulations. She emphasizes the need for measurable program outcomes. For the dynamic curriculum to succeed, it must have visionary leadership to reach its goals and an expert faculty to continually improve performance, participate in creative and innovative scholarship, and provide exemplary service.


Benchmarking


Programs can set benchmarks to measure their own success and standards of excellence, or compare themselves to similar institutions. Benchmarks can be used in competition with other programs for recruiting students or seeking financial support, or they can be used to motivate the members of the institution to strive toward excellence. Yet another function of benchmarking is the ability to collaborate with other institutions to share strengths and to continually improve programs. Benchmarks include the financial health of the institution; applicant pool; admission, retention, and graduation rates; commitment to diversity; student, faculty, staff, and administrator satisfaction rates; NCLEX and certification pass rates; and so forth.
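
To make the idea of benchmark comparison concrete, the following Python sketch checks a program’s metrics against target benchmarks of the kind just listed. All metric names, target values, and sample data are hypothetical illustrations, not figures drawn from any cited study.

```python
# A minimal sketch of benchmark comparison for a nursing program.
# All metric names, target values, and sample data are hypothetical.

program_metrics = {
    "nclex_pass_rate": 0.93,       # first-time NCLEX pass rate
    "retention_rate": 0.88,        # proportion of admitted students retained
    "graduation_rate": 0.81,       # proportion graduating on time
    "student_satisfaction": 0.84,  # proportion of satisfied survey respondents
}

benchmarks = {
    "nclex_pass_rate": 0.90,
    "retention_rate": 0.85,
    "graduation_rate": 0.80,
    "student_satisfaction": 0.80,
}

def compare_to_benchmarks(metrics, targets):
    """Report whether each metric meets or falls below its benchmark."""
    for name, target in targets.items():
        value = metrics.get(name)
        if value is None:
            print(f"{name}: no data collected")
            continue
        status = "meets" if value >= target else "falls below"
        print(f"{name}: {value:.0%} ({status} benchmark of {target:.0%})")

compare_to_benchmarks(program_metrics, benchmarks)
```

The same structure works whether the targets are internal goals or the reported rates of admired peer institutions; only the contents of the benchmarks dictionary change.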


An example of a benchmarking project for use when developing or revising DNP programs is presented by Udlis and Mancuso (2012), based on their website survey of DNP programs across the United States. The authors present a summary of the history of the development of the degree and then report their findings from the 2011 website survey of 137 programs. Findings are organized according to type of program, program length and number of credits, location by region of the United States, cost, platform for offering the program, availability of electives, and the practice course name. Their study provides benchmarks and trends in DNP education as programs continue to evolve and increase across the United States.


Evaluation Processes Models


In addition to using conceptual models for program and curriculum evaluation, some institutions choose to use process models for evaluation activities. Holden and Zimmerman (2009) describe the Evaluation Planning Incorporating Context (EPIC) model for the evaluation process. Included in their text are examples of the use of the model for assessing an educational program and a community-based service agency; these can apply to nursing as well. The EPIC model “assesses the context of the program, gathers reconnaissance, engages stakeholders, describes the program, and focuses the evaluation” (p. 2). The model is especially helpful in its provision of guidelines for conducting an evaluation.


The Centers for Disease Control and Prevention (CDC) (2014) developed a framework for program evaluation that is useful for educators. While it focuses on the processes of evaluation, it includes a framework for assigning value or worth to the findings from the process. The major steps of the process are (a) engage the stakeholders, (b) describe the program, (c) focus the evaluation plan, (d) gather credible evidence, (e) justify conclusions and recommendations, and (f) ensure use and share lessons learned; the cycle then begins again with engaging the stakeholders. The steps of the model incorporate standards against which the program is evaluated. For nursing programs, there are many such standards; they include accreditation standards or criteria and professional/educational essentials or standards.


Formative Evaluation for Nursing Education


Formative and/or process evaluation strategies include course evaluations; student achievement measures; teaching effectiveness surveys; staff, student, administration, and faculty satisfaction measures; impressions of student and faculty performance by clinical agencies’ personnel; assessment of student services and other support systems; students’ critical thinking development and other standardized test results, such as gains in knowledge and skills; NCLEX readiness; satisfaction surveys of students’ families; retention/attrition rates; and cost-effectiveness of the program. Antecedent or input evaluation items include entering grade point averages (GPAs) and American College Testing (ACT) (2014), Scholastic Assessment Test (SAT) (The College Board, 2014), and Graduate Record Examination (GRE) (2014) scores for applicants and accepted students; retention and/or attrition rates; scholarship, fellowship, and loan availability; and endowments and grants for program development and support.


Praslova (2010) responds to a lack of specific criteria to measure student learning outcomes and, therefore, program effectiveness. Measuring student learning outcomes is part of formative evaluation activities that lead to summative evaluation and program outcomes. Praslova takes Kirkpatrick’s model for training and adapts it to higher education to measure program effectiveness. She reviews the definition and purpose of assessment and describes the important role of stakeholders in conducting assessments. The model consists of four criteria: reaction, learning, behavior, and results. She then explains the four criteria and offers examples of how they are adapted to higher education, with suggested assessment tools. The model could be used to assess specific outcomes related to student learning. Another model for formative evaluation is presented by McNeil (2011), who adapts Bloom’s taxonomy as a measurement of student learning outcomes. The author presents a 12-step evaluation model with the taxonomy as a guideline for both program and course evaluations. Although the model limits itself to learning outcomes and does not address the other factors that influence the quality of the total program, including faculty, staff, student characteristics, and the infrastructure that supports the educational program, it is a very useful model for measuring student learning outcomes.
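
To make the structure of the adapted Kirkpatrick model concrete, the sketch below maps its four criteria to example measures of the kind used in nursing education. The pairing of criteria with specific measures is an illustrative assumption, not a set of instruments prescribed by Praslova (2010).

```python
# Illustrative mapping of Kirkpatrick's four criteria (as adapted to
# higher education) to example nursing education measures. The measures
# listed are hypothetical examples, not instruments from Praslova (2010).

kirkpatrick_criteria = {
    "reaction": [   # learners' satisfaction with the experience
        "end-of-course satisfaction surveys",
        "ratings of perceived relevance of course content",
    ],
    "learning": [   # gains in knowledge, skills, or attitudes
        "course examinations",
        "pre-/post-tests of knowledge and skills",
    ],
    "behavior": [   # transfer of learning to practice
        "clinical performance evaluations",
        "preceptor observations of practice",
    ],
    "results": [    # program-level outcomes
        "NCLEX pass rates",
        "employer satisfaction with graduates",
    ],
}

for criterion, measures in kirkpatrick_criteria.items():
    print(f"{criterion}:")
    for measure in measures:
        print(f"  - {measure}")
```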


SUMMATIVE EVALUATION USING STANDARDS, ESSENTIALS, AND CRITERIA


Summative evaluation differs from formative evaluation in that its purpose is to assess and judge the final outcomes of the educational program, while formative evaluation assesses the processes used to achieve those outcomes. The “product” of the educational program can be measured according to the overall goal and objectives of the program/curriculum; standards and criteria of regulating bodies such as boards of nursing; professional standards such as nursing organizations’ codes of ethics and practice standards; essentials or competencies defined by professional and educational organizations; and, last but not least, accreditation standards and criteria.


Measures to determine the final outcomes of the program include follow-up surveys of graduates’ success, including their pass rates on licensure and certification exams, employers’ and graduates’ satisfaction with the program, graduates’ performance, and alumni’s accomplishments in leadership roles and as change agents, professional commitment, and continuing education rates. Additional outcome measures include graduation rates, accreditation and program approval status, ratings of the program by external evaluators or agencies, faculty and student research productivity, community service, and public opinion surveys. Many of these outcome measures can serve as benchmarks for setting achievement levels, for example, a 99% pass rate on the NCLEX, or for comparing the institution to other admired or similar institutions as a measure of quality.


Newhouse (2011) discusses summative evaluation as it applies to assessing computer and technology students’ achievement of knowledge and competency in a technology educational program. He questions the use of written exams to measure students’ abilities in understanding and working with technology and challenges the validity of such measures. A research team worked on the development of digital measures of achievement for these students, including an electronic portfolio developed by the student and a computer-based examination. Newhouse and the team found statistically significant results for the two methods, including improved reliability and validity and greater student and instructor ease of access, marking, and satisfaction.


Stavropoulou and Kelesi (2012) summarize methods of program evaluation with a definition of evaluation that applies to nursing education. They discuss methodologies with a review of quantitative and qualitative evaluation methods, the debate over the use of the two, and the advantages and disadvantages of both. The authors conclude that both quantitative and qualitative methods should be used, and they bring in the notion of triangulation as a method for program evaluation. Schug (2012) describes the process faculty undergoes when developing or revising a curriculum using the NLN’s Accreditation Manual (2012). She presents a table listing the criteria with questions for the faculty to use as guides to evaluate the curriculum. She also mentions the Three C’s Model for assessing and evaluating curricula, that is, context, content, and conduct. There is a plethora of instruments available in nursing education programs for measuring graduates’ performance and satisfaction as indications of a program’s success. Nurse educators are urged to review the latest literature in a search for the best tools for collecting and analyzing data to measure the outcomes and the processes used to evaluate the achievement of the outcomes of educational programs. At the same time, while student learning outcomes are the core of program success, other summative factors must be considered, such as the quality of the program’s faculty, research and scholarly output, staff, support systems, and infrastructure.


TYPES OF PROGRAM EVALUATION


Basically, there are two types of program evaluation in academe that differ from regulatory and accreditation processes: program approval and program review.


Program Approval


Before a new program is initiated, its parent institution must approve it. As reiterated throughout this text, it is the faculty who develop the curriculum for a new program, and it should be based on a needs assessment that provides the rationale for why the program is needed, how it meets the mission of the institution, and who the key stakeholders are. A budget should accompany the curriculum plan, and the budget is expected to project the costs and income for at least the next 5 years to justify the program’s start-up and maintenance.


In academe the usual rounds of approval are as follows. The first round is for the faculty within the originating department/school to approve the proposal; the next round depends upon the hierarchical structure of the institution. The following levels of approval are based on a moderate- to large-scale institution, and it is understood that smaller institutions may not have as many approval rungs. After faculty approval within the originating program, the proposal may go to a curriculum or program approval committee within its college or division. Preliminary approval may have to be granted by administrators before the proposal enters other formal approval levels, in order to determine its economic feasibility and its fit with the mission and/or strategic plan.


After approval at the program’s local level by committees and the faculty as a whole, the proposal proceeds to the next level, which is usually a program or curriculum committee at the division or college level. With that committee’s approval, it goes to the overall university or college graduate or undergraduate committee for review and approval. Next, it goes to a subcommittee of the senate that reviews program proposals. On that subcommittee’s approval, the senate reviews the proposal for its input and approval and then sends it to the chief executive for academic affairs, such as a vice president or provost. On that person’s approval, the president of the institution approves the program. The governing board, such as a board of trustees or regents, is the final rung of approval, and it may have a subcommittee that reviews the proposal with recommendations prior to its going to the full board. These levels of approval are for academic approval only. For professional programs such as nursing, accreditation processes and state board of nursing approvals should be initiated along the way to reassure the academic entities that the program is qualified for professional approval and accreditation.


Program Review


Program review in academe occurs on average every 5 years within the parent institution. The purpose of program review is to ensure the quality and sustainability of the program. Faculty prepares an overview of the program, especially related to enrollments, the quality of the faculty, student learning outcomes, and enrollment and graduation projections (Weiner, 2009). When economic times are tough, these reviews help to demonstrate the relationship of the program to the mission of the institution, its contributions to the community, and the quality of the program. Nursing often finds itself having to justify its program owing to the relatively small faculty-to-student ratios required when clinical supervision is factored in. Nursing programs need data as evidence to support the program, its cost-effectiveness, its place in meeting the mission of the institution (serving the public), and its contributions of student enrollments to the core general education and prerequisite requirements.


The requirements and processes for program approval and review use the same data sets as many of the other assessment and evaluation activities related to professional accreditation and standards of excellence. Thus, it is not unusual for a parent institution to request copies of the most recent self-studies and accreditation reports that either substitute for program review criteria or supplement the requirements. Program approval and review should be integrated into the school’s master plan of evaluation so that the data sets can serve all of the required purposes for assessment.


Bers (2011) reviews the purposes of program review for community colleges, which apply to other institutions of higher education as well. The major purposes are to assess the program’s effectiveness both internally and externally, to meet both internal and external regulations or requirements, to exhibit accountability, and to serve public relations. She discusses both formative and summative evaluation as they apply to measuring program outcomes and offers models of evaluation that institutions can use for program review. Bers points out that many of these models are combinations of several models. The models for evaluation that she lists include the Strengths, Weaknesses, Opportunities, and Threats (SWOT) technique; free-form (goal-free) evaluation; outside expert review; self-study; and so on. Her article is very useful, with many practical guidelines for programs undergoing program review.


Research and Program Evaluation


Sometimes there is confusion between evaluation research and the process of evaluation. The evaluation process starts with an identification of the program or entity that is to be evaluated, the purpose of the evaluation, and who the stakeholders are within the program. It requires many of the same steps as research, including a review of the literature, identification of a theory or model of evaluation to guide the process, collection and analysis of credible data related to the program, synthesis of the analysis to come to a conclusion, and a judgment with recommendations for further assessment and strategies for improvement.


Research in evaluation, on the other hand, differs from the evaluation process. It begins with a description of a problem, a research question, and the purpose of the investigation, and follows with the usual steps of the research process: literature review, theoretical/conceptual framework, methodology, data collection and analysis, findings, and recommendations.


Research in evaluation is usually viewed as applied research and differs from basic research in that it searches for practical solutions to problems. Donaldson, Christie, and Mark (2009) describe applied research and evaluation processes and the continuing debates among the experts on the validity of quantitative and qualitative methodologies and their application to evaluation. They discuss current emphases in the disciplines on evidence-based practice, what constitutes credible evidence, evaluation theories, and their influences on applied research in evaluation. In the end, their text presents the latest in evaluation theories and their relevance to the search for evidence-based practice in education.


Spillane et al. (2010) provide an example of a combined program evaluation and research project in their study of the use of mixed methodologies for evaluation. The project was an evaluation of a professional development education program for school principals. It was a randomized controlled trial (RCT) with two groups of principals: one group was randomly assigned to a treatment group, and the other group experienced the training program a year later, serving as the control group. The study was theory driven, using a logic model that took into account not only the program but also the principals’ previous experience, faculty and student characteristics, and the leader’s background. The researchers collected both qualitative and quantitative data. Analyses consisted of studying both types of data separately, then combining them to quantify the qualitative data and to place the quantitative data into a qualitative context. This resulted in validation and contextualization of both types of data and their mix. Triangulation of the results pointed out the value of mixed methods. The study provides an example of the differences between research and program evaluation and of how research applies to practice.


Strategic Planning


Strategic planning for an institution provides the guidelines for carrying out the mission of the institution and at the same time can be used to evaluate how well the institution is meeting its mission and goals. Strategic planning usually begins with the top executive and management team providing the leadership for its development. In academe, the parent institution’s top administrators (president, vice presidents, provosts, deans, etc.) initiate the plan, which is, in turn, implemented throughout the institution by the various academic divisions. Each division may choose to develop its own strategic plan; however, it should be congruent with that of its parent but unique to the program’s mission and goals. The first step in strategic planning is to develop a vision statement. The vision statement presents a description of where the institution will be in the future, usually 3 to 5 years hence, and incorporates the core values of the institution. The leadership team may wish to engage other stakeholders in the process and in the planning to meet the vision.


Varkey and Bennett (2010) discuss the strategic planning process applied to health care agencies, but the same processes can be adapted to educational milieus. They advise that the leadership team create a sense of urgency for developing the plan, with the first session centered on developing a vision that is of the highest order and reflects the team’s belief about where the organization needs to be in the future. Once the vision is developed through brainstorming and coming to consensus, the planning process commences.


A positive, comprehensive, and inclusionary approach to strategic planning is described by Harmon, Fontaine, Plews-Ogan, and Williams (2012) in a summary of the University of Virginia’s School of Nursing strategic planning process. They adapted the Appreciative Inquiry (AI) model used in business for planning and promoting a positive milieu (Ludema, Whitney, Mohr, & Griffin, 2003). The authors describe the process they underwent for planning, holding, and summarizing the outcomes from a summit that developed strategic plans for the School of Nursing’s future. It is a very useful model for other institutions undergoing the strategic planning process.


Stuart, Erkel, and Schull (2010) tie the analysis of costs of programs and their financial viability to the process of strategic planning. They describe how their College of Nursing at the Medical University of South Carolina examined existing programs, their need, and cost-effectiveness in planning for the future and for providing a rationale for establishing a doctor of nursing practice (DNP) program. This type of analysis contributes to strategic planning for the future that maintains and develops educational programs in response to current and future needs, yet operates within a realistic budget that utilizes existing resources.


Research and scholarly production by faculty and students as part of strategic planning are described by Kulage et al. (2013). The Columbia University School of Nursing dean and faculty undertook a project to update research and scholarly priorities based on new programs in the school, changes in faculty, trends in health care, and calls for interdisciplinary and inter-institutional collaboration. The process they undertook, the workgroups that were organized to carry out the tasks, and the methods for measuring outcomes are described as part of their strategic planning process.


Although nursing programs may not have a strategic plan per se, it is wise to have goals set for the future with action plans to carry them out. These goals and action plans are reviewed at least annually to assess progress toward the goals and to adjust or develop new goals as the program and its needs and constituencies change. To avoid the pitfall of exquisite planning processes that fail to implement the plan, a master plan of evaluation provides the structure, details, and timelines for assessing and evaluating the progress the program is making toward reaching its vision and its short-term and long-term goals.


MASTER PLAN OF EVALUATION


Rationale for a Master Plan of Evaluation


When developing a master plan of evaluation, one of the major tasks is to integrate accreditation or program approval standards into the plan. These standards or criteria are the baseline requirements of the profession to ensure that programs are of sufficient quality to meet the expectations of the discipline. They also demonstrate to the public that a program is recognized by external reviewing bodies and thus that the quality of its graduates meets educational and professional standards. Graduation from an accredited program is usually one of the admission standards for continued degree or education work. Many funding agencies require accreditation, as it indicates that the program is of high enough quality to assume the responsibility for the administration of grants and the completion of projects. Most accrediting agencies require that a program have a master plan of evaluation, and even if it is not required, a master plan helps to identify the components that need to be evaluated, who will do the data collection and when, what methods of analysis will be employed, and the plans for responding to the findings for quality improvement (Accreditation Commission for Education in Nursing [ACEN], 2014; Commission on Collegiate Nursing Education [CCNE], 2014). Having a master plan of evaluation in place greatly facilitates these processes when submitting accreditation self-study reports, program approval reports, or proposals for funding.


With today’s emphasis on outcomes, the evaluation process is essential to measuring success, establishing benchmarks, and continually improving the quality of the program. A master plan of evaluation is used to provide data for faculty’s decision making as part of an internal review and for meeting external review standards. It is important to have a master plan that continually monitors the program so that adjustments can be made as the program is implemented; this ongoing monitoring is part of the total quality management process. It is equally important to measure outcomes in terms of meeting the vision, strategic plans, goals, and objectives of the program, and certain benchmarks that help to pinpoint the quality of the program.


Components of a Master Plan of Evaluation


The master plan must specify what is being evaluated, and an organizing framework is useful so that, as nearly as possible, no crucial variable is omitted from review. Additionally, as illustrated in the sketch following the list, it is important to identify the persons who will:



1.  Collect the data


2.  Analyze the findings


3.  Prepare reports


4.  Disseminate the reports to key people


5.  Set the timelines for collection, analysis, and reporting of the data
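
As a minimal illustration of how a master plan might record these elements, the following Python sketch organizes the plan as a list of evaluation items, each naming the component evaluated, the responsible persons, the report recipients, and the timeline. The components, roles, and timelines shown are hypothetical examples, not elements prescribed by any accrediting body.

```python
# A minimal sketch of a master plan of evaluation as a list of records.
# Components, roles, recipients, and timelines are hypothetical examples.

from dataclasses import dataclass

@dataclass
class EvaluationItem:
    component: str          # what is being evaluated
    data_collector: str     # who collects the data
    analyst: str            # who analyzes the findings and prepares reports
    report_recipients: list # key people who receive the reports
    timeline: str           # when data are collected, analyzed, and reported

master_plan = [
    EvaluationItem(
        component="Course evaluations",
        data_collector="Course faculty",
        analyst="Evaluation committee",
        report_recipients=["Curriculum committee", "Dean"],
        timeline="End of each term",
    ),
    EvaluationItem(
        component="NCLEX pass rates",
        data_collector="Program director",
        analyst="Evaluation committee",
        report_recipients=["Total faculty", "Dean"],
        timeline="Annually",
    ),
]

for item in master_plan:
    recipients = ", ".join(item.report_recipients)
    print(f"{item.component}: collected by {item.data_collector}; "
          f"analyzed by {item.analyst}; reported to {recipients}; "
          f"timeline: {item.timeline}")
```

Keeping the plan in a structured form like this makes it straightforward to generate a data-collection calendar and to verify that every component has an assigned collector, analyst, report recipients, and timeline.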
