Educational program evaluation





Marcia K. Sauter, PhD, RN, Nancy Nightingale Gillespie, PhD, RN and Amy Knepp, NP-C, MSN, RN


The purpose of this chapter is to provide information on how to conduct comprehensive evaluation of nursing education programs. A brief history of program evaluation and examples of program evaluation models are followed by a description of a theory-driven approach to program evaluation, which has served as a framework for the evaluation of nursing education programs at a private university since 2000. The evaluation plan was originally adapted from Chen’s (1990) theory-driven model for program evaluation, which provides a mechanism for evaluating all program elements, determining the causal relationships between those elements, judging program effectiveness, and identifying strategies to improve program quality. The plan has demonstrated long-term sustainability, has been easily adapted to changes in accreditation requirements, and has been refined during the past decade while maintaining its overall framework.




Definition of terms


A nursing education program is any academic program in a postsecondary institution leading to initial licensure or advanced preparation in nursing. Program evaluation is systematic assessment of all components of a program through the application of evaluation approaches, techniques, and knowledge in order to improve the planning, implementation, and effectiveness of programs (Chen, 2005). Program evaluation theory is a framework that guides the practice of program evaluation. A program evaluation plan is a document that serves as the blueprint for the evaluation of a specific program. Program theory is a set of assumptions that describes the elements of a program and their causal relationships.



Purposes and benefits of program evaluation


The purpose of program evaluation is to improve program effectiveness and demonstrate accountability. Evaluation may be developmental, designed to provide direction for the development and implementation of a program, or outcome-oriented, designed to judge the merit of the total program being evaluated. The focus of program evaluation is dependent on the stage of program implementation, beginning with program planning, through early implementation, and ending in mature program implementation (Chen, 2005). The more advanced a program is in its implementation, the more complex becomes the program evaluation. Specific purposes of program evaluation are as follows:




Relationship of program evaluation to accreditation


Accrediting bodies exert considerable influence over nursing programs. Accrediting bodies include the state board of nursing, the National League for Nursing Accrediting Commission (NLNAC), the Commission on Collegiate Nursing Education (CCNE), and regional accrediting bodies such as the Higher Learning Commission. Nursing education programs must be approved by the state board of nursing to be able to operate and by the regional accrediting body to seek national accreditation. National accreditation by NLNAC or CCNE is voluntary, but the public perception of the school is linked, in part, to this accreditation. Certainly, schools that have a mission to prepare students for graduate education and schools that wish to compete for external funding as a part of their mission will want to meet all levels of accreditation.


Nursing programs have historically been too dependent on accreditation processes to guide program evaluation efforts (Ingersoll & Sauter, 1998). Some nursing programs do not fully engage in program evaluation until preparation of the self-study for an accreditation site visit has begun. To fulfill its purposes, program evaluation must be a continuous activity. Program evaluation built solely around accreditation criteria may lack examination of some important elements or understanding of the relationship between elements that influences program success. Nevertheless, building the assessment indicators identified by these bodies into the evaluation process ensures ongoing attention to state and national standards of excellence.



Historical perspective


The earliest approaches to educational program evaluation were based on Ralph Tyler’s (1949) behavioral objective model, which focused on whether learning experiences produced the desired educational outcomes. Tyler’s behavioral objective model was a simple, linear approach that proceeded from defining learning objectives to developing measurement tools and then measuring student performance to determine whether objectives had been met. Because evaluation occurred at the end of the learning experience, Tyler’s approach was primarily summative. Formative evaluation, which includes testing and revising curriculum components during the development and implementation of educational programs, became popular during the 1960s. This trend continued into the 1970s, when the Phi Delta Kappa National Study Committee on Evaluation concluded that meaningful educational evaluation was rare and encouraged educational institutions to continue formative evaluation by focusing on the process of program implementation (Stufflebeam, 1983).


Outcomes assessment became the focus of educational evaluation in the 1980s. In 1984 the National Institute of Education Study Group on the Conditions of Excellence in American Postsecondary Education endorsed outcomes assessment as an essential strategy for improving the quality of education in postsecondary institutions (Ewell, 1985). By the mid-1980s numerous state legislatures began mandating outcomes assessment for public postsecondary institutions (Halpern, 1987) and the regional accrediting agencies began mandating outcomes assessment in their accreditation criteria (Ewell, 1985). Although the focus on outcomes assessment was growing rapidly, initial efforts at implementing outcomes assessment were not successful because educators experienced difficulty in developing appropriate methods for performing outcomes assessment and in obtaining adequate organizational support to implement assessment (Terenzini, 1989). The focus on outcomes assessment led some institutions to confuse outcomes assessment with comprehensive program evaluation. Nursing educators were also influenced by the outcomes assessment movement. Publications from the National League for Nursing (NLN) called for measurement of student outcomes (Waltz, 1988) and described measurement tools for assessing educational outcomes (Waltz & Miller, 1988).


By the early 1990s some of the issues surrounding outcomes assessment had been addressed and successful efforts in outcomes assessment had occurred. As nursing education continued to follow the trend in higher education, the NLN added assessment of learning outcomes to its accreditation criteria in 1991. The CCNE also included outcomes assessment in its initial accreditation standards, first published in 1997.


The Wingspread Group on Higher Education (1993) challenged providers of higher education to use outcomes assessment to improve teaching and learning. Nevertheless, outcomes assessment was not the final solution to improving program quality. Many postsecondary institutions continued to struggle with outcomes assessment, and those that were able to implement it were often unable to identify any academic improvements as a result of the assessment program (Tucker, 1995). Toward the end of the decade, approaches to organizational effectiveness, especially Deming’s continuous quality improvement model, began to influence a more comprehensive approach to program evaluation (Freed, Klugman, & Fife, 1997).


As a result of the growing emphasis on program evaluation in the 1980s and 1990s, university programs were developed to prepare individuals in program evaluation (Shadish, Cook, & Leviton, 1991). As program evaluation became a distinct field of study, theories were developed to guide the practice of evaluation. Some of these theories include Borich and Jemelka’s (1982) systems theory; Stufflebeam’s context, input, process, and product (CIPP) model (1983); Guba and Lincoln’s fourth-generation evaluation framework (1989); Patton’s qualitative evaluation model (1990); Chen’s theory-driven model (1990); Veney and Kaluzny’s cybernetic decision model (1991); and Rossi and Freeman’s social research approach (1993). Perhaps because of the nursing profession’s emphasis on use of theory to guide practice, the need for program evaluation theory to guide evaluation practices in nursing education was identified in the nursing literature as early as 1978. Friesner (1978) reviewed five evaluation models: (1) Tyler’s behavioral objective model, (2) the NLN accreditation model, (3) Stufflebeam’s CIPP model, (4) Scriven’s goal-free evaluation model, and (5) Provus’s discrepancy evaluation model. Friesner concluded that no single model could effectively guide the evaluation of nursing education and recommended that nursing educators blend elements from one or more of the models.


In the early 1990s several articles about program evaluation theory appeared in the nursing literature. Watson and Herbener (1990) reviewed Provus’s discrepancy model, Scriven’s goal-free evaluation model, Stake’s countenance model, Staropoli and Waltz’s decision model, and Stufflebeam’s CIPP model. These authors concluded that any of these models could be useful and recommended that nursing educators choose a model that best fits their needs. In contrast, Sarnecky (1990) explicitly recommended Guba and Lincoln’s (1989) responsiveness model after comparing it with Tyler’s behavioral objective model, Stake’s countenance model, Provus’s discrepancy model, and Stufflebeam’s CIPP model. Sarnecky believed that the other models did not adequately address the plurality of values among stakeholders and the importance of stakeholder involvement. Bevil (1991) proposed a theoretical framework she adapted from several evaluation theories. Ingersoll (1996) reviewed Borich and Jemelka’s systems approach, McClintock’s conceptual mapping approach, and Chen’s theory-driven model. Addressing issues about the reliability and validity of assessment activities, Ingersoll recommended that program evaluation be viewed as evaluation research and that program evaluation theory be used to guide the development and implementation of program evaluation. Ingersoll and Sauter (1998) reviewed Guba and Lincoln’s fourth-generation evaluation, Scriven’s goal-free approach, Norman and Lutenbacher’s theory of systems improvement, Rossi and Freeman’s social science approach, and Chen’s theory-driven model. The authors suggested that Rossi and Freeman’s model and Chen’s model had the most potential for guiding the evaluation of nursing education programs. Ingersoll and Sauter also expressed concern that nursing faculty commonly use accreditation criteria to form the framework for evaluation of nursing education programs; they recommended that program evaluation theory serve this purpose instead. Ingersoll and Sauter (1998) presented an evaluation plan developed from Chen’s theory-driven model that incorporated the NLNAC’s criteria for baccalaureate programs.


Sauter (2000) surveyed all baccalaureate nursing programs in the United States to determine how they develop, implement, and revise their program evaluation plans. Few nursing programs reported using program evaluation theory to guide program evaluation. However, those educators who did use program evaluation theory were more satisfied with the effectiveness of their evaluation practices.


In the past decade most of the nursing literature related to program evaluation has focused on specific elements of program evaluation, rather than on comprehensive evaluation. Only one article reported a theory-based approach to program evaluation. In 2006 Suhayda and Miller reported on the use of Stufflebeam’s CIPP model in providing a framework for comprehensive program evaluation that would serve undergraduate and graduate nursing programs.



Program evaluation theories


Program evaluation theories are either method-oriented or theory-driven, depending on their underlying assumptions, preferred methodology, and general focus. Method-oriented theories emphasize methods for performing evaluation, whereas theory-driven approaches emphasize the theoretical framework for developing and implementing evaluation. The more popular approaches have been method-oriented (Chen, 1990; Shadish et al., 1991).


Method-oriented approaches usually focus on the relationship between program inputs and outputs and include an emphasis on a preferred method for conducting program evaluation. Many of the method-oriented approaches emphasize quantitative research methods. A few method-oriented approaches recommend naturalistic or qualitative methods for performing program evaluation.


An example of a quantitative method-oriented program evaluation theory is Rossi and Freeman’s (1993) social science model. These authors believe that the use of experimental research methods produces the most effective program evaluation. An advantage of this approach is its insistence that measurement techniques be reliable and valid, even when an experimental design is not used to conduct the evaluation. One of the major limitations of this approach is that the focus on methodology may divert evaluators from other issues, such as recognizing the importance of stakeholder perspectives. In addition, experimental designs are often difficult to apply to some aspects of educational evaluation.


An example of a qualitative method-oriented program evaluation theory is Guba and Lincoln’s (1989) fourth-generation evaluation. Guba and Lincoln advocate naturalistic methods for program evaluation. A special focus of their approach is the emphasis they place on integrating multiple stakeholders’ viewpoints into program evaluation. A major advantage of their approach is that using qualitative methodology allows evaluators to achieve a greater depth of understanding of program strengths and limitations within a specific context. The approach is limited because it tends to overlook outcomes assessment, which usually requires more quantitative methodology.


Theory-driven approaches to program evaluation begin with the development of program theory. Program theory is the framework that describes the elements of the program and explains the relationships between and among elements. When this approach is used, program evaluation is intended to test whether the program theory is correct and whether it has been implemented correctly. If the program is not successful in achieving outcomes, a theory-driven approach allows the evaluator to determine whether the program’s failure is due to flaws in the program theory or failure to implement the program correctly. The theory-driven approach often calls for a variety of research methods because evaluators choose the methodology that is best suited to answering the evaluation questions (Chen, 1990).


Chen’s (1990) theory-driven model is one of the most comprehensive models for program evaluation. Although the model was intended for evaluation of social service programs, it is adaptable to educational programs. A brief overview of Chen’s model is included here. The remainder of the chapter describes a nursing education program evaluation plan that was developed from Chen’s theory-driven model. The evaluation plan has been in continuous use since 2000 and has been applied to undergraduate and graduate nursing programs. For a more detailed description of how the evaluation plan was created from Chen’s original theory-driven model, see Sauter, Johnson, and Gillespie (2009).




Theory-driven program evaluation

Chen (1990) defines program theory as a framework that identifies the elements of the program, provides the rationale for interventions, and describes the causal linkages between the elements, interventions, and outcomes. According to Chen, program theory is needed to determine desired goals, what ought to be done to achieve desired goals, how actions should be organized, and what outcome criteria should be investigated. Program evaluation is the systematic collection of empirical evidence to assess congruency between the program’s design and implementation and to test the program theory. Through this systematic collection of evidence, program planners can develop and refine program structure and operations, understand and strengthen program effectiveness and utility, and facilitate policy decision making (Chen, 1990).



Adapting Chen’s theory-driven model to program evaluation for nursing education


The following section describes a program evaluation plan for nursing education programs adapted from Chen’s (1990) theory-driven model. The components of the evaluation plan are organized into six evaluation types, which were modified and adapted from Chen’s model. Table 28-1 lists the evaluation types defined by Chen, provides a brief description of the elements of the evaluation type, and demonstrates how Chen’s model was adapted to the evaluation of nursing education programs.



TABLE 28-1


Comparison of Chen’s Theory-Driven Model for Program Evaluation and a Model for Nursing Education


Chen’s Evaluation Types | Components for Evaluation of Nursing Education Programs
Normative outcome evaluation | Mission and goal evaluation
Normative treatment evaluation | Curriculum evaluation
Implementation environment evaluation | Environment evaluation
  Participant dimension | Student dimension
  Implementer dimension | Faculty dimension
  Delivery mode dimension | Delivery mode dimension
  Implementing organization dimension | Organization dimension
  Interorganizational relationship dimension | Interorganizational relationship dimension
  Micro context dimension | Micro context dimension
  Macro context dimension | Macro context dimension
Impact evaluation | Outcomes assessment
Intervening mechanism evaluation | Intervening mechanism evaluation
Generalization evaluation | Generalization evaluation




Mission and goal evaluation


Program evaluation must begin by determining that appropriate mission, philosophy, program goals, and outcomes have been defined. The expectations of both internal and external stakeholders must be considered. Internal stakeholders include administrators, faculty, and governing boards. External stakeholders include religious organizations for private schools with religious affiliations, regional accrediting bodies, national discipline-specific accrediting bodies, state education commissions and boards of nursing, the legislature, and professional organizations. There should be congruency between the expectations of stakeholders and the program’s mission, philosophy, goals, and outcomes. For private institutions with religious affiliations, some perspectives may be prescribed and must be included in mission, philosophy, goals, or outcomes.


The mission of the nursing department should be congruent with the university’s mission. Comparison of key phrases in the department’s mission with key phrases in the university’s mission may be done to assess congruency between mission statements. The identification of gaps between the two mission statements provides information about areas where attention is needed. The assessment should be performed periodically and whenever changes are made to either mission statement.
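As a rough illustration, the key-phrase comparison described above can be mocked up as a simple set comparison. The phrases, function name, and output structure below are invented for illustration; a real thematic analysis is a qualitative faculty exercise, and a sketch like this only surfaces overlapping and missing phrases as a starting point for discussion.

```python
# Hypothetical sketch: compare key phrases drawn from a department mission
# statement with those drawn from the university mission statement.
def compare_missions(department_phrases, university_phrases):
    """Return the phrases the two missions share and the university
    themes that do not yet appear in the department's mission."""
    dept, univ = set(department_phrases), set(university_phrases)
    return {
        "shared": sorted(dept & univ),
        "gaps": sorted(univ - dept),  # university themes absent from the department
    }

# Illustrative phrase lists (not from any actual mission statement)
result = compare_missions(
    ["service", "caring", "evidence-based practice"],
    ["service", "scholarship", "global citizenship"],
)
print(result)  # {'shared': ['service'], 'gaps': ['global citizenship', 'scholarship']}
```

The "gaps" list corresponds to the areas where attention is needed, as described in the paragraph above.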


There should be consensus among the faculty regarding the nursing school’s mission and philosophy. A modified Delphi approach to determine the level of agreement among the faculty for each statement in the mission and philosophy is a useful strategy. The Delphi approach is useful for both the development and the evaluation of belief statements (philosophy). This approach seeks consensus without the need for frequent face-to-face dialogue in a manner that protects the anonymity of participants. In this method, questionnaires that list proposition statements about each of the content elements of the belief statement are distributed. A common breakdown of Delphi responses is a five-point range from “strongly agree” to “strongly disagree” so that respondents can indicate their level of support for each proposition. Respondents are provided with feedback about the responses after the first round of questionnaire distribution, and a second round may occur to determine the intensity of agreement or disagreement with the group median responses (Uhl, 1991). After several rounds with interim reports and analyses, it is usually possible to identify areas of consensus, areas of disagreement so strong that further discourse is unlikely to lead to consensus, and areas in which further discussion is warranted. In the evaluation of an established belief statement, the same process will provide data about which propositions continue to be supported, which no longer garner support, and which need to be openly debated (Uhl, 1991). The result provides a consensus list of propositions that either supports the belief statement as it is or suggests areas for revision. Chapter 7 provides further information on development of mission and philosophy.
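The round-by-round analysis of a modified Delphi survey can be sketched in a few lines. The propositions, ratings, and the 75% agreement threshold below are hypothetical assumptions for illustration, not values prescribed by Uhl (1991) or by the evaluation plan described in this chapter.

```python
# Hypothetical sketch of analyzing one round of a modified Delphi survey.
# Responses use a 5-point scale: 1 = strongly disagree ... 5 = strongly agree.
from statistics import median

def summarize_round(responses, agree_threshold=0.75):
    """For each proposition, report the group median rating and the share
    of respondents who agree (rating >= 4), flagging apparent consensus."""
    summary = {}
    for proposition, ratings in responses.items():
        share_agree = sum(r >= 4 for r in ratings) / len(ratings)
        summary[proposition] = {
            "median": median(ratings),
            "share_agree": round(share_agree, 2),
            "consensus": share_agree >= agree_threshold,
        }
    return summary

# Invented propositions and ratings from eight faculty respondents
round_1 = {
    "Caring is central to nursing practice": [5, 4, 4, 5, 3, 4, 5, 4],
    "All graduates require research skills": [2, 3, 4, 2, 3, 5, 2, 3],
}

for prop, stats in summarize_round(round_1).items():
    print(prop, stats)
```

The medians and agreement shares would be fed back to respondents before the next round, as described above; propositions that never reach the threshold become candidates for open debate or revision.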


All accrediting bodies have expectations about mission, philosophy, program goals, and outcomes. The NLNAC’s (2008) Standard I, Mission and Governance, requires that the nursing program provide clear statements of mission, philosophy, and purposes. In addition, the NLNAC has identified both required and optional outcomes that nursing programs must measure over time to provide trend data about student learning. For example, the required outcomes in the criteria for baccalaureate and higher degree programs are graduation rates, job placement rates, licensure and certification pass rates, and program satisfaction (NLNAC, 2008). The CCNE (2010) also includes in Standard I, Mission and Governance, expectations regarding congruency of the program’s mission, goals, and outcomes with those of the parent institution, professional nursing standards, and the needs of the community of interest.


Professional organizations include the American Nurses Association (ANA), American Association of Colleges of Nursing (AACN), and National Organization of Nurse Practitioner Faculties (NONPF). Program goals and outcomes in baccalaureate degree programs should be congruent with the ANA’s Standards of Practice (ANA, 2010) and the AACN’s Essentials of Baccalaureate Education for Professional Nursing Practice (AACN, 2008a). The same consideration should be given to the AACN’s Essentials of Master’s Education for Advanced Practice Nursing (AACN, 1996) and the Criteria for Evaluation of Nurse Practitioner Programs (NONPF, 2008) for master’s degree programs. The AACN (2006) also provides indicators of quality in doctoral programs in nursing in Essentials of Doctoral Education for Advanced Nursing Practice.


Other important stakeholders include local constituencies, such as health care agencies, that provide clinical learning experiences or employ graduates of the program. A survey of current and potential employers of graduates will help faculty to determine the knowledge and skill requirements of the marketplace. Many institutions establish advisory committees to provide additional information and selected focus groups to add richness to the information. This information is used to ensure that program goals and outcomes are appropriate, to provide input for curriculum planning, and to develop evaluation questions and tools for determining whether market needs are being met.


The mission and program goals should be clearly and publicly stated. Nursing schools that offer several different nursing programs will need to clearly articulate the purpose and program goals of each of these programs. Public announcement of mission and program goals should be available through the Internet and in printed program brochures and catalogues.


Box 28-1 lists the theoretical elements for mission and goal evaluation. Table 28-2 provides a sample evaluation plan for mission and goal evaluation applied to a nursing education program. This sample demonstrates how all elements of the program evaluation plan may be articulated, including the program’s theoretical elements, assessment activities, responsible parties, time frames, and related accreditation criteria. For the remaining evaluation components presented in this chapter, only examples of theoretical elements and methods for gathering and analyzing assessment data relevant to the identified theoretical elements are provided. The theoretical elements and assessment strategies that are suggested here are not all-inclusive but may assist nursing faculty in further development of their own program theory and program evaluation plan.




TABLE 28-2


Mission and Goal Evaluation



Program theory: The mission of the nursing department is congruent with the university’s mission.
Assessment strategies: Complete a thematic analysis comparing key phrases in the department’s mission with the university’s mission.
Responsible parties: Program evaluation committee
Time frame: Every 5 years or whenever change occurs in either statement
Recording and reporting: Update document “Comparison of Departmental and University Mission.”

Program theory: There is consensus among the faculty regarding the nursing mission and philosophy.
Responsible parties: Chair
Time frame: Every 5 years or whenever change occurs in either statement
Recording and reporting: Update document “History and Revision of Program Mission and Philosophy.”
Accreditation criteria: CCNE I-B: The mission, goals, and expected student outcomes are reviewed periodically and revised, as appropriate, to reflect professional standards and guidelines and the needs and expectations of the community of interest. Elaboration: There is a defined process for periodic review and revision of program mission, goals, and expected student outcomes. The review process has been implemented and resultant action reflects professional nursing standards and guidelines. The community of interest is defined by the nursing unit. The needs and expectations of the community of interest are reflected in the mission, goals, and expected student outcomes. Input from the community of interest is used to foster program improvement. The program afforded the community of interest the opportunity to submit third-party comments to CCNE, in accordance with accreditation procedures. (See Macro environment.)

Program theory: There is congruency between the nursing mission, philosophy, conceptual framework, goals, and outcomes for each program.
Assessment strategies: Prepare a content map for each element to assess congruency.
Responsible parties: Curriculum committee
Time frame: Every 3 years
Recording and reporting: Curriculum committee minutes

Program theory: Expectations of the state board of nursing, NLNAC, and CCNE are known and considered in the program’s mission, goals, philosophy, and outcomes.
Assessment strategies: Review the state Nurse Practice Act and educational rules and the NLNAC and CCNE accreditation standards and criteria.
Responsible parties: Program evaluation committee
Time frame: Yearly
Recording and reporting: Program evaluation committee minutes

Program theory: The nursing advisory board provides meaningful input into the goals and outcomes of the program.
Assessment strategies: Review the mission, philosophy, conceptual framework, and goals and outcomes for each program with the nursing advisory board and seek feedback (see section titled “Environment Evaluation: Interorganizational Dimension” for additional assessment of the advisory board).
Responsible parties: Chair
Time frame: Every 3 years
Recording and reporting: Nursing advisory board meeting minutes

Program theory: The goals of the program are congruent with professional standards.
Assessment strategies: Compare the BSN program goals with the ANA standards of practice and the essentials of baccalaureate education as defined by the AACN.
Responsible parties: BSN program director
Accreditation criteria: CCNE I-A: The mission, goals, and expected student outcomes are congruent with those of the parent institution and consistent with relevant professional standards and guidelines for the preparation of nursing professionals. I-A Elaboration: The program identifies the professional standards and guidelines it uses, including those required by CCNE and any additional program-selected guidelines. A program preparing students for specialty certification incorporates professional standards and guidelines appropriate to the specialty area. A program may select additional standards and guidelines (e.g., state regulatory requirements), as appropriate. Compliance with required and program-selected professional nursing standards and guidelines is clearly evident in the program. CCNE III-C: The curriculum is logically structured to achieve expected individual and aggregate student outcomes. The baccalaureate curriculum builds upon the foundation of the arts, sciences, and humanities. Master’s curricula build on a foundation comparable to baccalaureate level nursing knowledge. DNP curricula build on a baccalaureate and/or master’s foundation, depending on the level of entry of the student. Elaboration: Baccalaureate program faculty and students articulate how knowledge from courses in the arts, sciences, and humanities is incorporated into nursing practice. Postbaccalaureate entry programs in nursing incorporate the generalist knowledge common to baccalaureate nursing education as delineated in Essentials of Baccalaureate Education for Professional Nursing Practice (AACN, 2008a) as well as advanced course work. CCNE III-B: Expected individual student learning outcomes are consistent with the roles for which the program is preparing its graduates. Curricula are developed, implemented, and revised to reflect relevant professional nursing standards and guidelines, which are clearly evident within the curriculum, expected individual student learning outcomes, and expected aggregate student outcomes. Elaboration: Each degree program and specialty incorporates professional nursing standards and guidelines relevant to the program or area. The program clearly demonstrates where and how content, knowledge, and skills required by identified sets of standards are incorporated into the curriculum.

Program theory: Documents and publications accurately reflect mission and goals.
Assessment strategies: Check all publications for accuracy.
Responsible parties: Recruitment committee
Time frame: Annually
Recording and reporting: Recruitment committee meeting minutes


AACN, American Association of Colleges of Nursing; ANA, American Nurses Association; BSN, bachelor of science in nursing; CCNE, Commission on Collegiate Nursing Education; DNP, doctorate in nursing practice; NLNAC, National League for Nursing Accrediting Commission.



Curriculum evaluation


One of the most critical elements of program effectiveness is curriculum design. Curriculum design is an organizing framework that arranges the curriculum elements into a program of study. Curriculum design provides direction to both the content of the program and the teaching and learning processes involved in program implementation. Curriculum content involves both discipline-specific knowledge and the liberal arts foundation. Before the curriculum design can be developed, faculty must first define the discipline’s body of knowledge so that they can select the courses that best prepare students for practice. Faculty must determine what ways of knowing, or methods of inquiry, are characteristic of the discipline and what skills the discipline demands. Program goals and outcome statements provide a guide for the development of the program of study. The program goals link the mission and faculty belief statements (philosophy) to the curriculum design, teaching and learning methods, and outcomes. Consequently, the evaluation of the curriculum builds on the evaluation of mission and goals.




Evaluation of curriculum organization

Curriculum must be appropriately organized to move learners along a continuum from program entry to program completion. The principle of vertical organization guides both the planning and the evaluation of the curriculum. This principle provides the rationale for the sequencing of curricular content elements (Schwab, 1973). For example, nursing faculty often use depth and complexity as sequencing guides; that is, given content areas may occur in subsequent levels of the curriculum at a level of greater depth and complexity. This is supported by the work of Gagné (1977), who developed a hierarchical theory of instruction based on the premise that knowledge is acquired by proceeding from data and concepts to principles and constructs. In evaluation of the curriculum, faculty must assess for increasing depth and complexity to determine whether the sequencing was useful to learning and progressed to the desired outcomes. Determination of whether course and level objectives demonstrate sequential learning across the curriculum can be used as a test of vertical organization. The analysis can be performed with Bloom’s (1956) taxonomy as a guide for determining whether objectives follow a path of increasing complexity.
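A crude version of this audit can be automated by mapping each objective’s leading verb to a level of Bloom’s (1956) taxonomy and checking that cognitive complexity never decreases from one curriculum level to the next. The verb-to-level mapping and the sample objectives below are illustrative only; real objectives require faculty judgment rather than keyword matching.

```python
# Illustrative sketch of a vertical-organization check: map the leading verb
# of each course objective to a Bloom cognitive level (partial mapping only)
# and verify that complexity is non-decreasing across curriculum levels.
BLOOM_LEVELS = {
    "identify": 1, "describe": 2, "apply": 3,
    "analyze": 4, "synthesize": 5, "evaluate": 6,
}

def bloom_level(objective):
    """Return the Bloom level implied by the objective's leading verb."""
    verb = objective.split()[0].lower()
    return BLOOM_LEVELS.get(verb)

def is_sequenced(objectives_by_level):
    """True when each successive curriculum level's objective is at least
    as cognitively complex as the previous level's."""
    levels = [bloom_level(obj) for obj in objectives_by_level]
    return all(a <= b for a, b in zip(levels, levels[1:]))

# Hypothetical objectives ordered from lower to upper curriculum levels
curriculum = [
    "Describe the phases of the nursing process",
    "Apply the nursing process to stable patients",
    "Evaluate nursing care for patients with complex needs",
]
print(is_sequenced(curriculum))  # True: levels 2 -> 3 -> 6
```

A failing check would point faculty to the specific level where the expected progression in depth and complexity breaks down.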


The principle of internal consistency is important to the evaluation of the curriculum. The curriculum design is a carefully conceived plan that takes its shape from what its creators believe about people and their education. The intellectual test of a curriculum design is the extent to which the elements fit together. Four elements should be congruent: objectives, subject matter taught, learning activities used, and outcomes (Doll, 1992). Evaluation efforts should include examination of the extent to which the objectives and outcomes are linked to the mission and belief statements. Program objectives should be tracked to level and course objectives. One method of assessing internal consistency is through the use of a curriculum matrix (Heinrich, Karner, Gaglione, & Lambert, 2002). The matrix is a visual representation that lists all nursing courses and shows the placement of major concepts flowing from the program philosophy and conceptual framework. Another approach to assessment of internal consistency is through a curriculum audit (Seager & Anema, 2003). Similar to a curriculum matrix, the curriculum audit provides a visual representation that matches competencies to courses and learning activities.
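The curriculum matrix described above can be represented as a simple mapping from courses to the major concepts they address, which makes gaps in internal consistency easy to flag. The course names and concepts below are hypothetical illustrations.

```python
# Illustrative sketch of a curriculum matrix used to audit internal
# consistency: major concepts drawn from the program philosophy are
# matched against the courses that address them. The course names and
# concepts here are hypothetical.

MAJOR_CONCEPTS = {"caring", "clinical judgment", "communication", "safety"}

curriculum_matrix = {
    "NUR 101 Foundations of Nursing": {"caring", "communication"},
    "NUR 210 Health Assessment":      {"clinical judgment", "communication"},
    "NUR 320 Adult Health":           {"clinical judgment", "safety"},
}

def audit(matrix: dict, concepts: set) -> set:
    """Return concepts not addressed by any course (curriculum gaps)."""
    covered = set().union(*matrix.values())
    return concepts - covered

gaps = audit(curriculum_matrix, MAJOR_CONCEPTS)
print(gaps or "All major concepts are addressed.")
```

Whether built as a spreadsheet, a table in a self-study document, or a small script like this one, the matrix serves the same purpose: a visual check that every concept flowing from the philosophy and conceptual framework is placed somewhere in the program of study.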


The principle of linear congruence, sometimes called horizontal organization, assists faculty in determining which courses should precede and follow others and which should be concurrent (Schwab, 1973). The concept of sequencing follows the principle of moderate novelty in that new information and experiences should not be presented until existing knowledge has been assimilated (Rabinowitz & Schubert, 1991). An appropriate question is: “What entry skills and knowledge does the student need as a condition of subsequent knowledge and experiences?” How faculty answer this question will determine curriculum design and implementation. The evaluation question would address the extent to which students have the entry-level skills needed to progress sequentially in the curriculum. This is a critical question in light of the changing profile of students entering college-level programs. It is often difficult to determine which prerequisite skills should be required for entry and which should be acquired concurrently. Computer skills are a good example. Students enter programs with varying ability in using computers. It is necessary to determine the prerequisite skills needed and the sequence in which advanced skills should be acquired during the program of learning.


Some nursing programs use a specific conceptual framework that identifies essential program “threads” and provides further direction to curriculum development and implementation. Congruency between program threads, program goals, course objectives, and course content will also need to be assessed. Further information on curriculum development and curriculum frameworks can be found in Unit 2.



Course evaluation

Individual courses are reviewed to determine whether they have met the tests of internal consistency, linear congruence, and vertical organization. A triangulation approach to course evaluation is useful. This approach uses data from three sources—faculty, students, and materials review—to identify strengths and areas for change (DiFlorio, Duncan, Martin, & Middlemiss, 1989). Each course is evaluated to determine whether content elements, learning activities, evaluation measures, and learner outcomes are consistent with the objectives of the course and the obligations of the course in terms of its placement in the total curriculum.


Faculty should clearly articulate the sequential levels of each expected ability to determine what teaching and learning strategies are needed to move the student to progressive levels of ability and to establish the criteria for determining that each stage of development has been achieved. This need is important in relation not only to abilities specific to the discipline or major but also to the transferable skills acquired in the general education component of the curriculum (Loacker & Mentkowski, 1993). Some faculty achieve this by creating content maps for each major thread or pervasive strand in the curriculum with related knowledge and skill elements. The content maps chart the obligation of each course in facilitating student progression to the expected program outcome. The maps also provide a guide for the evaluation of whether the elements were incorporated as planned.


Angelo and Cross (1993) have developed a teaching goals inventory tool that is useful in individual course evaluation. The purpose is to assist faculty in identifying and clarifying their teaching goals by helping them to rank the relative importance of teaching goals in a given course. The construction of the teaching goals inventory began in 1986 and involved a complex process that included a literature review, several cycles of data collection and analysis, expert analysis, and field testing with hundreds of teachers (Angelo & Cross, 1993). In the process, Angelo and Cross developed a tool that clusters goals into higher-order thinking skills, basic academic success skills, discipline-specific knowledge and skills, liberal arts and academic values, work and career preparation, and personal development. This tool can assist the faculty in determining priorities in the selection of teaching and learning activities designed to advance the student toward the desired goals and in evaluating whether teaching goals and strategies are congruent with course objectives.



Evaluation of support courses and the liberal education foundation

Liberal education is fundamental to professional education. Expected outcomes for the liberal arts component of professional programs have received much attention in recent years (Association of American Colleges and Universities, 2002). Expected outcomes for today’s college students include effective communication skills; the use of quantitative and qualitative data in solving problems; and the ability to evaluate various types of information, work effectively within complex systems, manage change, and demonstrate judgment in the use of knowledge. In addition, students should demonstrate a commitment to civic engagement, an understanding of various cultures, and the ability to apply ethical reasoning. Nursing faculty should work collaboratively with faculty across disciplines to ensure that the general education curriculum supports the expectations of a twenty-first-century liberal education.


Evaluation questions about general education courses should address the extent to which the courses selected enable student learning and contribute to the expected outcomes. They should also be examined for sequencing to ensure that the support courses are appropriately placed to ground and complement the major and enrich the data mix for the organization and use of knowledge in practice. To develop evaluation questions related to the general education courses, faculty must first articulate the rationale for each course, the expected outcomes of each course, and how the courses support the major to provide a broad, liberal education. When the expectations are clear, it is easier to select the measures needed to determine whether expectations have been met. Evaluation of the outcomes of the general education courses will be discussed in the section on outcomes.


External accrediting agencies have expectations about liberal education. The NLNAC (2008) states that no more than 60% of courses in associate degree curricula may be in the nursing major. The remainder of coursework should be in general education. The criteria for baccalaureate and higher-degree programs do not indicate a desired ratio. Box 28-2 provides a summary of the theoretical elements associated with curriculum evaluation.




Evaluation of teaching effectiveness


Evaluation of teaching effectiveness involves assessment of teaching strategies (including instructional materials), assessment of methods used to evaluate student performance, and assessment of student learning. Teaching strategies are effective when students are actively engaged, when strategies assist students to achieve course objectives, and when strategies provide opportunities for students to use prior knowledge in building new knowledge. Teaching effectiveness improves when teaching strategies are modified on the basis of evaluation data. See Chapter 11 for information on designing teaching strategies and student learning activities.


To demonstrate and document teaching effectiveness, faculty need multiple evaluation methods (Johnson & Ryan, 2000). Evaluation methods may include student feedback about teaching effectiveness obtained through course evaluations and focus group discussions, feedback provided through peer review, formal testing of teaching strategies, and assessment of student learning.




Student evaluation of teaching strategies

The institution or nursing department may develop course evaluations to obtain student feedback on teaching effectiveness. The advantage of internally developed evaluations is that they can be customized to the program. The primary disadvantage of internally developed tools is that they may lack reliability and validity. Standardized evaluation tools, such as those offered by the Individual Development and Educational Assessment (IDEA) Center at Kansas State University and the National Survey of Student Engagement (NSSE), offered by the Indiana University Center for Postsecondary Research and Planning, have documented reliability and validity and provide opportunities to compare results across academic programs, departments, and schools, as well as against institutional scores and national benchmarks.


Although focus groups have been used extensively in marketing and social research, they have the potential to serve as powerful tools for program evaluation (Loriz & Foster, 2001). A focus group discussion with students can provide a qualitative assessment of teaching effectiveness. Focus groups provide an opportunity to obtain insights and to hear student perspectives that may not be discovered through formal course evaluations. The focus group leader should be an impartial individual with the skill to conduct the session. The leader should clearly state the purpose of the session, ensure confidentiality, provide clear guidelines about the type of information being sought, and explain how information will be used (Palomba & Banta, 1999). The reliability and validity of information obtained from a focus group discussion are enhanced when the approach is conducted as research with a purposeful design and careful choice of participants (Kevern & Webb, 2001).



Peer review of teaching strategies

Peer and colleague review may provide information on teaching effectiveness through classroom observation and assessment of course materials. In this context, a peer is defined as another faculty member within the same discipline with expertise in the field, and a colleague is an individual outside of the discipline with expertise in the art and science of teaching. Peer review can serve both to promote quality improvement of teaching effectiveness and to provide documentation for performance review. Before peer review is implemented, there is a need to be clear about what data will be gathered, who will have access to the data, and for what purposes they will be used. Faculty and administrators, as stakeholders in the endeavor, should collaborate to establish the norms and standards. Data from peer review may be used prescriptively to assist faculty in developing and improving teaching skills. At some point, peer review data may be needed for performance review and administrative decision making. Some schools require both classroom visits and opportunities to observe master teachers for all new faculty and periodic classroom visits for all faculty thereafter. In other schools the observation of teaching is voluntary. The age of the classroom as the private domain of the teacher is disappearing rapidly; both accountability and the opportunity to demonstrate the scholarship of teaching are causing colleges and universities to require increased documentation of teaching as a routine part of the evaluation process.


Although classroom observation has been used as a technique for the peer review of teaching for a number of years, the reliability and validity of this method have been suspect. The validity and reliability of classroom observation as an evaluation tool are increased by (1) including multiple visits and multiple visitors, (2) establishing clear criteria in advance of the observation, (3) ensuring that participants agree about the appropriateness and fairness of the assessment instruments and the process, and (4) preparing faculty to conduct observations (Seldin, 1980; Weimer, Kerns, & Parrett, 1988). Before classroom teaching visits are made, the students should be advised of the visit and should be assured that they are not the focus of the observation. Peer reviewers should meet with the faculty member before the visit and review the goals of the session, what has preceded and what will follow the session, planned teaching methods, assignments made for the session, and an indication of how this class fits into the total program. This provides a clear image for the visitors and establishes a beginning rapport. Some faculty have particular goals for growth that can be shared at this time as areas for careful observation and comment. Finally, a postvisit interview should be conducted to review the observation and to identify strengths and areas for growth. This may include consultation regarding strategies for growth, with the scheduling of a return visit at a later date. Many visitors interview the students briefly after the visit to determine their reaction to the class and to ascertain whether this was a typical class rather than a special, staged event. Unless there is a designated visiting team, the faculty member to be visited is usually able to make selections, or at least suggestions, about the visitors who will make the observation.


Peer visits to clinical teaching sessions should follow the same general approach as classroom visits, although specific criteria for observation will be established to meet the unique attributes of clinical teaching and learning. An additional requirement is that the visitor be familiar with clinical practice expectations in the area to be visited.



Evaluation of teaching and learning materials

The review of teaching and learning materials is another element of evaluation of teaching effectiveness that may be conducted through peer review. Materials commonly included for review are the course syllabus, textbooks and reading lists, teaching plans, teaching or learning aids, assignments, and outcome measures. In all cases, the materials are reviewed for congruence with the course objectives, appropriateness to the level of the learner, content scope and depth, clarity, organization, and evidence of usefulness in advancing students toward the goals of the course.


The syllabus is reviewed to determine whether expectations are clear and methods of evaluation are detailed. It is especially important that students understand what is required to “pass” the course. Grading scales and weighting of each of the evaluation methods used in the course should be explained.


In the review of textbooks for their appropriateness for a given course, multiple elements may be considered. The readability of a text relates to the extent to which the reading demands of the textbook match the reading abilities of the students. This assumes that the faculty member has a profile of student reading scores from preadmission testing. Readability of a textbook is usually based on measures of word difficulty and sentence complexity. Other issues of concern include the use of visual aids; cultural and sexual biases; scope and depth of content coverage; and size, cost, and accuracy of the data contained within the text (Armbruster & Anderson, 1991). Another factor of importance is the structure of the textbook. This element relates to the organization and presentation of material in a logical manner that increases the likelihood of the reader's understanding of the content and ability to apply the content to practice. A review should determine the ratio of important and unimportant material and the extent to which important concepts are articulated, clarified, and exemplified. Do the authors relate intervening ideas to the main thesis of a chapter and clarify the relationships between and among central concepts (Armbruster & Anderson, 1991)? The ease with which information can be located in the index is important so that students can use the book as a reference. Because of the high cost of textbooks, it is useful to consider whether the textbook will be a good reference for other classes in the curriculum. A review of a textbook must also include consideration of whether the content has supported student learning.


When student papers or other creative products are used for evaluation purposes, it is common to review a sample of these papers or products that the teacher has judged to be weak, average, and above average to provide a clearer view of expectations and how the students have met those expectations. This review provides an opportunity to demonstrate student outcomes. If a faculty member wants to retain copies of student papers and creative works to demonstrate outcomes, he or she should obtain informed consent from the students. Accrediting bodies often wish to see samples of student work, and faculty may use them to demonstrate learning outcomes for purposes of their own evaluation. Each student's identity should be protected and consent should be obtained.
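The measures of word difficulty and sentence complexity mentioned above are typically computed with published readability formulas. As one illustration, the widely used Flesch-Kincaid grade-level formula can be sketched as follows; the syllable counter here is a deliberately naive approximation, and in practice faculty would rely on established readability tools rather than a hand-rolled script.

```python
# Illustrative sketch: estimating text readability with the Flesch-Kincaid
# grade-level formula, which combines average sentence length with word
# difficulty (approximated here by a crude syllable count).

import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("The nurse assesses the client. "
          "Pathophysiologic alterations complicate hemodynamic stability.")
print(round(fk_grade(sample), 1))  # a high grade level, driven by long words
```

Matching the computed grade level of a candidate textbook against the profile of student reading scores is one concrete way to operationalize the readability comparison the paragraph above describes.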


The review of teaching and learning aids depends on the organization and use of these materials. The organization may be highly structured in that all are expected to use certain materials in certain situations or sequences, or materials may be resources available to faculty and students for use at their discretion according to the outcomes they wish to achieve (Rogers, 1983). Students may be expected to search for and locate materials, to create materials to facilitate their learning, or simply to use the materials provided in a prescribed manner. The emphasis will determine whether evaluation questions related to materials are based on variety, creativity, and availability or whether the materials have been used as intended. Regardless of the overall emphasis, teaching and learning materials should be evaluated for efficiency and cost-effectiveness. Efficiency can be evaluated by determining whether the time demands and effort required to use the materials are worth the outcomes achieved. Cost-effectiveness can be determined by considering whether the costs of the materials justify the outcomes.
