

PROGRAM EVALUATION


KAREN J. SAEWERT


INTRODUCTION


This chapter provides basic knowledge about the process, principles, and steps of program evaluation and their application to healthcare practice, education, and research. Meaningful use of program evaluation in healthcare remains essential as economic resources for new clinical programs shrink and the viability and impact of existing programs are challenged. Program evaluation serves many purposes, including program improvement; accountability and decision-making; judgments of merit, worth, and significance; and, ultimately, social welfare promotion and measurement of success, relevance, and sustainability (Ardisson et al., 2015; Gargani & Miller, 2016). Program evaluation studies use various frameworks and methods but commonly share the goals of analyzing new or existing programs within a specific social context, producing information for evaluating a program’s effectiveness, and using that information to make decisions about program refinement, revision, and/or continuation.


OVERVIEW AND PRINCIPLES OF PROGRAM EVALUATION


Program evaluation is defined as the systematic collection, analysis, and reporting of descriptive and judgmental information about the merit and worth of a program’s goals, design, process, and outcomes in an effort to address improvement, accountability, and understanding of the phenomenon (Posavac & Carey, 2010; Stufflebeam & Shinkfield, 2007). As a broad concept, program evaluation includes a range of approaches (e.g., formative, summative) with similarities to continuous quality improvement; some view the two within a continuum of approaches to support organizations and program delivery, yet program evaluation is distinguished by its ultimate goal of determining program merit and worth (Donnelly et al., 2016). A distinctive feature of program evaluation is that it examines programs—sets of specific activities designed for an intended purpose with quantitatively and/or qualitatively measurable goals and objectives. Because programs come in a variety of shapes and sizes, the models and methods for evaluating them are also varied (Stufflebeam & Shinkfield, 2007).


There is no clear consensus or agreement about the use of any one model in program evaluation. The choice of an approach to program evaluation is not so much a hunt for the perfect model as it is a reflective exercise through which the evaluator recognizes a model’s inherent biases and decides on an appropriate combination of available approaches to supplement and ensure that all of the relevant elements of the evaluation are captured (Haji et al., 2013). Understanding the program that is being evaluated is an overriding principle in model selection. This encompasses the program context and history, the purpose of undertaking a program evaluation, and the time, expertise, and resources required for conducting a program evaluation (Billings, 2000; Hackbarth & Gall, 2005; Posavac & Carey, 2010; Shadish et al., 1991; Spaulding, 2008; Stufflebeam & Shinkfield, 2007). Selection of an evaluation model should also consider the needs and interests of, and yield the most useful and organized information for, various stakeholders (Hackbarth & Gall, 2005). Table 15.1 provides an overview of eight program evaluation models.


TABLE 15.1 Selected Overview of Program Evaluation Models


MODEL


DESCRIPTION AND CONSIDERATIONS


Objective-based


• The predominant model used for program evaluation. Uses objectives written by the creators of the program and the evaluator. The emphasis of this approach is on the stated program goals and objectives that guide the evaluation data to be collected.



• Focus on the goals and objectives should not neglect an examination of reasons the program succeeds or fails, additional desirable or untoward program effects, or whether the selected goals and objectives were best-suited for the key stakeholder audiences.


Goal-free


• Assumes that evaluators work more effectively if they do not know the goals of the program. Considerable effort is spent studying the program as administered (e.g., staff, clients, setting, records, etc.) to identify all program impacts (positive and negative). Program staff and funders decide whether evaluation findings demonstrate that the program meets the needs of the clients.


• An expensive approach. Its open-ended nature may be perceived as threatening. Problematic for funded projects that are required to conduct data collection and analysis for specific outcomes based on stated goals and objectives; if those goals and objectives are excluded from the evaluation, they may not be considered.


Expert-oriented


• Focus on the evaluator as a content expert who carefully examines a program to render a judgment about its quality. The evaluator judges a program or service on the basis of an established set of criteria as well as their own expertise. Decisions are based on quantitative as well as qualitative data.



• This approach is frequently used when the entity being evaluated is large, complex, and unique. It often involves program evaluators sent to the site by agencies that grant accreditation to institutions, programs, or services. Issues may include the specificity of criteria, the interpretation of criteria by various experts, and the level of content expertise of the evaluator.


Naturalistic


• The evaluator becomes the data gatherer, using a variety of direct observation and qualitative techniques to develop a deep, rich, and thorough understanding of the program (e.g., clients, social environment, setting, etc.). Direct observations by the evaluator are thought to be an advantage in constructing the meaning of numerical information.


• This approach generates lengthy reports because of the detail included.


Participative-oriented


• The evaluator invites stakeholders to participate actively in the program evaluation and to gain skills from the experience (e.g., instrument development, data analysis, reporting of findings).


• This approach requires close contact with the stakeholders. Benefits may include an increased likelihood that stakeholders will enact recommendations, which can shorten the improvement process. Some argue that this approach compromises the validity of the evaluation.


Improvement-focused


• Rests on the explicit assumption that program improvement is the focus of the evaluation. The evaluator helps program staff discover discrepancies between program objectives and the needs of the target population, between program implementation and program plans, and between expectations of the target population and the services actually delivered.


• This approach tends to lead to an integrated understanding of the program and its effects. Quantitative and qualitative data are used to identify strengths of the program (merit and worth) and, conversely, the ways that the program may fall short of its goals and benefit from improvement.


Success case


• Detailed information is obtained from those who benefit most from the program.


• When naively applied, this approach can lead program managers to tailor programs to those most likely to succeed rather than to those most in need of the program.


Theory-driven


• The evaluation is based on a careful description of the services to be offered in the program to participants, the way the program is expected to change the participants, and the specific outcomes to be achieved. Analysis consists of discovering the relationships among the participant characteristics and services, services and the immediate changes, and the immediate changes and outcome variables.


• Qualitative understanding of the program may go unaddressed if it requires resources and expertise that are not available or funded.


Standards of Program Evaluation


Program evaluations are expected to meet specific standards based on five fundamental concepts: utility, feasibility, propriety, accuracy, and accountability. Concepts and related standards of the original four categories of evaluation quality recommended by The Joint Committee on Standards for Educational Evaluation are central to the Centers for Disease Control and Prevention (CDC) framework (Stufflebeam & Shinkfield, 2007).


In brief, utility refers to the usefulness of an evaluation for those persons or groups involved with or responsible for implementing the program. Evaluators should ascertain the users’ information needs and report the findings in a clear, concise, and timely manner. The general underlying principle of utility is that program evaluations should effectively address the information needs of clients and other audiences with a right to know and inform program improvement processes. If there is no prospect that the findings of a contemplated evaluation will be used, the evaluation should not be done.


Program evaluation should employ procedures that are feasible, parsimonious, and operable in the program’s environment without disrupting or impairing the program. Feasibility also addresses the control of political forces that may impede or corrupt the evaluation. Feasibility standards require evaluations to be realistic, prudent, diplomatic, politically viable, frugal, and cost-effective.


Evaluations should meet conditions of propriety. They should be grounded in clear, written agreements that define the obligations of the evaluator and program client with regard to supporting and executing the evaluation and protecting the rights and dignity of all involved. In general, the propriety standards require that evaluations be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation and those affected by the results.


Accuracy includes standards that require evaluators to describe the program as it was planned and executed, present the program background and setting, and report valid and reliable findings. This fundamental concept and related standards require that evaluators obtain sound information, analyze it correctly, report justifiable conclusions, and note any pertinent caveats (Stufflebeam & Shinkfield, 2007).


Accountability includes standards that encourage adequate documentation and an internal and external meta-evaluative (evaluation of the evaluation) perspective focused on improvement and accountability for evaluation processes and products (Yarbrough et al., 2011).


Guiding Principles


In addition to these fundamental concepts and related standards of evaluation, the American Evaluation Association has set forth guiding principles for evaluators, intended to guide the professional practice of evaluators and to inform evaluation clients and the general public about the principles they can expect professional evaluators to uphold (American Evaluation Association, n.d.). These principles—systematic inquiry, competence, integrity and honesty, respect for people, and responsibilities for general and public welfare—are fully detailed on the Association’s website (www.eval.org/p/cm/ld/fid=51).


FORMATIVE AND SUMMATIVE EVALUATIONS


Formative and summative evaluations are common components of program evaluation. Experts note that the role of formative evaluation is to assist in developing and implementing programs, whereas summative evaluation is used to judge the value of the program. It is not the nature of the collected data that determines whether an evaluation is formative or summative but the purpose for which the data are used (Stufflebeam & Shinkfield, 2007). Data for summative and formative evaluations can be qualitative and/or quantitative in nature; the former is a nonnumerical (e.g., narrative and observation) approach (quality), whereas the latter is a numerical or statistical approach (quantity).


Formative Evaluation


Formative evaluations are used to assess, monitor, and report on the development and progress of implementing a program (Stetler et al., 2006; Stufflebeam & Shinkfield, 2007; Wyatt et al., 2008). This type of evaluation is directed at continuously improving operations and offers guidance to those who are responsible for ensuring the program’s quality. A well-planned and executed formative evaluation helps ensure that the purpose of the program is well defined, its goals are realistic, and its variables of interest are measurable. In addition, a formative evaluation may focus on the proper training of staff who will be involved in the program implementation. During this evaluation phase, data are collected that serve to monitor the project’s activities. The evaluator should interact closely with program staff, and the evaluation plan needs to be flexible and responsive to the development and implementation of the program.


Summative Evaluation


In contrast, a summative evaluation focuses on measuring the general effectiveness or success of the program by examining its outcomes. A summative evaluation addresses whether the program reached its intended goals, upheld its purpose, and produced unanticipated outcomes; it may also compare the effectiveness of the program with that of other, similar interventions (Posavac & Carey, 2010). This type of evaluation is meant to assess a program at its completion. Summative evaluation might be used to compare the effectiveness of different treatment programs if more than one is implemented, or to make comparisons between members of the “treatment” group (e.g., those enrolled in a pulmonary rehabilitation program) and a natural comparison group (e.g., those not enrolled in a pulmonary rehabilitation program). Longitudinal comparisons may also be examined to determine the relative influence of the program at different stages. In other words, the summative evaluation seeks to determine the long-term and lasting effects on clients of having participated in the program (Posavac & Carey, 2010; Stufflebeam & Shinkfield, 2007).
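To make the kind of group comparison described above concrete, the following is a minimal sketch (not part of the original chapter) of how an evaluator might compare a summative outcome between clients enrolled in a program and a natural comparison group. The outcome measure, the scores, and the choice of Welch’s two-sample t-test are illustrative assumptions; other analytic approaches may be equally or more appropriate.

```python
# Hedged sketch: comparing a hypothetical summative outcome (6-minute walk
# distance, in meters) between clients enrolled in a pulmonary rehabilitation
# program and a natural comparison group. All values are placeholders.
from scipy import stats

enrolled_walk_m = [310, 355, 402, 388, 295, 371, 420, 364]    # program group
comparison_walk_m = [290, 305, 350, 312, 275, 330, 298, 315]  # not enrolled

# Welch's t-test does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(enrolled_walk_m, comparison_walk_m,
                                  equal_var=False)

print(f"Mean (enrolled):   {sum(enrolled_walk_m) / len(enrolled_walk_m):.1f} m")
print(f"Mean (comparison): {sum(comparison_walk_m) / len(comparison_walk_m):.1f} m")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because clients in a natural comparison group are not randomly assigned, such a comparison supports, rather than proves, program effectiveness.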


USE OF PROCESS AND OUTCOME DATA


Formative and summative evaluations can be further understood in terms of process (program implementation and progress) and outcome (program success). Process refers to how the program is run or how the program reaches its desired results. A formative evaluation is process focused and requires a detailed description of the operating structure required for a successful program. Outcome refers to the success of a program and the effects, including, but not limited to, cost and quality. Outcomes can be individualized to show the effects and determine the impact of the program on each program participant. Outcomes can also be program based to examine the success and determine the impact of the program on an organizational level. Program-based outcomes are often analyzed in terms of their fiscal impact or success through a comparison with similar programs.


USE OF QUALITATIVE AND QUANTITATIVE DATA


Both qualitative and quantitative data are useful for program evaluation. Qualitative data, with a rich and narrative quality, provide an understanding of the impact of the program on individuals enrolled in the program. The descriptive nature of qualitative data allows one to understand the operating structure of a program and the individualized outcomes of a program. On the other hand, quantitative data, with their strictly numerical nature, allow a mathematical understanding of the factors involved in a program (e.g., statistical significance and power analysis). In addition, quantitative data provide descriptive analysis (e.g., frequency counts, means or averages) of the variables of interest. The statistical nature of quantitative data facilitates understanding of overall programmatic outcomes and makes possible direct comparisons among program participants and, if applicable, between program groups. Frequently, only one of these data types is used, neglecting the often beneficial and complementary provisions of the other. For qualitative data, direct observation and description are emphasized, as these lead to a form of discovery or an understanding of individual level impact of program factors. Qualitative methods may also provide insight into the context in which the program is delivered. Quantitative data tend to rely on standardized instrumentation and variable control and provide numerical figures that depict level of program success.
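As a simple illustration of the descriptive analyses mentioned above (frequency counts and means), the sketch below summarizes hypothetical program data using only Python’s standard library; the variable names and values are assumptions introduced for illustration.

```python
# Hedged sketch: descriptive analysis of hypothetical program evaluation data.
from collections import Counter
from statistics import mean, stdev

# Hypothetical self-reported health status at program completion
health_status = ["good", "fair", "good", "excellent", "poor", "good", "fair"]
# Hypothetical client satisfaction scores on a 1-5 scale
satisfaction = [4, 5, 3, 4, 5, 4, 2]

print("Health status frequencies:", Counter(health_status))
print(f"Satisfaction: mean = {mean(satisfaction):.2f}, SD = {stdev(satisfaction):.2f}")
```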


Triangulation is one way that both qualitative and quantitative data can be incorporated into a program evaluation to enhance the validity of its findings. Denzin (1978) described four forms of triangulation (data, investigator, theory, and methodological), each focused on a technological solution for ensuring validity (Mathison, 1988). These forms, applied in the context of program evaluation, are outlined in the following:



  1. Data triangulation: Use of multiple data sources to conduct program evaluation that may include time and setting variations.
  2. Investigator triangulation: Involvement of more than one evaluator in the program evaluation process.
  3. Theory triangulation: Use of various perspectives to interpret evaluation results.
  4. Methodological triangulation: Application of different methods to develop different understandings related to program evaluation.

Mathison (1988) proposed an alternative conceptualization of triangulation that is useful to consider: convergence, inconsistency, and contradiction. This perspective recognizes that triangulation can yield convergent, inconsistent, or contradictory evidence that must be rendered sensible by the evaluator. It shifts the responsibility for constructing and making sense of program evaluation findings to the evaluator: triangulation as a strategy provides evidence for the evaluator to consider, but it does not, in and of itself, make sense of that evidence. This viewpoint, applied to program evaluation, is extrapolated as follows:



  1. Convergence: Occurs when program evaluation data collected from different sources, investigators, perspectives, and/or methods agree.
  2. Inconsistency: Occurs when program evaluation data collected from different sources, investigators, perspectives, and/or methods are inconsistent but not confirmatory or contradictory.
  3. Contradiction: Occurs when program evaluation data collected from different sources, investigators, perspectives, and/or methods are not simply inconsistent, but contradictory.

Triangulation strategies surface existing, often unarticulated problems realistically—the evidence is rarely in full agreement and is frequently inconsistent and/or contradictory—challenging evaluators to make sense of evaluation findings within a holistic context and understanding (Mathison, 1988).
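One way to see Mathison’s three possibilities operationally is sketched below; this operationalization is an assumption introduced for illustration, not part of the chapter. Each data source’s finding is reduced to a direction of effect, and the resulting set of directions is classified as convergence, inconsistency, or contradiction.

```python
# Hedged sketch: classifying triangulated evidence (Mathison, 1988), assuming
# each source's finding can be summarized as a direction of effect:
# +1 = improvement, 0 = no change, -1 = worsening. Sources and values are
# hypothetical.
def classify_triangulation(findings: dict[str, int]) -> str:
    directions = set(findings.values())
    if len(directions) == 1:
        return "convergence"    # all sources agree
    if 1 in directions and -1 in directions:
        return "contradiction"  # sources point in opposite directions
    return "inconsistency"      # sources differ but do not directly conflict

findings = {
    "client self-reports": 1,   # clients report improvement
    "chart abstraction": 0,     # biomedical data show no change
    "staff interviews": 1,      # staff perceive improvement
}
print(classify_triangulation(findings))  # -> "inconsistency"
```

As the passage above emphasizes, such a classification only organizes the evidence; the evaluator must still make sense of it within the program’s holistic context.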


STEPS IN PROGRAM EVALUATION


Overview


Program evaluation should not focus solely on proving whether a program or initiative works. Historically, emphasis on the positivist scientific approach and on proving that programs work has created an imbalance in human service evaluation work—with a heavy emphasis on proving that programs work through the use of quantitative, impact designs and not enough attention to more naturalistic, qualitative designs aimed at improving programs (W. K. Kellogg Foundation, 2017). Program evaluation should consider a more pluralistic approach that includes a variety of perspectives. Questions to consider include:


Does the program work? Why does it work or not work?


What factors impact the implementation and effectiveness of the program?


What are program strengths?


What opportunities exist for program improvement?


Internal and External Evaluators


Evaluators can relate to an organization seeking program evaluation in two primary ways: internally or externally. Internal evaluators work for the organization seeking program evaluation and may do a variety of evaluations in that setting on an episodic or ongoing basis. External evaluators may be independent consultants or work for a research firm, university, or a government agency contracted to conduct a specific program evaluation. An evaluator’s affiliation has implications for program evaluations and should be considered along with competence, personal qualities, and the purpose of the evaluation (Posavac & Carey, 2010).


Competence factors include the methodological and knowledge expertise needed to conduct the program evaluation. Internal evaluators often have knowledge and access advantages consistent with an internal alliance with the program, its participants, and staff. The methodological expertise of an evaluator must also be considered; this includes the extent to which an evaluator has access to resources and individuals to bridge any knowledge and/or methodological gaps. Although not absolute, external evaluators frequently have access to a wider range of resources and methodological experts than are available to an internal evaluator. Selecting an evaluator with program content expertise and experience may enhance the evaluator’s insight into crucial issues; conversely, an evaluator with limited expertise and experience may introduce avoidable interpretive errors.


A program evaluator’s trustworthiness, objectivity, sensitivity, and commitment to program improvement are critical to a program evaluation effort. The perception of these attributes by others may vary depending on whether the evaluator is internal or external to the organization. For example, an internal evaluator might be expected to have a higher degree of commitment to improving the program and may be readily trusted by program participants, staff, and administrators. The internal evaluator’s institutional credibility should be anticipated to influence their ability to conduct the evaluation. In contrast, an external evaluator may be perceived as more objective and may find it easier to elicit sensitive information. Developing a reputation for tackling sensitive issues is often made easier when evaluators consistently emphasize that the majority of program improvement opportunities are associated with system issues versus the performance of individuals. Regardless of internal or external organizational affiliation, individual qualities remain an important consideration in selecting a program evaluator.


Finally, the purpose of the evaluation can provide additional guidance to those charged with making an evaluator selection decision. An internal evaluator may have the advantage in performing formative evaluations, leveraging existing relationships with program participants, staff, and administrators to maximize the effectiveness of communication and the adoption of program improvement recommendations. In contrast, if the primary purpose of the evaluation is summative in nature and is intended to decide whether a program is continued, expanded, or discontinued, an external evaluator may be a preferable choice (Posavac & Carey, 2010).


Initial Communication


When an evaluation is being conducted, it is imperative that communication between the evaluator and program representatives be clear and that agreements and expectations be made explicit. Evaluators must acknowledge the competing demands of program administrators and be flexible and willing to accommodate them. Written agreements advisably serve as both a reminder and a record of decision-making. Seeing evolving evaluation plans described in writing can draw attention to implications that neither evaluators nor administrators had previously considered.


PLANNING AND CONDUCTING THE EVALUATION: ESSENTIAL STEPS


Ideally, program evaluation should begin when programs are planned and implemented. Formative evaluation, described earlier in this chapter, is a technique often used during program planning and implementation. However, some programs may already be underway and may never have undergone a formal evaluation process. Evaluation of such programs requires that the evaluator understand the program and how it is being implemented as part of the evaluation process. The Centers for Disease Control and Prevention (CDC) framework for evaluation (see Figure 15.1) will be used as the guide in describing the program evaluation steps (CDC, 2012).



FIGURE 15.1 Recommended framework for program evaluation.


Source: From Centers for Disease Control and Prevention. (2012). A framework for program evaluation. https://www.cdc.gov/mmwr/PDF/rr/rr4811.pdf


Step 1: Engaging Stakeholders


The evaluation cycle begins by engaging stakeholders—the persons or organizations that have an investment in what will be learned from an evaluation and what will be done with the knowledge. Stakeholders include program staff, those who derive some of their revenue or income from the program (e.g., program administrators), sponsors of the program (e.g., CEO of an organization, foundations, and government agencies), and clients or potential participants in the program. Understanding the needs of intended program recipients is necessary because it is for their welfare that the program has been developed. This may require undertaking a needs assessment and gathering information on the demographics and health status indicators of the target populations. These data may reside in existing data sources (e.g., health statistics from local or state health departments) or may be collected from key informants through surveys, focus groups, or observations (Hackbarth & Gall, 2005; Laryea et al., 1999). Stakeholders must be engaged in the inquiry to ensure that their perspectives are understood. When stakeholders are not engaged, an evaluation may fail to address important elements of a program’s objectives, operations, and outcomes (Hackbarth & Gall, 2005; Posavac & Carey, 2010).


Step 2: Describing the Program


Program descriptions convey the mission and objectives of the program being evaluated. Descriptions should be sufficiently detailed to ensure understanding of (a) program goals and strategies, (b) the program’s capacity to effect change, (c) its stage of development, and (d) how it fits into the larger organization and community. Program descriptions set the frame of reference for all subsequent decisions in an evaluation. The description enables the evaluator to compare the program with similar programs and facilitates attempts to connect program components to their effects. Moreover, different stakeholders may have different ideas regarding the program’s goals and purposes. Working with stakeholders to formulate a clear and logical program description will bring benefits even before data are available to evaluate the program’s effectiveness. Aspects to include in a program description are:

• Need: the nature and magnitude of the problem or opportunity addressed by the program, target populations, and changing needs.

• Expected effects: what the program must accomplish to be considered successful, immediate and long-term effects, and potential unintended consequences.

• Activities: what activities the program undertakes to effect change, how these activities are related, and who does them.

• Resources: time, talent, technology, equipment, information, money, and other assets available to conduct program activities, and the congruence between desired activities and resources.

• Stage of development: newly implemented or mature.

• Context: the setting and environmental influences within which the program operates.

An understanding of environmental influences such as the program’s history, the politics involved, and the social and economic conditions within which the program operates is required to design a context-sensitive evaluation and to aid in interpreting findings accurately (Barkauskas et al., 2004; Jacobson Vann, 2006; Menix, 2007).


Questions to facilitate program descriptions include: (a) Who wants the evaluation? (b) What is the focus of the evaluation? (c) Why is the evaluation wanted? (d) When is the evaluation needed? and (e) What resources are available to support the evaluation? Addressing these questions helps individuals understand the goals of the program evaluation, arrive at an overall consensus on its purpose, and determine the time and resources available to carry it out. These questions also assist in uncovering the assumptions and conceptual basis of the program. For example, diabetes care programs may be based on the chronic care model, a disease management model, or a health belief model, and the evaluator must understand the program’s conceptual basis to identify essential information to include in the program evaluation (Berg & Wadhwa, 2007).


Development of a logic model is part of the work of describing the program. A logic model sequences the events for bringing about change by synthesizing the main program elements into a picture of how the program is supposed to work. Often, this model is displayed in a flowchart, map, or table to portray the sequence of steps that will lead to the desired results. One of the virtues of a logic model is its ability to summarize the program’s overall mechanism of change by linking processes (e.g., exercise) to eventual effects (e.g., improved quality of life, decreased coronary risk). A logic model can also display the infrastructure needed to support program operations. Elements that are connected within a logic model generally include inputs (e.g., trained staff, exercise equipment, space); activities (e.g., supervised exercise three times per week, education about exercise at home); outputs (e.g., increased distance walked); and results, whether immediate (e.g., decreased dyspnea with activities of daily living), intermediate (e.g., ability to participate in desired activities of life, improved social interactions), or long term (e.g., improved quality of life). Creating a logic model allows stakeholders to clarify a program’s strategies and reveal assumptions about the conditions necessary for the program to be effective. The accuracy of a program description can be confirmed by consulting with diverse stakeholders and comparing reported program descriptions with direct observation of the program activities (CDC Evaluation Working Group, 2008; Dykeman et al., 2003; Ganley & Ward, 2001; Hulton, 2007).
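Because the chapter’s logic model example links pulmonary rehabilitation inputs to long-term results, the sketch below encodes those same elements as a simple data structure; the representation is an illustrative assumption rather than a prescribed format for logic models.

```python
# Hedged sketch: the pulmonary rehabilitation logic model elements described
# in the text, represented as a simple mapping from inputs to results.
logic_model = {
    "inputs": ["trained staff", "exercise equipment", "space"],
    "activities": ["supervised exercise three times per week",
                   "education about exercise at home"],
    "outputs": ["increased distance walked"],
    "results": {
        "immediate": ["decreased dyspnea with activities of daily living"],
        "intermediate": ["ability to participate in desired activities of life",
                         "improved social interactions"],
        "long_term": ["improved quality of life"],
    },
}

# Print a flowchart-style summary of the presumed mechanism of change.
for stage in ("inputs", "activities", "outputs"):
    print(f"{stage}: {', '.join(logic_model[stage])}")
for horizon, effects in logic_model["results"].items():
    print(f"results ({horizon}): {', '.join(effects)}")
```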


Step 3: Focusing the Evaluation Plan


On the basis of the information gained in steps 1 and 2, the evaluator needs to set forth a focused evaluation plan (Table 15.2). A systematic and comprehensive plan anticipates the intended uses of the evaluation and creates an evaluation strategy with the greatest chance of being useful, feasible, ethical, and accurate. Although the components of the plan may differ somewhat depending on the information and understanding of the program gained in steps 1 and 2, essential elements of the evaluation plan are discussed here.


Purpose


Articulating an evaluation’s purpose (i.e., intent) prevents premature decision-making regarding how the evaluation should be conducted. Characteristics of the program, particularly its stage of development and context, influence the evaluation’s purpose. The purpose may include gaining insight into program operations that affect program outcomes so that knowledge can be put to use in designing future program modifications; describing program processes and outcomes for the purpose of improving the quality, effectiveness, or efficiency of the program; and assessing the program’s effects by examining the relationships between program activities and observed consequences. It is essential that an evaluation purpose be set forth and agreed on, as this will guide the types and sources of information to be collected and analyzed.


Selecting and Defining Variables of Interest


Carefully and accurately defining the independent (process and context) and dependent (outcome) variables to be measured is an essential part of focusing the evaluation plan. This step is likely to be the most daunting, as well as the most important. The selection and definition of variables must be precise enough not to generate unwieldy data management and analysis, yet retain variables that are both meaningful and measurable. During formative evaluations, process variables are of primary interest, whereas during summative evaluations both process and outcome variables are of interest.



TABLE 15.2 Examples of Steps 3, 4, and 5 for Formative and Summative Evaluations


COMPONENTS


FORMATIVE EVALUATION


SUMMATIVE EVALUATION


Selecting and defining variables of interest


Focus on process variables that determine how well the program is running.


• Are clients being recruited?


• Has staff been properly trained in the program protocol?


• Are staff members following program protocol?


• Are clients adhering to program requirements?


Focus on outcome variables that determine the effectiveness of the program.


• Were clients and staff satisfied with the program?


• Did client health improve significantly?


• Were medical resources reduced as a result of the program?


• Are clients continuing the program on their own once the program is completed?


Measuring variables of interest


Focus on ways to measure how well the program is being run. Use:


• Focus groups to discuss problem areas and ways to improve the program.


• Direct observation to measure how the staff is following the protocol.


• Staff diary data to measure problems that occur on a daily basis.


• Interviews of clients to determine how well they are adhering to the program requirements.


Focus on ways to measure the impact the program has had. Use:


• Self-reports of clients and staff to report on the progress of clients in terms of health status, functioning, and symptomatology.


• Collateral reports to get a second rating on the clients’ improvements.


• Biomedical data to determine changes in biological parameters of functioning.


• Chart abstractions to measure healthcare resource use.


Selecting a program evaluation design


Use descriptive designs or narrative accounts.


• Allow for a narrative account of how well the program is being run.


• Provide feedback from clients and staff on areas in need of improvement.


• Document client adherence levels to the program requirements.


• Track process variables over the implementation of the program.


Use experimental, quasi-experimental, or sequential designs (when possible).


• Allow for a comparison between clients assigned to groups that received the program and those assigned to groups that did not.


• Use random assignment to program groups whenever possible.


• Use examinations over time, if resources permit.


• Allow for determination of impact that the program has had on clients’ lives.


Collecting data


Use uniform collection procedures that do not disrupt the program implementation.


• Collected data must be coded according to a uniform system that translates narrative data into meaningful groupings. For example, focus group comments can be grouped into comments about staff-related problems, patient-related problems, recruitment difficulties, adherence difficulties, and so on.


• Program evaluators should not bias data collection strategies by holding preconceptions about how well the program is being run.


Use systematic procedures for collecting data across groups (if more than one) and across time.


• Collected data must be coded with a uniform system that translates the data into numerical values so that data analysis can be conducted. For example, responses to a question about health status that includes responses such as poor, fair, good, and excellent need to be coded as 0, 1, 2, or 3.


• Across-time data collection must follow the same procedures. For example, all patients receive self-reports either in the mail or from the program site. The procedures must not vary.


Evaluating data analysis


Use both qualitative and quantitative approaches.


• Qualitative approaches provide a narrative description of the process variables and allow for descriptive understanding of how well the program is running.


• Quantitative approaches provide frequency counts and means or averages for some of the variables of interest.


• Allows evaluators to make recommendations based on data.


Use both qualitative and quantitative approaches.


• Qualitative approaches provide a narrative description of the impact that the program has had on individual clients.


• Quantitative approaches provide a statistical comparison between groups (if more than one) or across time. Can determine whether the program was effective in improving clients’ health status, increasing staff and client satisfaction, and reducing healthcare resources utilized.


• Quantitative approaches can also be used to make comparisons with other similar clinical programs.


• Enables evaluators to make recommendations based on data.


Source: From Centers for Disease Control and Prevention. (1999). Framework for program evaluation. Morbidity and Mortality Weekly Report, 48(RR 11). https://www.cdc.gov/mmwr/PDF/rr/rr4811.pdf
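To illustrate the uniform coding described in Table 15.2 (e.g., translating poor/fair/good/excellent responses into the codes 0 through 3), the sketch below applies such a codebook to hypothetical survey responses; the response values and codebook are assumptions introduced for illustration.

```python
# Hedged sketch: coding ordinal self-report responses with a uniform codebook,
# as described in Table 15.2. The responses below are hypothetical.
HEALTH_STATUS_CODES = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

raw_responses = ["good", "fair", "excellent", "good", "poor"]
coded = [HEALTH_STATUS_CODES[response] for response in raw_responses]

print(coded)                    # -> [2, 1, 3, 2, 0]
print(sum(coded) / len(coded))  # mean coded health status
```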


Independent or Process Variables


An example of an important independent or process variable is the level of adherence to any treatments or self-care regimens prescribed by a program. More than 25 years of research indicates that, on average, 40% of clients fail to adhere to the recommendations prescribed to them to treat their acute or chronic conditions (DiMatteo & DiNicola, 1982). How well participants adhere to program requirements, a focus of formative evaluation, is intimately linked to a program’s overall effectiveness. Nonadherence has been found to be a causal factor in the time and money wasted in healthcare visits (Haynes et al., 1979) and must not be overlooked in determining how well a program is being implemented. As an illustration, if a program introduces barriers to adherence (e.g., by requiring time-intensive self-care routines, by introducing complex treatments with numerous factors to remember, by making it difficult to get questions answered, or by having uninformed or untrained staff), program client adherence will likely be diminished. An accompaniment to adherence is the issue of how well the program staff maintains or adheres to the program’s protocol. The integrity of an intervention or program is not upheld unless the staff members assigned to carry it out are diligent in following procedures and protocol (Kirchhoff & Dille, 1994). In addition to assessing adherence, program evaluators need to ascertain the level at which staff members are adhering to the program protocols.


Process variables include how participants are recruited into the program, how well trained and informed the staff members are about the program’s purpose and importance, how staff members identify barriers (if any) to the implementation of the program, the extent to which the program site is conducive to a well-run program, and the perceptions held by program staff and participants about the usefulness of the program. These factors generally are easier to realign than are issues of participant and staff adherence. For this reason, evaluators need to spend a considerable amount of time formatively assessing and adjusting program procedures and protocols to maximize participant and staff adherence.


All programs are situated within a community or organization. This context exerts some degree of influence on how the program works and on its effectiveness. The need to examine which contextual factors have the greatest impact on program success and which factors may help or hinder the optimization of the program’s goals and objectives is likely to arise in program evaluation. Variables such as leadership style, cultural competence, organizational culture, and collaboration are all examples of contextual factors that the evaluator should consider. Gathering this type of information through either quantitative or qualitative techniques will help the evaluator understand why components of the program worked or did not work (Greenhalgh et al., 2005; Stetler et al., 2008). Other areas worthy of examination include the federal and state climates, the impact of these climates on program processes and effectiveness, and how these climates have changed over time (Randell & Delekto, 1999).


Dependent or Outcome Variables


Examples of dependent or outcome variables (Fitzgerald & Illback, 1993) that are measured by social scientists and healthcare services evaluators in determining the effectiveness of healthcare programs include:


Participant health status and daily functioning.


Client satisfaction with program providers and care received.


Program provider satisfaction.


Cost containment.


Evaluators and nurses alike should consider each of these variables as outcomes of a clinical program. The evaluator who conducts a program evaluation and who attempts to determine the effectiveness or success of a program should pay particular attention to these four outcomes.


The first important outcome variable defines and measures whether the program has facilitated the client’s ability to improve his or her health and/or functional status. The outcome of importance is whether the program has improved quality of life and whether health goals have been achieved. If the program does not improve these outcome variables and the protocol or intervention has been followed (i.e., client and staff have adhered to the program), then the program’s effectiveness is questionable. Although expectations for health improvements may not be a focus of the program, client health status is an important outcome variable that needs to be examined. For this reason, health status measurements taken multiple times and in multiple ways (using triangulation techniques) over the course of the program, and even after the program is completed, must be considered.


An important outcome of the healthcare delivery and program evaluation is client satisfaction. Research suggests that an intervention that decreases satisfaction with healthcare may lead to poorer health (Kaplan et al., 1989), poorer adherence to treatments (Ong et al., 1995), poorer attendance at follow-up appointments (DiMatteo et al., 1986), and greater interest in obtaining healthcare elsewhere (Ross & Duff, 1982) than is the case among those whose satisfaction has increased. Evaluators need to take into account changes in client satisfaction with the program in particular, and with their healthcare in general, because any decrease in satisfaction can point to problems in the program’s purpose, scope, and execution.


Another outcome that is often overlooked is staff satisfaction. Slevin et al. (1996) measured staff satisfaction during the evaluation of a quality improvement initiative and found that satisfaction was related to better interpersonal care of clients. Level of satisfaction can pertain directly to the process and implementation of the intervention or can be defined more generally to include professional satisfaction. Any program that introduces frustrations for its staff risks being conducted in a manner other than intended, diminishing the quality of care delivered, and perhaps negatively influencing client satisfaction and health status. The satisfaction of program staff who implement the intervention on a daily basis and interact and negotiate with clients must be addressed. Evaluation of programs must attend to the impact that the program has on the staff involved, and not simply the impact that it has on the client (Slevin et al., 1996).


Finally, of considerable importance to program evaluation is the outcome variable of cost containment and/or reduction. An effective program is one that improves the quality and delivery of care outcomes, while maintaining and perhaps even reducing costs to both the organization and the client. This evaluation outcome, however, is generally long-term in nature and requires multiple follow-ups; this can pose a considerable burden for programs with limited resources. Data that may be available to assist in this aspect of outcome evaluation include information about any program clients’ hospitalizations and related lengths of stay, emergency department visits, regular healthcare provider office visits, supplies and equipment costs, and personnel time. It is beneficial for program evaluators to work collaboratively with financial management personnel to obtain this important information.
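A minimal sketch of how the utilization data mentioned above might be aggregated into per-client cost estimates follows; the unit costs, utilization counts, and categories are hypothetical placeholders, and in practice such figures would be obtained in collaboration with financial management personnel.

```python
# Hedged sketch: aggregating hypothetical utilization data into per-client
# cost estimates for the cost-containment outcome. All figures are placeholders.
UNIT_COSTS = {            # hypothetical average cost per unit of utilization
    "hospital_day": 2500.0,
    "ed_visit": 800.0,
    "office_visit": 150.0,
}

def client_cost(utilization: dict[str, int]) -> float:
    """Sum utilization counts weighted by the hypothetical unit costs."""
    return sum(UNIT_COSTS[kind] * count for kind, count in utilization.items())

# Hypothetical utilization for one client in the year before and after the program
before_program = {"hospital_day": 4, "ed_visit": 2, "office_visit": 6}
after_program = {"hospital_day": 1, "ed_visit": 1, "office_visit": 8}

print(f"Cost before program: ${client_cost(before_program):,.0f}")
print(f"Cost after program:  ${client_cost(after_program):,.0f}")
```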


Measuring Variables


The next step is to select the way in which each variable of interest will be measured. In making this decision, it is important to consider, first, the many ways in which variables can be assessed and measured (e.g., self-reports, biomedical instrumentation, direct observation, or chart abstraction) and, second, the source from which the data will be collected (e.g., program client, program staff, healthcare records, or other written documents). Measurement is an important element of program evaluation, for without rigorous, reliable, and valid information, the data obtained and subsequent recommendations are questionable.


Program evaluators need to consider, if possible, the use of valid and reliable instruments, rather than develop new instruments to measure the variable of interest. Exhibit 15.1 outlines potential benefits of this approach. Many forms of instrumentation exist that have been used for purposes of program evaluation.


Several reference books are available that have compiled a multitude of research instruments and normative data for measures (Frank-Stromberg & Olsen, 2004; Robinson et al., 1991; Stewart & Shamdasani, 1990).


Fitzgerald and Illback (1993) delineate the various methods of obtaining information and corresponding data sources to consider in program evaluation. Data sources and collection methods to consider are outlined in Exhibit 15.2. Self-reports from program clients and staff are likely the most widely used technique for acquiring information about the process and effects of a program intervention. These measures can often be completed by program participants and staff at the individual’s leisure. The main advantages of self-reports include ease of use, cost-efficiency, limited coding and data-entry requirements, and reduced need for highly trained staff to implement their use. The main disadvantage is the prevalent belief that self-report instruments elicit self-presentation tendencies (i.e., the tendency of individuals to present themselves in a socially desirable manner or in a positive light). This view is often unfounded, as many measurement experts now hold that most of the people, most of the time, are accurate in their self-reported responses (Stewart & Ware, 1992; Ware et al., 1978).


In addition to self-report inventories completed by the program participant, collateral reports can be obtained. Collateral reports are completed by an individual closely related to the program participant. These reports rely on the same instruments as those used for self-reports, with slight modifications in wording, and can provide additional information about the program client. These types of reports have not been used to a great extent in nursing research, though they have been used extensively in psychological research. Collateral measures have been found to be highly correlated with the self-report data and can serve as either a validity check on the self-report data or an additional source of variant information to be used in the program evaluation.

