

CHAPTER 13
Process Evaluation


After ascertaining their feasibility, newly designed health interventions are evaluated for their efficacy or effectiveness in addressing the health problem and in improving functioning and health. These evaluation studies usually focus on examining the ultimate outcomes expected of the intervention (detailed in Chapter 14). The focus on ultimate outcomes presents challenges in interpreting a study’s findings, whether or not the findings support the effectiveness of the intervention under evaluation. The interpretation becomes more challenging when the results of outcome evaluation studies vary, with some studies indicating that the health intervention is effective and other studies of the same intervention showing that it is ineffective (Grant et al., 2013). The challenge in interpreting the results of an individual study, or of several studies, is related to the plausibility of multiple possible explanations of the results (Pawson & Manzano‐Santaella, 2012). Foremost among these are the adequacy of the research methods used in an individual outcome evaluation study and the variability in the research methods used across studies. Adequate methods (e.g. research design, sample size, outcome measures) minimize the biases that threaten the validity of inferences. Additional explanations are associated with factors that include, but are not limited to: the adequacy of the intervention theory; the appropriateness of the intervention delivery; the participants’ perceptions of and responsiveness to the intervention; contextual factors influencing the delivery of the intervention by interventionists and the application of treatment recommendations by participants; and the capacity and/or potency of the intervention in initiating the mechanism of action responsible for its effects on the ultimate outcomes (e.g. Brand et al., 2019; Siddiqui et al., 2018; Van de Glind et al., 2017).


Rather than speculating about possible explanations of the results, it is logical and informative to gather data pertaining to the factors advanced as additional explanations. These data provide evidence indicating what exactly contributes to the ultimate outcomes observed in an evaluation study. The evidence clarifies why a successful intervention “works” or is effective, and what leads to an intervention’s lack of success; specifically, whether the lack of success is due to failure of the intervention theory or to inappropriate delivery of the intervention (Aust et al., 2010; Bakker et al., 2015; Benson et al., 2018; Craig et al., 2008; Foley et al., 2016; Van de Glind et al., 2017). Process evaluation is advocated as a means to obtain this evidence when evaluating the effectiveness of health interventions (Medical Research Council [MRC], UK, 2019).


In this chapter, the importance of process evaluation is highlighted. Its definition and elements are clarified. Quantitative and qualitative methods used in process evaluation are described.


13.1 IMPORTANCE OF PROCESS EVALUATION


Process evaluation is increasingly recognized as essential for interpreting the results of a particular study examining the effectiveness of a health intervention (MRC, 2019) and for making valid inferences regarding the effectiveness of interventions. In general, the results of an outcome‐focused evaluation study indicate that the intervention is, overall, effective or ineffective; they fall short of delineating evidence on what exactly contributed to the observed improvement, or lack of change, in the outcomes following delivery of the intervention (Benson et al., 2018).


When the results support the effectiveness of the health intervention, it is important to know: (1) whether the expected improvement in outcomes can be confidently attributed to the intervention and not to other factors such as the interventionist–participant working alliance (an issue of internal validity); (2) which components of the intervention are most and least helpful in inducing the hypothesized changes in the ultimate outcomes (an issue of construct validity), and how the intervention as a whole, or its components, can be refined to optimize effectiveness; and (3) what contextual factors facilitate the delivery of the intervention and enhance clients’ experience of the ultimate outcomes and, therefore, should be accounted for in future implementation of the intervention in research and practice (an issue of external validity) (Evans et al., 2015; Furness et al., 2018; Griffin et al., 2014; Masterson‐Algar et al., 2016; van Bruinessen et al., 2016; Wilkerson et al., 2019).


When the results of an outcome evaluation study do not support the effectiveness of the health intervention, it is important to explain the discrepancies between the hypothesized and the actually observed ultimate outcomes (Grant et al., 2013; Cheng & Metcalfe, 2018; Mars et al., 2013; Van de Glind et al., 2017). The main question guiding the investigation of these discrepancies is: Do the results (i.e. no change in the ultimate outcomes) reflect (1) an unsuccessful intervention or theory failure, meaning that the intervention itself does not work or is inherently flawed, inadequate, or deficient; or (2) unsuccessful delivery of the intervention or implementation failure, meaning that the intervention is not delivered appropriately, leading to type III error? (Biron et al., 2010, 2016; Evans et al., 2015; van Bruinessen et al., 2016).


It may be wise to explore implementation failure first in the context of an outcome evaluation study. If implementation failure is ruled out, then intervention theory failure is investigated.


The specific questions informing exploration of implementation failure include:



  1. How well is the intervention delivered by interventionists? How effective is the training in preparing interventionists for providing the intervention? Is the intervention delivered with fidelity and good quality? Is the intervention or any of its components adapted? What is the type of adaptation done? Why, how, for whom, or in what conditions is the adaptation done? (Evans et al., 2015; Nurjono et al., 2018; Sharma et al., 2017; Siddiqui et al., 2018; Tama et al., 2018; van Bruinessen et al., 2016).
  2. What contextual factors influence the delivery of the intervention by interventionists? Are all resources available and accessible? What are other barriers (e.g. sociopolitical, economic, historical, cultural) to the appropriate delivery of the intervention and its components? (Foley et al., 2016; Morgan‐Trimmer, 2015; Siddiqui et al., 2018; Van de Glind et al., 2017).
  3. How do participating clients respond to the intervention? How do they view the intervention and its components? Do participants view the intervention as acceptable and helpful, and are they satisfied with it? Are participants exposed to the full intervention (i.e. all its components and sessions) and, if not, why? Is the content covered well understood and received, and is it useful in enhancing participants’ understanding and management of the health problem? Do participants engage in the intervention activities? What contextual factors affect participants’ engagement and enactment of the intervention? (Al‐HadiHasan et al., 2017; Bakker et al., 2015; Hasson et al., 2012; Morgan‐Trimmer, 2015; Siddiqui et al., 2018).

Specific questions informing investigation of intervention theory failure relate to: What is the capacity of the intervention and its components to initiate the mechanism of action mediating its effects on the ultimate outcomes? Were there interactions among the intervention components, or between the intervention and some contextual factors, that weakened the potency of the intervention in producing the hypothesized changes in the ultimate outcomes? Is it possible that the intervention and its components generated an un‐hypothesized mechanism of action that led to unintended outcomes?


As implied in the previous questions, the importance of a process evaluation rests on its focus on the intervention itself. Process evaluation provides information on the nature of the intervention (i.e. its constituent components), the extent and quality of its delivery (i.e. fidelity and influence of contextual factors) and participants’ response to the intervention (van Bruinessen et al., 2016). Process data are included in the outcome data analysis to determine what exactly contributed to (lack of) improvement in the ultimate outcomes, in what subgroups of participants, under what context (Pawson & Manzano‐Santaella, 2012; Rycroft‐Malone et al., 2018; Sorensen & Llamas, 2018). Such knowledge is critical for making valid inferences about the effectiveness of the intervention, and for refining it to optimize its effectiveness in different client subgroups and contexts.


13.2 DEFINITION AND ELEMENTS OF PROCESS EVALUATION


Although process evaluation is recognized as important, there is no clear conceptualization of it. The unclear conceptualization has resulted in inconsistent operationalization, and in variability in the specification of its operational elements, within and across fields of study. Five approaches or frameworks have frequently been mentioned in recent literature as informing the conduct of process evaluation; these are summarized in Table 13.1. Different operational elements of process evaluation are proposed in the five frameworks. Fidelity of intervention delivery is commonly identified in three of them. The Consolidated Framework for Implementation Fidelity developed by Carroll et al. (2007) has guided the examination of the fidelity of intervention delivery in recent studies (Hasson et al., 2012). Steckler and Linnan’s (2002) framework is mentioned as guiding the design and conduct of several process evaluation studies (Table 13.1). However, it has been criticized for incorporating both elements indicating the research process (e.g. recruitment, reach) and elements reflecting the intervention process (e.g. fidelity, dose received), leading to confusion in the conceptualization and operationalization of process evaluation. Siddiqui et al. (2018) and van Bruinessen et al. (2016) advocate the need to distinguish between the research process and the intervention process, and to focus process evaluation on the delivery of the intervention. The focus on the intervention process is also supported by the work of other researchers, who operationalized process evaluation into three main elements: delivery of the intervention, context of delivery, and mechanism of action.


TABLE 13.1 Approaches/frameworks for process evaluation.

RE‑AIM Framework
  Elements:
    Reach = rate of participation and representativeness of clients
    Efficacy or effectiveness
    Adoption
    Implementation
    Maintenance
  Sources: Grant et al. (2013), Griffin et al. (2014), Planas (2008)

Taxonomy of implementation outcomes (developed by Proctor et al.), for complex interventions provided in practice
  Elements:
    Acceptability
    Adoption
    Appropriateness
    Feasibility
    Implementation cost
    Penetration
    Sustainability
  Sources: Griffin et al. (2014)

Dimensions of process evaluation (developed by Baranowski and Stables)
  Elements:
    Context = environmental aspects of the intervention setting
    Reach = proportion of participants who received the intervention
    Fidelity = whether the intervention is delivered as planned
    Dose delivered and received = amount of the intervention delivered and extent to which participants respond to it
    Implementation = a composite score of reach, dose, and fidelity
    Recruitment = methods used to attract participants
  Sources: Griffin et al. (2014)

Steckler and Linnan Framework
  Elements:
    Recruitment = utility of recruitment procedures; number and characteristics of clients refusing enrollment
    Reach = proportion of the target client population that participates in the intervention; characteristics of participants; number of participants completing and withdrawing from treatment/study; reasons for withdrawal
    Context = characteristics of the setting or practice in which the intervention is given and of the interventionists delivering it
    Fidelity = extent to which the intervention is delivered as planned
    Dose delivered = extent to which the intervention is given at the specified dose (i.e. number of sessions or contacts)
    Dose received (exposure) = extent of participants’ active engagement with and receptiveness to the intervention
    Dose received (satisfaction) = extent to which participants view the intervention as acceptable and helpful
  Sources: Den Bakker et al. (2019), Hasson et al. (2012), Masterson‑Algar et al. (2014), Nam et al. (2019), Poston et al. (2013), van Bruinessen et al. (2016), Verwey et al. (2016), Wilkerson et al. (2019)

Medical Research Council (UK) guidance
  Elements:
    Fidelity or quantity of intervention implementation
    Quality of intervention implementation
    Contextual factors influencing delivery and outcomes of the intervention
    Causal mechanism, or how participants respond to the intervention
  Sources: MRC (2019)


  1. Delivery of the intervention

Health interventions are provided by interventionists in research, by health professionals in practice, or through technology. This element of process evaluation involves:



  • Monitoring the fidelity with which the intervention components are provided: As explained in Chapter 9, fidelity is operationalized as the extent to which the interventionists adhered to the intervention manual.
  • Exploring adaptations of the intervention: The exploration seeks to identify what components (content and activities) are modified, why, how, for whom, and in what conditions; whether the mode and dose of delivery are altered; and whether the adaptations are consistent with the principles explained in the intervention theory for guiding tailoring, and with the allowable changes described in the manual.
  • Assessing the quality of intervention delivery: As mentioned in Chapter 9, quality is defined as the competence of the interventionists or health professionals in providing the intervention and in interacting with clients (Dillon et al., 2018; Evans et al., 2015; Hasson et al., 2012; Haynes et al., 2014; Hogue & Dauber, 2013; Mars et al., 2013; Moore et al., 2015; Nielson & Abildgaard, 2013; Nurjono et al., 2018; Ruikes et al., 2012; Sharma et al., 2017; Siddiqui et al., 2018; Tama et al., 2018; van den Branden et al., 2015; Van de Glind et al., 2017). For technology‐based interventions, fidelity refers to the extent to which the technology functions as planned (Verwey et al., 2016).


  2. Context of delivery

Context includes anything external to the intervention that may act as a barrier or facilitator to its implementation by the interventionists and participants, or through technology (Evans et al., 2015; Fridrich et al., 2015; Grant et al., 2013; Haynes et al., 2014; May et al., 2016; Moore et al., 2015; Nielson & Abildgaard, 2013; Nurjono et al., 2018; Ruikes et al., 2012; Tama et al., 2018). Context or contextual factors thought to influence the implementation of the intervention range from the country and community, which have particular economic, historical, social, cultural, and political characteristics; through the site, setting, or practice in which the intervention is provided, which possesses specific features related to resources and the sociocultural norms underlying interactions among health professionals; to the environment in which the participants apply the treatment recommendations, which has unique physical, psychosocial, and sociocultural attributes (Benson et al., 2018; Morgan‐Trimmer, 2015; Sharma et al., 2017; Van de Glind et al., 2017).


In addition to these contextual factors, some researchers categorize characteristics of the interventionists and the participating clients as contextual factors that play out as barriers or facilitators of intervention implementation (Benson et al., 2018; May et al., 2016; Verwey et al., 2016). As discussed in Chapter 5, the interventionist and client characteristics are stipulated in the intervention theory to influence the implementation of the intervention.



  3. Mechanism

This element of process evaluation is conceptualized and operationalized in two ways. The first is the intervention’s mechanism of action, also referred to as process (Grant et al., 2013; Nurjono et al., 2018). As explained in Chapter 5, the mechanism of action is responsible for the effects of the health intervention on the ultimate outcomes. It is represented by the series of linkages among changes in the immediate, intermediate (or mediators), and ultimate outcomes, as envisioned by Evans et al. (2015), Moore et al. (2015), Sharma et al. (2017), and van de Glind et al. (2017). It is argued that this conceptualization of mechanism, although important, is within the realm of outcome evaluation (Chapter 14).


The second conceptualization of mechanism reflects how clients exposed to the intervention react to, perceive, and respond to it (Brand et al., 2019; Dalkin et al., 2015; Dillon et al., 2018; Rycroft‐Malone et al., 2018; Van Belle et al., 2016). This conceptualization is consistent with the notions of: (1) client responsiveness, advanced by Carroll et al. (2007); (2) dose received (exposure and enactment) and satisfaction, identified in Steckler and Linnan’s framework (see Table 13.1 and Chapter 9); and (3) perception or appraisal of the intervention, as alluded to in the work of Poston et al. (2013), Nielson et al. (2007), and van den Branden et al. (2015) and discussed in Chapter 11. Of particular interest in process evaluation are participants’ responsiveness to and satisfaction with the intervention (Carroll et al., 2007; Hasson et al., 2012; Haynes et al., 2014). These two concepts represent participants’ views of the intervention’s processes (Chapter 11). Accordingly, the second conceptualization of mechanism is relevant to process evaluation.


The points presented above support the integration of the following elements in a process evaluation, all of which align with its focus on the intervention itself:



  • Fidelity and adaptations of the intervention delivery by interventionists.
  • Competence of interventionists in intervention delivery.
  • Contextual factors, operating at the setting level, influencing delivery of the intervention.
  • Client responsiveness, that is, exposure, engagement and enactment of the intervention.
  • Contextual factors, operating within participants’ environment, and affecting exposure, engagement and enactment of the intervention.
  • Perception (or satisfaction) of the intervention by participants.
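
To make these elements concrete, the following sketch shows one way the six elements could be organized as a per-session data record for analysis. It is a minimal illustration only, not an instrument prescribed in this chapter; all field names, rating scales, and the Python representation are hypothetical choices.

    # Minimal, hypothetical sketch: one record per intervention session,
    # covering the six process-evaluation elements listed above.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SessionProcessRecord:
        session_id: str
        interventionist_id: str
        activities_delivered: List[bool]  # fidelity: was each planned activity performed?
        adaptations: List[str] = field(default_factory=list)  # what was modified, why, for whom
        competence_rating: float = 0.0    # quality: observer-rated interventionist competence
        setting_barriers: List[str] = field(default_factory=list)  # context at the setting level
        client_engagement: float = 0.0    # responsiveness: exposure, engagement, enactment
        client_environment_barriers: List[str] = field(default_factory=list)  # context at client level
        client_satisfaction: float = 0.0  # perception of (satisfaction with) the intervention

        @property
        def fidelity_score(self) -> float:
            # Proportion of planned activities actually performed in the session.
            return sum(self.activities_delivered) / len(self.activities_delivered)

Aggregating such records across sessions, interventionists, and settings yields the kind of quantitative process data discussed in the next section.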

13.3 METHODS USED IN PROCESS EVALUATION


Process evaluation is conducted in different types of studies, including preliminary small‐scale studies aimed at examining the feasibility of a health intervention, studies determining the efficacy of a health intervention under controlled conditions, and studies evaluating the effectiveness of an intervention implemented in practice (Benson et al., 2018; Siddiqui et al., 2018). In all cases, the primary concerns are ascertaining the fidelity and quality of intervention delivery, and identifying factors that affect the delivery. When process evaluation is integrated in an outcome evaluation study, the additional concern is to “unpack the black box” (Wong et al., 2012) to gain a comprehensive understanding of what contributes to the beneficial outcomes, how, and under what conditions (Fletcher et al., 2016; Furness et al., 2018; Nielson & Abildgaard, 2013).


Process evaluation is guided by the intervention theory, which is operationalized in a logic model (Sorensen & Llamas, 2018; Sharma et al., 2017; Van de Glind et al., 2017). As discussed in Chapter 5, the theory clearly defines the intervention’s active ingredients and the respective components, and highlights the components responsible for the intervention’s effects. Knowledge of the components, including content, activities, and treatment recommendations, informs the assessment of: the fidelity with which the interventionists deliver the intervention and participating clients’ responsiveness (Masterson‐Algar et al., 2016, 2018; Wells et al., 2012); and participants’ perceptions of (satisfaction with) the intervention components. The intervention theory points to key contextual factors, including characteristics of clients, interventionists, and setting or environment, that affect the implementation of the intervention. Understanding contextual factors informs their assessment. The intervention theory also specifies the immediate and intermediate outcomes and delineates their associations with specific intervention components, thereby providing guidance for analyzing the contribution of the intervention components to the achievement of outcomes, within and across contexts or settings (Biron et al., 2010, 2016; Sridharan & Nakaima, 2012).


The intervention theory presents conceptual and operational definitions of the intervention components, contextual factors, client responsiveness, and ultimate outcomes. These definitions form the basis for selecting or developing measures of the respective elements of process evaluation. The resulting quantitative data are analyzed descriptively and the proposed relationships are tested using advanced statistical tests. However, to gain a comprehensive and in‐depth understanding of the intervention implementation within and across contexts, qualitative methods are used (Cheng & Metcalfe, 2018). Qualitative methods are useful to explore different stakeholder groups’ (e.g. interventionist, client) views of the intervention, of influential contextual factors, and of the intervention impact, thereby complementing and supplementing the quantitative results. Accordingly, quantitative and qualitative methods are highly valued, and the integration of their respective data is important in enhancing the validity of process evaluation results (De Vlaming et al., 2010; Van de Glind et al., 2017).
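
As a hedged illustration of how such proposed relationships might be tested quantitatively, the sketch below fits an ordinary least squares model in which process data (a fidelity score and a setting-level contextual factor) enter the outcome analysis alongside treatment assignment. The data file and all variable names are hypothetical, and the appropriate model (e.g. multilevel, logistic) depends on the study design.

    # Hypothetical analysis sketch: testing whether the intervention effect on
    # an ultimate outcome varies with the fidelity of delivery and context.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file: one row per participant, with the outcome, treatment
    # assignment (group), fidelity of the sessions received, and a
    # setting-level contextual factor.
    df = pd.read_csv("evaluation_data.csv")

    # The group:fidelity_score interaction asks whether the treatment effect
    # depends on how faithfully the intervention was delivered -- one way to
    # help distinguish implementation failure from theory failure.
    model = smf.ols("outcome ~ group * fidelity_score + setting_resources", data=df).fit()
    print(model.summary())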


Selected quantitative and qualitative methods may have to be adapted to conduct process evaluation of different health interventions. The adaptation is often necessary to accommodate variability in the nature of the intervention components and mode of delivery; the contextual factors operating at the individual client, interventionist, and setting levels; and the perspectives of stakeholder groups (including research ethics boards) on the methods to be used. For instance, direct observation of intervention delivery may not be relevant for technology‐based health interventions, or it may not be acceptable to clients (e.g. observation of the provision of intimate care for persons with dementia or of individual sessions addressing personal sexual issues). Furthermore, there are few established measures of the elements of process evaluation, and those available have to be adapted for consistency with the conceptual and operational definitions of relevant concepts proposed in the intervention theory. Because of this variability, general principles and methods for assessing each element of process evaluation are presented next.


13.3.1 Fidelity of Intervention Delivery by Interventionists


The importance, principles, methods, and checklists for monitoring and assessing the fidelity with which interventionists deliver the intervention were discussed in detail in Chapter 9. A few points are reiterated here.



  • In principle, assessment of fidelity, including adaptations specified in the intervention manual, should be done for each interventionist delivering each session of a health intervention during the evaluation study period. This is important for an accurate quantification of the level of fidelity and for reaching valid conclusions about the contribution of the intervention to the ultimate outcomes. For technology‐based health interventions, there is no clear guidance on how to determine fidelity; however, Verwey et al. (2016) describe fidelity as the extent to which the technology functions as planned, and Mohr et al. (2015) propose examining its workflow. The workflow specifies when particular elements of the intervention are provided through the respective technology, the sequence for delivering them, and their duration.
  • Assessment of fidelity is based on a clear specification of the intervention components, the content to be covered and activities to be performed in each session, the mode and dose of delivery, as well as allowable adaptations. This information guides the development of two measures for assessing fidelity, or adherence to the intervention manual. The first measure is a checklist reflecting the activities to be performed, sequentially, in each intervention session. The checklist is completed by observers and interventionists, and guides the collection of quantitative data on performance of the planned intervention activities (Sharma et al., 2017; Siddiqui et al., 2018). The checklist also includes a section to document variations or additional adaptations in intervention delivery and the conditions surrounding them. The second measure is an abridged version of this checklist, completed by participants. It contains the main content and activities planned for each session that are recognizable by participants, described in lay, simple, easy‐to‐understand terms. Table 13.2 illustrates an excerpt of such a measure used by participants to report on the fidelity of the sleep education component of the behavioral therapy for insomnia. Participants are asked to indicate whether or not they were exposed to the component, whether the content or topics were covered, and whether the activities were performed in each session. (A minimal sketch of how data from the two checklist versions can be scored follows this list.)
  • Four methods can be used to collect data on fidelity. The first is observation, which can be direct or indirect. In direct observation, the observer attends the delivery of the intervention while assuming a nonparticipant role (Rycroft‐Malone et al., 2018) and documenting the performance of the planned activities on the checklist (Dillon et al., 2018).
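
As noted above, data from the two checklist versions can be scored and compared. The sketch below is a minimal, hypothetical illustration: the checklist items (loosely echoing the sleep education example) and all responses are invented, and the simple percent-agreement statistic is only one of several options (e.g. kappa).

    # Hypothetical scoring sketch for the two fidelity checklists.

    # Observer checklist: True = the planned activity was performed.
    observer_checklist = {
        "reviewed sleep diary": True,
        "explained sleep hygiene": True,
        "set wake-time goal": False,
    }

    # Abridged participant version, covering the recognizable items.
    participant_checklist = {
        "reviewed sleep diary": True,
        "explained sleep hygiene": False,
        "set wake-time goal": False,
    }

    # Fidelity = proportion of planned activities performed in the session.
    fidelity = sum(observer_checklist.values()) / len(observer_checklist)
    print(f"Session fidelity: {fidelity:.0%}")

    # Percent agreement between observer and participant reports on shared items.
    shared = observer_checklist.keys() & participant_checklist.keys()
    agreement = sum(
        observer_checklist[i] == participant_checklist[i] for i in shared
    ) / len(shared)
    print(f"Observer-participant agreement: {agreement:.0%}")

In practice, such scores would be aggregated across sessions and interventionists, consistent with the earlier point that fidelity should be assessed for each interventionist delivering each session.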
