

QUANTITATIVE DESIGNS FOR PRACTICE SCHOLARSHIP


SUSAN WEBER BUCHHOLZ


INTRODUCTION


In a quantitative approach, research questions are explored using numerical data, which are analyzed and interpreted to provide an objective means of answering study questions and testing hypotheses. Quantitative methods involve identifying a research purpose that can be addressed with a measurable question, determining an appropriate study design, implementing the study, and conducting statistical analysis. Study implementation involves carrying out study procedures in a particular setting, collecting study data, and managing those data. Statistical analysis enables the investigator to determine whether findings were likely to occur by chance alone rather than due to a proposed relationship between variables or the effect of an intervention.


This chapter provides an overview of quantitative methods, including defining key variables, measurement, data types, sampling concepts, overall study design, and analysis considerations. Practical aspects of planning and implementing quantitative research in clinical settings are briefly reviewed, as are approaches to evaluating the clinical as well as statistical significance of quantitative findings.


STUDY VARIABLES


Every quantitative research study begins with identification of a topic area that can be addressed with a measurable study question. A clear question forms the foundation for study design: it is measurable, identifies the population being studied, and identifies key variables of interest about that population. Variables are the building blocks of the study. Studying different variables can help determine relationships and/or differences in a population. Suppose, for example, that a nurse researcher is interested in the effect of a smoking cessation program on oncology patients’ smoking habits. The question posed is: Will a smoking cessation program for oncology patients reduce daily tobacco use? Key variables in this question are the smoking cessation program and tobacco use. The researcher will also decide who comprises the sample, which is a subset of the population. The research question can be easily transformed into a hypothesis, which predicts the relationship between these two variables. In this example, the hypothesis would be: A smoking cessation program will reduce tobacco use in oncology patients. However, it is also important to determine the null hypothesis, which in this case would be: A smoking cessation program does not reduce tobacco use in oncology patients. Understanding how a question is linked to a hypothesis and null hypothesis makes it easier to later understand how statistical analysis is interpreted.
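To make this link concrete, the following minimal sketch (in Python, with entirely hypothetical data and variable names) shows how a statistical test is used to decide whether the null hypothesis can be rejected:

from scipy import stats

# Hypothetical daily cigarette counts after the study period:
# one group received the cessation program, the other usual care.
program_group = [4, 6, 3, 5, 2, 4, 5, 3]
control_group = [9, 7, 10, 8, 11, 9, 8, 10]

# Independent-samples t-test of the null hypothesis that the two
# group means are equal (i.e., the program has no effect).
t_stat, p_value = stats.ttest_ind(program_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# By convention, a p-value below .05 suggests the observed difference
# is unlikely to be due to chance alone, so the null hypothesis is
# rejected in favor of the research hypothesis.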



FIGURE 9.1 Mediator Model.


There are different types of variables that influence the subject of research, including independent, dependent, mediating, and moderating variables (Polit & Beck, 2020). Interrelationships among these types of variables are shown in Figures 9.1 and 9.2. An independent variable is what the researcher manipulates and is the presumed cause in the study. A dependent variable is the presumed effect in the study and may also be termed the outcome variable. A mediating variable is an intervening variable: a factor that lies on the pathway between the independent and dependent variables and facilitates, augments, or reduces the effects of the independent variable. A moderating variable influences or affects the relationship between variables: a factor that creates an interactive effect between the independent and dependent variables. In the previous example, the smoking cessation program would be the independent variable. Tobacco use would be the dependent variable, because the question posed suggests that the amount of tobacco used would depend on provision of the smoking cessation program.


In the proposed research study with the smoking cessation program, the astute clinician can draw on clinical experience, theory, and the existing literature to determine the roles that mediating and moderating variables may play in tobacco use by an oncology patient. For example, the patient’s confidence in their ability to quit, the benefits that they perceive from stopping smoking, and the barriers that they perceive could act as mediating variables. The effect of the smoking cessation program on tobacco use may also be mediated by the way in which the program is delivered. Say, for example, the program is provided as a web link that the patient can view at home on their phone or computer prior to a visit with their healthcare provider. The effectiveness of the program might be mediated by the number of times the patient views it or whether the patient views it with family members and discusses the information provided with the program. The participant’s age, type of cancer, and prognosis could influence or moderate the response of the patient to the program. The effects of the program might also be moderated by the patient’s mental state. Statistical analyses are used to analyze the effects of mediators and moderators on an intervention.



FIGURE 9.2 Moderator Model.


There are other variables that can have an effect on the outcomes of the study. These are extraneous variables (Gray et al., 2017; Vogt & Johnson, 2016). Extraneous variables represent conditions that are not part of the study design and that the researchers are not interested in measuring, but that can still affect the outcome of the study. If an extraneous variable is recognized before the study begins but the researchers are unable to control for it, or if the variable is not recognized until after the study has been initiated, it becomes a confounding variable. Randomization is useful in limiting the impact of confounding variables, and statistical techniques such as analysis of covariance can be used to account for the effects of such variables. If the design of the study and the statistical analysis cannot account for a confounding variable, then the researcher must note that as a limitation.
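As a concrete illustration, analysis of covariance can be run as a regression of the outcome on group assignment plus the covariate to be adjusted for. The sketch below uses the pandas and statsmodels libraries with hypothetical variable names and data:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: tobacco use outcome, group assignment, and age
# as a potential confounding variable to adjust for.
df = pd.DataFrame({
    "tobacco_use": [5, 8, 3, 10, 4, 9, 2, 11],
    "group": ["program", "control"] * 4,
    "age": [54, 61, 47, 66, 50, 59, 45, 63],
})

# ANCOVA as a linear model: the coefficient for group estimates the
# program effect after accounting for age.
model = smf.ols("tobacco_use ~ C(group) + age", data=df).fit()
print(model.summary())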


MEASUREMENT PROPERTIES


Measurement is the process of assigning numbers to variables. Methods used for measurement are referred to as instrumentation. Data can be obtained through physical measurements, biomedical data, use of questionnaires, and use of patient rating scales, diaries, or responses in structured interviews. Whatever instrumentation is used, major characteristics of measurement that determine its quality and utility include the properties of validity, reliability, sensitivity, and specificity (Gray et al., 2017; Polit & Beck, 2020; Waltz et al., 2017). In planning research, the objective is to select measurement methods that provide valid and reliable results that are also sensitive and specific.


Validity is a broad term that describes the degree to which an instrument truly measures what the researcher intends to measure. The researcher needs to take adequate time to review potential measures to determine whether a measure provides sufficient validity for their specific study question. One way to understand the scope of validity is to review it within four different categories: construct validity, statistical conclusion validity, internal validity, and external validity (Cook & Campbell, 1979).


Construct validity assesses how adequate an instrument is in measuring the main construct. Construct validity comprises translation validity and criterion validity.


Translation validity assesses how well the construct is translated into the instrument and includes face validity and content validity. Face validity is based on looking at an instrument and determining, according to an expert in the field, whether it measures what it is supposed to be measuring. Content validity assesses whether the content adequately represents the “universe” of what is being measured.


Criterion validity assesses the correlation between what the instrument measures and another criterion. There are multiple methods that can be used to assess criterion validity, including concurrent validity, predictive validity, convergent validity, and discriminant validity. With concurrent validity, a researcher assesses the correlation between the instrument and a related criterion. With predictive validity, a researcher assesses how well scores predict a score on another, future criterion. With convergent validity, a researcher uses different methods to measure the same attribute to assess whether they obtain the same results. With discriminant validity, a researcher assesses how well the construct being measured can be differentiated from a similar construct.


Statistical conclusion validity assesses how accurately conclusions about relationships and differences among variables reflect reality, based on the statistical evidence.


Internal validity assesses whether the independent variable actually caused the observed effect in the dependent variable. Because of the complex nature of human research, there are multiple threats to internal validity that should be addressed as needed. These threats to internal validity are discussed later in the chapter.


External validity is often spoken of as “generalization.” External validity addresses the degree to which the results of the study can be generalized to a specified population. There are several threats to external validity that should be addressed as needed; these are also discussed later in the chapter.


Reliability assesses measurement consistency. Reliability evaluates how dependable an instrument is in the measurement of a variable. If a measure is reliable, then it will report the same value for the same item measured successively. There are three categories of reliability: stability, internal consistency, and equivalence. The type of reliability that matters most is related to the study design. For example, in a repeated-measures design, test–retest reliability is important. In a study plan in which multiple observers collect data, it is important to be sure that different people using the same instrument in the same observation would obtain the same result, meaning the tool has good inter-rater reliability.


Stability assesses whether the instrument produces the same results repeatedly. Stability comprises two measures: test–retest reliability and parallel forms. Test–retest reliability refers to whether the same person using the instrument would get the same result over multiple uses. Parallel forms assess the consistency of the results of two tests constructed in the same way from the same content domain.


Internal consistency is applicable to measurement instruments such as questionnaires and multiple-item rating scales. Internal consistency shows the degree to which items on an instrument actually measure a single thing. If internal consistency is high, it suggests that the set of questionnaire items is measuring the same concept or variable. If internal consistency is low, it suggests that individual items within a questionnaire may be measuring different concepts. While interpretation of internal consistency values can be somewhat arbitrary, higher values reflect greater reliability. Three different measures of internal consistency are Cronbach’s alpha, split-half reliability, and Kuder–Richardson. Cronbach’s alpha is a correlation coefficient that estimates internal consistency and is the statistic most frequently used for this purpose; the higher the value of alpha, the greater the internal consistency. Split-half reliability assesses consistency when an item set is divided into two halves that are administered separately and then compared. Kuder–Richardson assesses inter-item consistency for dichotomously scored items by examining the adequacy of content sampling and the heterogeneity of the domain being sampled.
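For illustration, Cronbach’s alpha can be computed directly from its definition. The sketch below (hypothetical Likert-type responses; rows are respondents, columns are items) uses numpy:

import numpy as np

# Hypothetical responses: 5 respondents x 4 items on a 1-5 scale.
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores

# Cronbach's alpha: k/(k - 1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")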


Equivalence assesses the agreement between different observers using an instrument. Equivalence can be assessed with three measures: inter-rater reliability, the kappa statistic, and intraclass correlation. Inter-rater reliability is the degree to which different observers would achieve the same result using the instrument. If different people, using the same tool or making the same observation, come up with very different results, confidence in the measurement and the resulting accuracy of results would be questionable. The kappa statistic is an index that compares observed agreement with the agreement a researcher might expect by chance; the value of kappa can be interpreted as the proportion of agreement among raters that did not occur by chance alone. Intraclass correlation assesses consistency for a data set when there is more than one group of ratings present.
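As an illustration, the kappa statistic for two raters can be computed from the observed agreement and the agreement expected by chance. The sketch below uses hypothetical yes/no ratings of the same ten cases:

from collections import Counter

# Hypothetical ratings of the same 10 cases by two raters.
rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
n = len(rater1)

# Observed proportion of agreement.
p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n

# Agreement expected by chance, from each rater's marginal proportions.
c1, c2 = Counter(rater1), Counter(rater2)
p_expected = sum((c1[cat] / n) * (c2[cat] / n) for cat in c1)

# Kappa: agreement beyond chance, scaled by the maximum possible.
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"kappa = {kappa:.2f}")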


Sensitivity is defined as the number of true positive decisions divided by the number of actual positive cases (the true positives plus the false negatives). It indicates the ability to identify positive results and the degree to which a measurement method is responsive to changes in the variable of interest. Sensitivity also refers to the magnitude of a change that a measure will detect. The more sensitive a measure, the better it will reflect small changes and detect positive findings.


Specificity is the degree to which a result is indicative of a single characteristic and can accurately detect negative results. Mathematically, specificity is the number of true negative decisions divided by the number of actual negative cases (the true negatives plus the false positives). In clinical settings, a receiver operating characteristic (ROC) curve is often used to display sensitivity and specificity (Janssens & Martens, 2020).
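Both formulas translate directly into code. The sketch below uses hypothetical counts from a screening test evaluated against a gold standard:

# Hypothetical 2 x 2 results for a screening test vs. a gold standard.
true_positives = 80   # test positive, condition present
false_negatives = 20  # test negative, condition present
true_negatives = 90   # test negative, condition absent
false_positives = 10  # test positive, condition absent

# Sensitivity: proportion of actual positives correctly identified.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: proportion of actual negatives correctly identified.
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # 0.80
print(f"specificity = {specificity:.2f}")  # 0.90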


In addition to validity, reliability, sensitivity, and specificity, assessing a measurement’s feasibility is also important. There are a number of factors to consider when examining feasibility. Assessing the actual number of questions, including their length, allows the researcher to determine whether there is undue participant burden in answering them. Assessing the types of questions asked (e.g., multiple choice or open-ended) is important in determining whether the type of data received will be useful and meaningful in the analysis. It is useful to assess whether the participant is able to complete the questionnaire or whether someone else, such as the researcher or a family member, is required to complete it. The participant needs to know, prior to starting, how long it will take to complete the instrument. If the information is not given at the appropriate literacy level for the intended population, the researcher risks obtaining inaccurate answers. It is also important to assess whether an instrument needs to be translated; if an instrument has been translated, the researcher needs to document how it was assessed for language accuracy. Other important feasibility considerations include ease of administration and scoring, availability of the instrument, and the potential cost associated with using it.


There is increased emphasis on the inclusion of patient-reported outcomes (PROs) in clinical research and practice. PROs are the patient’s direct report of their own condition, behavior, or experience, without interpretation by a clinician or anyone else. The word “patient” is inclusive of not only patients, but also their families and caregivers, as well as consumers; the term also encompasses anyone who receives support services. There are four key PRO domains: health-related quality of life, symptoms and symptom burden, experience of care, and specific health behaviors (National Quality Forum, 2020). These domains are measured by a range of tools that assess how a patient reports their physical, mental, and social well-being. There are different measurement concepts to consider when using PROs (National Quality Forum, 2013). The PRO, as already noted, is the patient-reported outcome itself. A patient-reported outcome measure (PROM) is the instrument, tool, and/or single-item measure used to assess the outcome. A PRO-based performance measure (PRO-PM) is a performance measure based on PROM data.


In appraising research for application in the clinical setting, the quality of measurement methods is a factor that contributes to confidence in the overall results. The type of data generated by the measurements used will also influence the type of analysis that can be done.


UNDERSTANDING DATA


Data are the discrete values that result from measurement of study variables. Quantitative data fall into one of four types of numerical scale: nominal, ordinal, interval, or ratio. These scales form a hierarchy of data types, with the ratio scale at the highest level (Polit & Beck, 2020).


Nominal scale data assign a “name” to each observation. Examples of nominal scale variables encountered in the clinical setting are gender, diagnosis-related group (DRG) designation, diagnosis codes, and procedure codes. Nominal data are the lowest scale level.


Ordinal scales provide numerical data that have an “order.” The most common examples of ordinal scale data are Likert-type scales, in which one rates an item on a scale such as (a) “poor,” (b) “fair,” (c) “good,” (d) “very good,” or (e) “excellent.” Responses have an order in relationship to each other, such that “fair” is better than “poor,” “very good” is better than “good,” and so on. However, the distances between points in this order have no mathematical meaning. One cannot say that “good” is twice the value of “poor” or that “excellent” is twice the value of “very good.” Results from typical patient satisfaction questionnaires used in the clinical setting are examples of ordinal scale data. Ordinal data are above nominal data on the scale hierarchy.


Interval scales provide data in which distances on the numerical scale are equal. The distance between the numbers 1 and 2 is the same as the distance between 2 and 3. The Fahrenheit temperature scale is an example of an interval scale. The difference between a temperature of 100°F and 90°F is the same as the difference between a temperature of 90°F and 80°F. One cannot, however, say that 100° is exactly twice the temperature of 50°. Interval scales convey equal distances, not mathematical ratios. Patient ratings on a standard numerical pain symptom distress scale and patient severity scoring systems are examples of interval scales. Interval data are above ordinal data on the hierarchy of data scales.


Ratio scales provide the same information as interval scales, but in addition, have an absolute zero point. This means that mathematically, on a ratio scale, “20” is exactly twice “10,” and “30” is exactly three times “10.” Volume, length, weight, and time are examples of ratio scales.


The scale of the measurement has implications for the type of analysis that is most appropriate. The higher the scale level in the hierarchy described earlier, the broader the range of statistical procedures one can employ. The scale of measurement also determines whether nonparametric or parametric statistics are needed. Nonparametric statistics are appropriate when dealing with nominal and ordinal data. Interval and ratio scale data can be analyzed with parametric statistical procedures, which rest on the assumption that the sampling distribution of means approximates a normal curve.


There is some disagreement about the appropriate statistics to use with Likert-type rating scales and questionnaire scores. In some instruments, multiple Likert-type responses can be summed to provide total attribute scores or subscale scores. Some argue that the resulting data simply order responses, whereas others contend that such scores truly represent the magnitude of the variable being measured, and that items summed for analysis approach the interval scale level. It has been suggested that the statistical analysis used should be driven by the nature of the clinical question, rather than focused on the level of the data (Waltz et al., 2017). One way to determine the appropriateness of treating such data as interval level is to examine the frequency distribution of responses. If results approximate a normal curve, then parametric statistics may be appropriate.
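One way to carry out the distribution check described above is sketched below (hypothetical summed scale scores; the Shapiro-Wilk test from scipy is one common normality test):

from scipy import stats

# Hypothetical total scores from summing Likert items for 12 respondents.
total_scores = [14, 17, 15, 18, 16, 15, 19, 14, 16, 17, 15, 18]

# Shapiro-Wilk test of the null hypothesis that the scores were drawn
# from a normal distribution.
w_stat, p_value = stats.shapiro(total_scores)

if p_value > 0.05:
    # No strong evidence against normality; parametric procedures
    # may be reasonable for these summed scores.
    print(f"p = {p_value:.2f}: parametric statistics may be appropriate")
else:
    # Distribution departs from normal; nonparametric alternatives
    # (e.g., the Mann-Whitney U test) are safer.
    print(f"p = {p_value:.2f}: consider nonparametric statistics")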


SAMPLING


Given the many factors that can affect the dependent variable, quantitative research is designed to control, reduce, or account for as many of these factors as practical. This can be done by establishing study sample inclusion and exclusion criteria, including measurement and analysis of potential confounding variables, and through overall study design to reduce threats to validity. When a researcher wants to use randomization to select participants for their research, they can employ simple random sampling, where, for example, participants are selected using a random number table. In addition to simple random sampling, there are other sampling strategies that the researcher can use, including systematic, stratified, and cluster sampling techniques (Pedhazur & Schmelkin, 1991; Waltz et al., 2017). Systematic sampling chooses every kth case from a list of individuals within a population. Stratified sampling involves subdividing a population into groups that are homogeneous with respect to a specific characteristic and then randomly selecting participants from each group. With cluster sampling, inclusive units of a population are identified, and a sample of those units is then randomly selected.
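The sketch below (hypothetical patient identifiers and strata) illustrates simple random, systematic, and stratified selection in Python:

import random

random.seed(42)  # fixed seed so this illustration is reproducible

# Hypothetical sampling frame of 100 patient identifiers.
population = [f"patient_{i:03d}" for i in range(100)]

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, 10)

# Systematic sampling: every kth case from a random starting point.
k = len(population) // 10
start = random.randrange(k)
systematic_sample = population[start::k]

# Stratified sampling: random selection within homogeneous subgroups
# (here, two hypothetical strata defined by nursing unit).
strata = {"unit_A": population[:40], "unit_B": population[40:]}
stratified_sample = [p for group in strata.values()
                     for p in random.sample(group, 5)]

print(simple_sample, systematic_sample, stratified_sample, sep="\n")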


In planning to study the effects of an intervention on patient anxiety, one might exclude patients who have a major anxiety disorder or cognitive impairment to reduce variation in study findings that could be attributed to these variables. Other variables that experience suggests might affect results, such as age, physical symptoms, diagnosis, and so on, can be measured and included in analysis and interpretation of findings. Selecting a homogeneous sample of patients for a research study is helpful to reduce effects of confounding variables. However, studying a very narrowly defined group of patients limits generalizability of findings to other types of patients.


Sampling decisions can also introduce bias into the study, because the characteristics of the patients included can influence results. Selection of a random sample of patients would be expected to provide a sufficiently random distribution of characteristics to reduce such bias; however, random sampling is not always possible or practical in prospective clinical research.


STUDY DESIGN


The study design is the structure of the research; it defines the timing of observations and interventions and the strategies used to ensure objectivity. Study designs generally fall into one of three broad categories: nonexperimental, quasi-experimental, and experimental. With a nonexperimental design, the researcher observes and measures what is occurring for participants without changing or controlling their situation. With a quasi-experimental design, a researcher manipulates an independent variable but cannot or does not use random assignment. With an experimental design, the researcher delivers an intervention and also uses randomization in the assignment of participants to different groups.


NONEXPERIMENTAL DESIGNS


Nonexperimental designs are used when a researcher wants to observe and describe what is occurring with a group or groups of participants. The researcher does not intervene by manipulating the independent variable for these participants. Nonexperimental designs include descriptive, correlational, and observational designs.


Descriptive designs are used in studies where the researcher seeks to observe and describe what is occurring within a sample from a population. Descriptive studies provide information about commonalities and differences within a defined group of patients. These types of studies are used to identify the incidence or prevalence of conditions, describe a phenomenon, or evaluate relationships among the variables explored. The design may be cross-sectional, where data are collected at one specific point in time; retrospective, with information collected from prior participant records or other preexisting sources of data; prospective, with information collected after the study starts; or longitudinal, with data collection at time points in the future as well. For example, if a researcher wants to assess how many hospitalizations due to a diagnosis of respiratory syncytial virus occur in children at varying times throughout the year in the same set of hospitals, then they would prospectively and longitudinally track those data.


Correlational designs are used to evaluate relationships or associations among variables. Statistical analyses such as regression and other multivariate techniques can be used to evaluate variables that may predict or influence patient outcomes. Correlational designs can be viewed as a type of descriptive design, because this design does not attempt to evaluate causality or the effects of an intervention. For example, a researcher might explore the correlation between literacy levels and the ability to follow a specific procedure described on a handout. While the researcher can note whether or not there is a correlation, they cannot infer a causal relationship between those two variables.


Observational designs are used in studies where the outcomes of interest are analyzed between groups of patients that received different interventions. In this design, interventions are not controlled and patients may not be specifically sampled or assigned to treatments. Patients, interventions, and results are described and statistical analysis of data is used to determine differences in outcomes. There are two major types of observational designs: cohort and case control.


Cohort designs involve a group or groups of patients who have one or more common defined characteristics. Cohort designs may be prospective or retrospective. In a prospective cohort study, outcomes from a group of patients receiving an intervention, or exposed to some factor, are compared to outcomes from a group of patients who do not receive the intervention, receive a different intervention, or are not exposed. In a retrospective approach, patients receiving an intervention are compared to a previous group of patients who did not receive the intervention being tested; this is referred to as a group of historical controls. This type of design can be used when usual clinical care varies, or when standard approaches to care have changed over time. For example, routine testing procedures for methicillin-resistant Staphylococcus aureus (MRSA) have changed over time in hospitals, and a researcher could compare the number of MRSA diagnoses made under the different testing procedures used over time, or against a group of patients who were not routinely tested.


Case-control designs examine outcomes from a study group of patients who are exposed to an intervention, or who are identified by an outcome of interest, compared to a group of patients from the same source population who are not exposed, or who do not have the same outcome. In a matched case-control study, study patients are individually matched with patients who have the same characteristics that may influence outcomes (e.g., disease, age, and gender). This approach reduces the bias introduced by potential confounding variables in an attempt to ensure comparability between cases and controls. For example, if a researcher is studying osteoporosis treatments, they could match participants who received different treatments by age, gender, and history of a specific type of fracture.


In situations in which the independent variable cannot be manipulated, observational studies may be the only option to begin to answer clinical questions about the results of a specific intervention. In this case, it is important to measure and analyze potential intervening and extraneous variables insofar as possible, attempting to rule out other variables that may have produced the results. Clear sample inclusion and exclusion criteria are also essential. Although a single observational type of study cannot provide evidence of cause and effect, it does provide evidence that is stronger than case study level evidence, and if multiple observational studies provide the same conclusions, the synthesis of that information provides stronger evidence about the findings.


QUASI-EXPERIMENTAL STUDIES


Quasi-experimental designs can be constructed with single or multiple groups, and may involve pretest and posttest or posttest-only measurement (Cook & Campbell, 1979; Polit & Beck, 2020). There are several different quasi-experimental designs. Presented here are three of the more common ones: the one-group pretest–posttest design, the nonequivalent control group posttest-only design, and the nonequivalent control group pretest–posttest design.


With a one-group pretest–posttest design, measurement of the dependent variable is done prior to and then again after an intervention in the same subject or group. Changes in the dependent variable from pre- to postintervention in each subject are compared; in this design, the individual patient essentially functions as their own control. The difference between pre- and postintervention measurements is analyzed to evaluate the impact of the intervention. However, in this type of quasi-experimental design, without comparison to a group that did not have the intervention, there is no way to tell whether changes in the outcome would have occurred in any case. An example of this phenomenon is seen with the trajectory of patient anxiety levels in the course of treatment for cancer. A substantial body of research has shown that over time during active antitumor treatment, in the general population of patients with cancer, levels of anxiety tend to decline. If an intervention aimed at reducing anxiety is given to a single group of patients and anxiety is later measured, it is possible that any decline in anxiety observed would have occurred anyway, without the intervention.


With a nonequivalent control group, posttest-only design, the intervention is provided and the dependent variable is measured only after the intervention. With this design, the lack of a pretest raises the possibility that any differences between groups may be due to selection bias rather than an intervention effect. If a weight loss intervention is offered at a community center, and one group of participants receives the intervention while another group does not, and no pretest measures are done with the groups, a finding that the intervention group lost more weight may be related to other factors. For example, those who did not sign up for the study may not have known about it because they did not regularly attend the community center's health-related activities; they may also have been less invested in their health, or may have had other barriers that prevented them from losing weight.


With a nonequivalent control group, pretest–posttest design, participants are assigned to study groups by the researcher or by participant self-selection of the group in which they want to participate. The key limitation of this approach is that patients in the intervention and control groups may have different characteristics that influence the outcome. This can be addressed to some extent in analysis by examining differences between the groups at the onset of the study; however, the researcher may not have data to compare all of the potential characteristics that function as intervening variables. Another approach is to match patients in the intervention group to those in the control group, either through sample selection or in the analysis of results, based on key characteristics that might influence outcomes. This approach can help reduce unexplained variability; however, it is still unlikely that the researcher can account for every characteristic that might be important. Also, sufficient analysis within subgroups of patients based on these characteristics would require a large overall sample size in order to detect statistically and clinically significant differences.


EXPERIMENTAL DESIGNS


Experimental designs allow researchers to examine a cause-and-effect relationship between the independent and dependent variables (Gray et al., 2017; Polit & Beck, 2020). Participants are randomly assigned to a treatment or control group. This design allows for comparisons of outcomes that can overcome the limitations seen in a quasi-experimental design. Random assignment of participants in a randomized controlled design is the generally accepted way to spread the effects of intervening and extraneous variables across study groups by chance alone. An experimental design typically requires a highly structured and controlled environment. There are multiple procedures that must be carefully followed when conducting a randomized controlled trial, including allocation concealment, so that upcoming group assignments cannot be foreseen, and blinding, so that the research team members who perform the assessments are not aware of the group to which a participant has been assigned. Without random assignment there is a potential for selection bias, resulting in selection of particular types of patients to receive particular treatments. There are different types of experimental randomized designs. Presented here are four common ones: the basic pretest–posttest design, the multiple intervention design, the crossover design, and the factorial design.


The basic pretest–posttest design is used when change is being assessed. Participants are measured before and after delivery of an intervention, and are assigned to either a group that receives the intervention or a control group. This design is relatively simple for a randomized controlled trial and allows the researcher to assess differences in both groups; however, the pretest can potentially influence the outcome variable. This is a good design to use when usual care is adequate but the researcher wants to test an intervention that they have hypothesized will be more effective than usual care. For example, this design could be used to test the effectiveness of giving diabetic participants in the intervention group counseling sessions with a registered nurse in addition to educational handouts on lifestyle modification, compared to a control group that receives the usual care of educational handouts alone.


With a multiple intervention design, participants are again assessed pre- and postintervention, but different interventions or combinations of interventions are tested. When a researcher is examining multiple components of an intervention, or a complex intervention, this design allows the researcher to test the different intervention components; as a result, it may require a larger sample size than the basic pretest–posttest design. If a researcher has tested two different interventions that were bundled together and found significant changes, but is uncertain which of the interventions had an impact, then the researcher could disentangle those intervention components and test them separately. For example, this design could be used if a researcher had positive, significant results when testing a multi-component intervention that involved providing group sessions as well as phone calls to help decrease the stress of family members who were taking care of a family member with a chronic illness. If they are uncertain which component was most effective, they could use a multiple intervention design to test the group sessions and phone calls separately.


A crossover design is a type of experimental design used to provide an even stronger basis for comparison between study interventions or conditions. Even with well-controlled randomized designs, it is understood that human experience and outcomes can be affected by a huge number of personal, environmental, social, and other factors. In a crossover design, individual patients are exposed to both the control and experimental conditions, and results from each condition in all patients are compared. Subjects are usually randomly assigned to the sequence in which the intervention or control condition will be provided. One of the major threats in this design is contamination due to carryover effects of either treatment condition; the degree of threat depends on the nature of the intervention. In clinical trials involving medications, patients have a “wash-out” period between study conditions, so that the effects of one medication are no longer present before the patient is exposed to the other medication. With other types of interventions, the amount of time needed to eliminate intervention effects may not be clear, or washing out may not be possible.


In a factorial design, two or more interventions are tested simultaneously. This is a useful design for testing interaction effects, although a larger sample size may be required than when testing each intervention separately. A factorial design is particularly useful for a researcher who expects that combining different intervention components will have a synergistic effect. If, for example, a researcher has developed four different interventions for improving handwashing by healthcare professionals in the hospital, and theorizes that when they are combined in sets of two, one of the pairings is more likely to be effective than the others, then they could use a factorial design to test these different combinations.
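In the simplest case, crossing two interventions, each either delivered or withheld, yields four groups. A hypothetical 2 x 2 layout (the labels here are illustrative only, not from the example above) looks like this:

                              Intervention B given     Intervention B withheld
    Intervention A given      Group 1: A + B           Group 2: A only
    Intervention A withheld   Group 3: B only          Group 4: neither (control)

Comparing the four groups allows the main effect of each intervention, as well as their interaction, to be estimated within a single trial.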


The randomized controlled trial (RCT) is generally viewed as the most valid approach and the only true experimental design. Certain types of studies, such as pharmaceutical studies, consistently use an experimental design. It is important, however, to recognize that not all study questions can be subjected to this approach. Nonexperimental or quasi-experimental designs are often more practical in the clinical setting.


Representation of Study Designs


The study design can be visually represented in a variety of ways to show the sequence of observations and interventions. The X–O model uses an “X” to indicate the intervention, or independent variable, and an “O” to indicate an observation to measure the dependent (outcome) variable. Although this is not an exhaustive list, shown in Table 9.1 are the more commonly used examples of the X–O model that illustrate study designs.
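Table 9.1 is not reproduced here, but a few illustrations of the notation, using the common convention of one row per group and an R to indicate random assignment, are:

    One-group pretest–posttest design:
        O   X   O

    Nonequivalent control group, pretest–posttest design:
        O   X   O
        O       O

    Randomized controlled trial, pretest–posttest:
        R   O   X   O
        R   O       O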

