Understanding Quantitative Research Design



http://evolve.elsevier.com/Grove/practice/


A research design is the blueprint for conducting a study. It maximizes control over factors that could interfere with the validity of the study findings. Being able to identify the study design and to evaluate design flaws that might threaten the validity of findings is an important part of critically appraising studies. When you are conducting a study, the research design guides you in planning and implementing the study in a way to achieve accurate results. The control achieved through the quantitative study design increases the probability that your study findings are an accurate reflection of reality.


The term research design is used in two ways in the nursing literature. Some consider research design to be the entire strategy for the study, from identification of the problem to final plans for data collection. Others limit design to clearly defined structures within which the study is implemented. In this text, the first definition corresponds to the research methodology, and the second defines the research design. The research design of a study is the end result of a series of decisions you will make concerning how best to implement your study. The design is closely associated with the framework of the study. As a blueprint, the design is not specific to a particular study but rather is a broad pattern or guide that can be applied to many studies (see Chapter 11 for different types of quantitative research designs). Just as the blueprint for a house must be individualized to the house being built, so the design must be made specific to a study. Using the problem statement, framework, research questions, and clearly defined variables, you can map out the design to achieve a detailed research plan for collecting and analyzing data.


This chapter gives you a background for understanding the elements of a design and critically appraising the designs in published quantitative studies. You are introduced to (1) the concepts important to design; (2) design validity; and (3) the elements of a good design. You are also provided questions to assist you in selecting and implementing a design in a study. The chapter concludes with a discussion of mixed methods, which are relatively recent approaches used in nursing that combine quantitative and qualitative research designs.



Concepts Important to Design


Many terms used in discussing research design have special meanings within this context. An understanding of these concepts is essential for recognizing the purpose of a specific design. Some of the major concepts used in relation to research design are causality, bias, manipulation, control, and validity.



Causality


The first assumption you must make in examining causality is that causes lead to effects. Some of the ideas related to causation emerged from the logical positivist philosophical tradition. Hume, a positivist, proposed that the following three conditions must be met to establish causality: (1) there must be a strong relationship between the proposed cause and the effect; (2) the proposed cause must precede the effect in time; and (3) the cause has to be present whenever the effect occurs. Cause, according to Hume, is not directly observable but must be inferred (Kerlinger & Lee, 2000; Shadish, Cook, & Campbell, 2002).


A philosophical group known as essentialists proposed that two concepts must be considered in determining causality: necessary and sufficient. The proposed cause must be necessary for the effect to occur. (The effect cannot occur unless the cause first occurs.) The proposed cause must also be sufficient (requiring no other factors) for the effect to occur. This leaves no room for a variable that may sometimes, but not always, serve as the cause of an effect. John Stuart Mill, another philosopher, added a third idea related to causation. He suggested that, in addition to the preceding criteria for causation, there must be no alternative explanations for why a change in one variable seems to lead to a change in a second variable (Campbell & Stanley, 1963).


Causes are frequently expressed within the propositions of a theory. Testing the accuracy of these theoretical statements indicates the usefulness of the theory (Fawcett & Garity, 2009). A theoretical understanding of causation is considered important because it improves our ability to predict and, in some cases, to control events in the real world. The purpose of an experimental design is to examine cause and effect. The independent variable in a study is expected to be the cause, and the dependent variable is expected to reflect the effect of the independent variable.



Multicausality


Multicausality, the recognition that a number of interrelating variables can be involved in causing a particular effect, is a more recent idea related to causality. Because of the complexity of causal relationships, a theory is unlikely to identify every variable involved in causing a particular phenomenon. A study is unlikely to include every component influencing a particular change or effect.


Cook and Campbell (1979) have suggested three levels of causal assertions that one must consider in establishing causality. Molar causal laws relate to large and complex objects. Intermediate mediation considers causal factors operating between molar and micro levels. Micromediation examines causal connections at the level of small particles, such as atoms. Cook and Campbell (1979) used the example of turning on a light switch, which causes the light to come on (molar). An electrician would tend to explain the cause of the light coming on in terms of wires and electrical current (intermediate mediation). However, the physicist would explain the cause of the light coming on in terms of ions, atoms, and subparticles (micromediation).


The essentialists’ ideas of necessary and sufficient do not hold up well when one views a phenomenon from the perspective of multiple causation. The light switch may not be necessary to turn on the light if the insulation has worn off the electrical wires. Additionally, even though the switch is turned on, the light will not come on if the light bulb is burned out. Although this is a concrete example, it is easy to relate it to common situations in nursing.


Few phenomena in nursing can be clearly reduced to a single cause and a single effect. However, the greater the proportion of causal factors that can be identified and explored, the clearer the understanding of the phenomenon. This greater understanding improves our ability to predict and control. For example, currently nurses have a limited understanding of patients’ preoperative attitudes, knowledge, and behaviors and their effects on postoperative attitudes and behaviors. Nurses assume that high preoperative anxiety leads to less healthy postoperative responses and that providing information before surgery improves healthy responses in the postoperative period. Many nursing studies have examined this particular phenomenon. However, the causal factors involved are complex and have not been clearly delineated. The research evidence needed to reduce patients’ anxiety and improve their postoperative recovery is still evolving.



Probability


The original criteria for causation required that a variable should have an identified effect each time the cause occurred. Although this criterion may apply in the basic sciences, such as chemistry or physics, it is unlikely to apply in the health sciences or social sciences. Because of the complexity of the nursing field, nurses deal in probabilities. Probability addresses relative, rather than absolute, causality. From the perspective of probability, a cause will not produce a specific effect each time that particular cause occurs.


Reasoning changes when one thinks in terms of probabilities. The researcher investigates the probability that an effect will occur under specific circumstances. Rather than seeking to prove that A causes B, a researcher would state that if A occurs, there is a 50% probability that B will occur. The reasoning behind probability is more in keeping with the complexity of multicausality. In the example about preoperative attitudes and postoperative outcomes, nurses could seek to predict the probability of unhealthy postoperative patient outcomes when preoperative anxiety levels are high.
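A worked example may make probabilistic reasoning concrete. The following minimal sketch, written in Python with invented data, estimates the conditional probability of an unhealthy postoperative outcome given high preoperative anxiety; the variable names, values, and codings are hypothetical assumptions for illustration only.

```python
# Hypothetical illustration: estimating P(unhealthy outcome | high anxiety)
# from paired observations. All data and codings are invented.

high_anxiety = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]  # 1 = high preoperative anxiety
unhealthy    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 1 = unhealthy postoperative outcome

n_high = sum(high_anxiety)
n_both = sum(a and u for a, u in zip(high_anxiety, unhealthy))

print(f"Estimated P(unhealthy | high anxiety) = {n_both / n_high:.2f}")
# The claim is relative, not absolute: high anxiety raises the probability
# of an unhealthy outcome; it does not guarantee one.
```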



Causality and Nursing Philosophy


Traditional theories of prediction and control are built on theories of causality. The first research designs were also based on causality theory. Nursing science must be built within a philosophical framework of multicausality and probability. The strict senses of single causality and of “necessary and sufficient” are not in keeping with the progressively complex, holistic philosophy of nursing. To understand multicausality and increase the probability of being able to predict and control the occurrence of an effect, researchers need to comprehend both wholes and parts (Fawcett & Garity, 2009; Shadish et al., 2002).


Practicing nurses must be aware of the molar, intermediate mediational, and micromediational aspects of a particular phenomenon. A variety of differing approaches, reflecting both qualitative and quantitative research, are necessary to develop a knowledge base for nursing. Some see explanation and causality as different and perhaps opposing forms of knowledge. Nevertheless, the nurse must join these forms of knowledge, sometimes within the design of a single study, to acquire the knowledge needed for nursing practice (Creswell, 2009; Marshall & Rossman, 2011).



Bias


The term bias means to slant away from the true or expected. A biased opinion has failed to include both sides of the question. A biased witness is one who is strongly for or against one side of the situation. A biased scale is one that does not provide a valid measurement of a concept.


Bias is of great concern in research because of the potential effect on the meaning of the study findings. Any component of the study that deviates or causes a deviation from true measure leads to error and distorted findings. Many factors related to research can be biased: the researcher, the measurement methods, the individual subjects, the sample, the data, and the statistics (Grove, 2007; Thompson, 2002; Waltz, Strickland, & Lenz, 2010). In critically appraising a study, you need to look for possible biases in these areas. An important concern in designing a study is to identify possible sources of bias and eliminate or avoid them. If they cannot be avoided, you need to design your study to control them. Designs, in fact, are developed to reduce the possibilities of bias (Shadish et al., 2002).



Manipulation


Manipulation tends to have a negative connotation and is associated with one person underhandedly maneuvering a second person so that he or she behaves or thinks in the way the first person desires. Denotatively, to manipulate means to move around or to control the movement of something, such as manipulating a syringe to give an injection. The major role of nurses is to implement interventions that involve manipulation of events related to patients and their environment to improve their health. Manipulation has a specific meaning when used in experimental or quasi-experimental research because it is the manipulation or implementation of the study treatment or intervention. The experimental group receives the treatment or intervention during a study and the control group does not. For example, in a study on preoperative care, preoperative relaxation therapy might be manipulated so that the experimental group receives the treatment and the control group does not. In a study on oral care, the frequency of care might be manipulated to determine its effect on patient outcomes (Doran, 2011).


In nursing research, when experimental designs are used to explore causal relationships, the nurse must be free to manipulate the variables under study. For example, in a study of pain management, if the freedom to manipulate pain control measures is under the control of someone else, a bias is introduced into the study. In qualitative, descriptive, and correlational studies, the researcher does not attempt to manipulate variables. Instead, the purpose is to describe a situation as it exists (Marshall & Rossman, 2011; Munhall, 2012).




Study Validity


Study validity, a measure of the truth or accuracy of a claim, is an important concern throughout the research process. Study validity is central to building sound evidence for practice. Questions of validity refer back to the propositions from which the study was developed and address their approximate truth or falsity. Is the theoretical proposition from the study framework an accurate reflection of reality? Was the study designed to provide a valid test of the proposition? Validity is a complex idea that is important to the researcher and to those who read the study report and consider using the findings in their practice. Critical appraisal of research requires that we think through threats to validity and make judgments about how seriously these threats affect the integrity of the findings. Validity provides a major basis for making decisions about which findings are sufficiently valid to add to the evidence base for practice.


Shadish et al. (2002) have described four types of validity: statistical conclusion validity, internal validity, construct validity, and external validity. These types of design validity need to be critically appraised for strengths and possible threats in published studies. When conducting a study, you will be confronted with major decisions about the four types of design validity. To make these decisions, you must address a variety of questions about each type of validity; these types and their threats are discussed in the following sections.




Statistical Conclusion Validity


The first step in inferring cause is to determine whether the independent and dependent variables are related; you can determine this relationship (covariation) through statistical analysis. The second step is to identify differences between groups. Statistical conclusion validity is concerned with whether the conclusions about relationships or differences drawn from statistical analysis are an accurate reflection of the real world. There are reasons why false conclusions can be drawn about the presence or absence of a relationship or difference; these reasons are called threats to statistical conclusion validity and are described in the following sections.
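As a minimal sketch of the first step, testing covariation, the following Python code applies a Pearson correlation to two hypothetical sets of scores using SciPy; the data are invented for illustration.

```python
# Minimal sketch: testing whether two variables covary, the first step in
# inferring cause. All scores are hypothetical.
from scipy import stats

preop_anxiety  = [22, 35, 48, 51, 30, 44, 60, 27, 39, 55]
recovery_score = [80, 72, 60, 55, 75, 62, 50, 78, 68, 52]

r, p_value = stats.pearsonr(preop_anxiety, recovery_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A strong r with a small p supports covariation; it does not by itself
# establish causation or rule out extraneous variables.
```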



Low Statistical Power


Low statistical power increases the probability of concluding that there is no significant difference between samples when actually there is a difference (a Type II error, failing to reject a false null hypothesis; see Chapter 8 for discussion of the null hypothesis). A Type II error is most likely to occur when the sample size is small or when the power of the statistical test to determine differences is low (Aberson, 2010). The concept of statistical power and strategies to improve it are discussed in Chapters 15 and 21.
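A hedged sketch of a power calculation follows, using the statsmodels library; the effect size, alpha, and power values are conventional assumptions chosen for illustration, not recommendations for any particular study.

```python
# Sketch: sample size needed to detect a medium effect (Cohen's d = 0.5)
# with 80% power at alpha = .05 in a two-group comparison. All values are
# conventional assumptions for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Approximately {n_per_group:.0f} subjects per group")  # about 64

# Conversely, the power actually achieved with only 20 subjects per group:
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
print(f"Power with n = 20 per group: {achieved:.2f}")  # about 0.34
# Power this low means a high risk of a Type II error.
```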




Fishing and the Error Rate Problem


A serious concern in research is incorrectly concluding that a relationship or difference exists when it does not (Type I error, rejecting a true null). The risk of Type I error increases when the researcher conducts multiple statistical analyses of relationships or differences; this procedure is referred to as fishing. When fishing is used, a given portion of the analyses shows significant relationships or differences simply by chance. For example, the t-test is commonly used to make multiple statistical comparisons of mean differences in a single sample (Kerlinger & Lee, 2000). This procedure increases the risk of a Type I error because some of the differences found in the sample occurred by chance and are not actually present in the population. Multivariate statistical techniques have been developed to deal with this error rate problem (Goodwin, 1984). Fishing and error rate problems are discussed in Chapter 21.
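The inflation of the Type I error rate with multiple tests can be quantified. Assuming k independent tests each conducted at alpha = .05, the family-wise error rate is 1 − (1 − alpha)^k; the short Python sketch below also shows a Bonferroni correction, one common (if conservative) remedy.

```python
# Family-wise Type I error rate when k independent tests are each run
# at alpha = .05: FWER = 1 - (1 - alpha) ** k.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>2} tests: chance of at least one false positive = {fwer:.2f}")
# With 20 tests the chance is about 0.64, so fishing makes spurious
# "significant" results likely.

# Bonferroni correction: test each comparison at alpha / k to hold the
# family-wise error rate near the nominal alpha (simple but conservative).
k = 20
print(f"Bonferroni-adjusted alpha for {k} tests: {alpha / k:.4f}")
```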




Reliability of Intervention Implementation


Intervention reliability ensures that the research treatment is standardized and applied consistently each time it is administered in a study. In some studies, the consistent implementation of the treatment is referred to as intervention fidelity. Intervention fidelity often includes a protocol to standardize the elements of the treatment and a plan for training to ensure consistent implementation of the treatment protocol (Forbes, 2009; Santacroce, Maccarelli, & Grey, 2004). If the method of administering a research intervention varies from one person to another, the chance of detecting a true difference decreases. During the planning and implementation phases, researchers must ensure that the study intervention is provided in exactly the same way each time it is administered to prevent a threat to statistical conclusion design validity. Chapter 14 provides a detailed discussion of types of interventions, intervention development, and intervention fidelity.





Internal Validity


Internal validity is the extent to which the effects detected in the study are a true reflection of reality rather than the result of extraneous variables. Although internal validity should be a concern in all studies, it is addressed more commonly in relation to studies examining causality than in other studies. When examining causality, the researcher must determine whether the independent and dependent variables may have been affected by a third, often unmeasured, variable (an extraneous variable). Chapter 8 describes the different types of extraneous variables. The possibility of an alternative explanation of cause is sometimes referred to as a rival hypothesis (Shadish et al., 2002). Any study can contain threats to internal design validity, and these validity threats can lead to false-positive or false-negative conclusions. The researcher must ask, “Is there another reasonable (valid) explanation (rival hypothesis) for the finding other than the one I have proposed?” Threats to internal validity are described here.







Selection


Selection addresses the process by which subjects are chosen to take part in a study and how subjects are grouped within a study. A selection threat is more likely to occur in studies in which randomization is not possible (Kerlinger & Lee, 2000; Thompson, 2002). In some studies, people selected for the study may differ in some important way from people not selected for the study. In other studies, the threat is due to differences in subjects selected for study groups. For example, people assigned to the control group could be different in some important way from people assigned to the experimental group. This difference in selection could cause the two groups to react differently to the treatment; in this case, the treatment would not have caused the differences in group responses.
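Random assignment is the usual defense against a selection threat at the grouping step. A minimal sketch follows, assuming a simple two-group design; the subject identifiers are hypothetical.

```python
# Minimal sketch of simple random assignment to two groups, which protects
# against systematic differences between experimental and control subjects.
import random

subjects = [f"S{i:02d}" for i in range(1, 21)]  # 20 hypothetical subject IDs
random.seed(42)   # fixed seed only so the example is reproducible
random.shuffle(subjects)

experimental = subjects[:10]
control = subjects[10:]
print("Experimental group:", experimental)
print("Control group:     ", control)
```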





Diffusion or Imitation of Treatments


The control group may gain access to the treatment intended for the experimental group (diffusion) or a similar treatment available from another source (imitation). For example, suppose your study examined the effect of teaching specific information to hypertensive patients as a treatment and then measured the effect of the teaching on blood pressure readings and adherence to treatment protocols. Suppose that the experimental group patients shared the teaching information with the control patients (treatment diffusion). This sharing changed the behavior of the control group. The control group patients’ responses to the outcome measures may show no differences from those of the experimental group even though the teaching actually did make a difference (a Type II error, failing to reject a false null hypothesis).





Construct Validity


Construct validity examines the fit between the conceptual definitions and operational definitions of variables. Theoretical constructs or concepts are defined within the study framework (conceptual definitions). These conceptual definitions provide the basis for the operational definitions of the variables. Operational definitions (methods of measurement) must validly reflect the theoretical constructs. (Theoretical constructs are discussed in Chapter 7; conceptual and operational definitions of concepts and variables are discussed in Chapter 8.)


Does use of the measure permit a valid inference about the construct? By examining construct validity, we can determine whether the instrument actually measures the theoretical construct it purports to measure. The process of developing construct validity for an instrument often requires years of scientific work. When selecting methods of measurement, the researcher must determine the previous development of instrument construct validity (DeVon et al., 2007; Waltz et al., 2010). The threats to construct validity are related both to previous instrument development and to the development of measurement techniques as part of the methodology of a particular study. Threats to construct validity are described here.




Mono-Operation Bias


Mono-operation bias occurs when only one method of measurement is used to assess a construct. When only one method of measurement is used, fewer dimensions of the construct are measured. Construct validity greatly improves if the researcher uses more than one instrument (Waltz et al., 2010). For example, if anxiety were a dependent variable, more than one measure of anxiety could be used. It is often possible to apply more than one measurement of the dependent variable with little increase in time, effort, or cost.
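As an illustration of the alternative, agreement between two measures of the same construct can be checked with a simple correlation; the instruments implied and all scores below are hypothetical stand-ins.

```python
# Illustration: when two instruments measure the same construct (here,
# anxiety), their scores should covary, giving convergent evidence.
# Instrument names and data are hypothetical.
from scipy import stats

questionnaire_scores = [38, 52, 45, 61, 33, 57, 49, 42, 55, 36]
visual_analog_scores = [4.0, 6.5, 5.0, 7.5, 3.5, 7.0, 5.5, 4.5, 6.0, 4.0]

r, p = stats.pearsonr(questionnaire_scores, visual_analog_scores)
print(f"Agreement between the two anxiety measures: r = {r:.2f}, p = {p:.4f}")
```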



