The first assumption you must make in examining causality is that causes lead to effects. Some of the ideas related to causation emerged from the logical positivist philosophical tradition. Hume, a positivist, proposed that the following three conditions must be met to establish causality: (1) there must be a strong relationship between the proposed cause and the effect; (2) the proposed cause must precede the effect in time; and (3) the cause has to be present whenever the effect occurs. Cause, according to Hume, is not directly observable but must be inferred (Kerlinger & Lee, 2000; Shadish, Cook, & Campbell, 2002).

Cook and Campbell (1979) have suggested three levels of causal assertions that one must consider in establishing causality. Molar causal laws relate to large and complex objects. Intermediate mediation considers causal factors operating between molar and micro levels. Micromediation examines causal connections at the level of small particles, such as atoms. Cook and Campbell (1979) used the example of turning on a light switch, which causes the light to come on (molar). An electrician would tend to explain the cause of the light coming on in terms of wires and electrical current (intermediate mediation). However, the physicist would explain the cause of the light coming on in terms of ions, atoms, and subparticles (micromediation).

Traditional theories of prediction and control are built on theories of causality, and the first research designs were also based on causality theory. Nursing science, however, must be built within a philosophical framework of multicausality and probability. The strict senses of single causality and of "necessary and sufficient" are not in keeping with the progressively complex, holistic philosophy of nursing. To understand multicausality and increase the probability of being able to predict and control the occurrence of an effect, researchers need to comprehend both wholes and parts (Fawcett & Garity, 2009; Shadish et al., 2002).
Practicing nurses must be aware of the molar, intermediate mediational, and micromediational aspects of a particular phenomenon. A variety of differing approaches, reflecting both qualitative and quantitative research, are necessary to develop a knowledge base for nursing. Some see explanation and causality as different and perhaps opposing forms of knowledge. Nevertheless, the nurse must join these forms of knowledge, sometimes within the design of a single study, to acquire the knowledge needed for nursing practice (Creswell, 2009; Marshall & Rossman, 2011).

Bias is of great concern in research because of its potential effect on the meaning of the study findings. Any component of the study that deviates or causes a deviation from true measure leads to error and distorted findings. Many factors related to research can be biased: the researcher, the measurement methods, the individual subjects, the sample, the data, and the statistics (Grove, 2007; Thompson, 2002; Waltz, Strickland, & Lenz, 2010). In critically appraising a study, you need to look for possible biases in these areas. An important concern in designing a study is to identify possible sources of bias and eliminate or avoid them. If they cannot be avoided, you need to design your study to control them. Designs, in fact, are developed to reduce the possibilities of bias (Shadish et al., 2002).

In nursing research, when experimental designs are used to explore causal relationships, the nurse must be free to manipulate the variables under study. For example, in a study of pain management, if the freedom to manipulate pain control measures is under the control of someone else, a bias is introduced into the study. In qualitative, descriptive, and correlational studies, the researcher does not attempt to manipulate variables; instead, the purpose is to describe a situation as it exists (Marshall & Rossman, 2011; Munhall, 2012).

The validity of a study can be appraised by asking four questions, each linked to a type of validity:

1. Is there a relationship between the two variables? (statistical conclusion validity)
2. Given that there is a relationship, is it possibly causal from the independent variable to the dependent variable, or would the same relationship have been obtained in the absence of any treatment or intervention? (internal validity)
3. Given that the relationship is probably causal and is reasonably known to be from one variable to another, what are the particular cause-and-effect constructs involved in the relationship? (construct validity)
4. Given that there is probably a causal relationship from construct A to construct B, can this relationship be generalized across persons, settings, and times? (external validity) (Cook & Campbell, 1979; Shadish et al., 2002)

Low statistical power increases the probability of concluding that there is no significant difference between samples when actually there is a difference (a Type II error, failing to reject a false null hypothesis) (see Chapter 8 for discussion of the null hypothesis). A Type II error is most likely to occur when the sample size is small or when the power of the statistical test to determine differences is low (Aberson, 2010). The concept of statistical power and strategies to improve it are discussed in Chapters 15 and 21.

Most statistical tests have assumptions about the data collected, such as the following: (1) the data are at least at the interval level; (2) the sample was randomly obtained; and (3) the distribution of scores is normal. If these assumptions are violated, the statistical analysis may provide inaccurate results (Corty, 2007; Grove, 2007). The assumptions of statistical tests commonly conducted in nursing studies are provided in Chapters 23, 24, and 25.

A serious concern in research is incorrectly concluding that a relationship or difference exists when it does not (a Type I error, rejecting a true null hypothesis).
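These statistical conclusion validity threats can be made concrete with a small simulation. The sketch below is hypothetical, not from the chapter: the sample sizes, the 0.5 standard deviation effect, the 20 comparisons, and the 0.05 significance level are all assumptions chosen for illustration. It shows how a small sample often misses a real difference (a Type II error), and how running many statistical tests on data containing no real differences almost guarantees at least one spurious "significant" result (an inflated Type I error rate).

```python
# Hypothetical simulation of two statistical conclusion validity threats.
# All numbers (n = 10 per group, 0.5 SD effect, 20 comparisons, alpha = 0.05)
# are assumptions chosen for illustration, not values from the chapter.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
TRIALS = 1000

# Low power: a real difference of 0.5 SD exists, but with only
# n = 10 per group the t-test frequently fails to detect it (Type II error).
misses = 0
for _ in range(TRIALS):
    control = rng.normal(0.0, 1.0, size=10)
    treated = rng.normal(0.5, 1.0, size=10)  # a true effect is present
    _, p = ttest_ind(control, treated)
    if p >= 0.05:  # failed to reject a false null hypothesis
        misses += 1
print(f"Type II error rate at n = 10 per group: {misses / TRIALS:.2f}")

# Fishing: 20 comparisons of groups drawn from the SAME population.
# No real differences exist, yet by chance alone at least one comparison
# is usually "significant" at the 0.05 level (Type I error).
runs_with_false_positive = 0
for _ in range(TRIALS):
    if any(ttest_ind(rng.normal(size=30), rng.normal(size=30))[1] < 0.05
           for _ in range(20)):
        runs_with_false_positive += 1
print(f"Runs with at least one spurious result: "
      f"{runs_with_false_positive / TRIALS:.2f}")
```

On a typical run, the study misses the real moderate effect roughly four times out of five at n = 10 per group, while with 20 independent tests the chance of at least one spurious finding is about 1 − 0.95^20, roughly 0.64. This is why larger samples and corrections for multiple comparisons matter.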
The risk of a Type I error increases when the researcher conducts multiple statistical analyses of relationships or differences; this procedure is referred to as fishing. When fishing is used, a portion of the analyses shows significant relationships or differences simply by chance. For example, the t-test is commonly used to make multiple statistical comparisons of mean differences in a single sample (Kerlinger & Lee, 2000). This procedure increases the risk of a Type I error because some of the differences found in the sample occurred by chance and are not actually present in the population. Multivariate statistical techniques have been developed to deal with this error rate problem (Goodwin, 1984). Fishing and error rate problems are discussed in Chapter 21.

The technique of measuring variables must be reliable to reveal true differences. A measure is reliable if it gives the same result each time the same situation or factor is measured. If a scale is used to measure anxiety, it should give the same score (be reliable) if repeatedly given to the same person in a short time (unless, of course, repeatedly taking the same test causes anxiety to increase or decrease) (Waltz et al., 2010). Physiological measurement methods that consistently measure physiological variables are considered precise (Ryan-Wenger, 2010). For example, a thermometer would be precise if it showed the same temperature reading when tested repeatedly on the same patient within a limited time (see Chapter 16).

Intervention reliability ensures that the research treatment is standardized and applied consistently each time it is administered in a study. In some studies, the consistent implementation of the treatment is referred to as intervention fidelity. Intervention fidelity often includes a protocol to standardize the elements of the treatment and a plan for training to ensure consistent implementation of the treatment protocol (Forbes, 2009; Santacroce, Maccarelli, & Grey, 2004).
If the method of administering a research intervention varies from one person to another, the chance of detecting a true difference decreases. During the planning and implementation phases, researchers must ensure that the study intervention is provided in exactly the same way each time it is administered, to prevent a threat to statistical conclusion validity. Chapter 14 provides a detailed discussion of types of interventions, intervention development, and intervention fidelity.

Internal validity is the extent to which the effects detected in the study are a true reflection of reality rather than the result of extraneous variables. Although internal validity should be a concern in all studies, it is addressed more commonly in relation to studies examining causality than in other studies. When examining causality, the researcher must determine whether the independent and dependent variables may have been affected by a third, often unmeasured, variable (an extraneous variable). Chapter 8 describes the different types of extraneous variables. The possibility of an alternative explanation of cause is sometimes referred to as a rival hypothesis (Shadish et al., 2002). Any study can contain threats to internal validity, and these threats can lead to false-positive or false-negative conclusions. The researcher must ask, "Is there another reasonable (valid) explanation (rival hypothesis) for the finding other than the one I have proposed?" Threats to internal validity are described here.

Selection addresses the process by which subjects are chosen to take part in a study and how subjects are grouped within a study. A selection threat is more likely to occur in studies in which randomization is not possible (Kerlinger & Lee, 2000; Thompson, 2002). In some studies, people selected for the study may differ in some important way from people not selected for the study.
In other studies, the threat is due to differences in the subjects selected for the study groups. For example, people assigned to the control group could differ in some important way from people assigned to the experimental group. This difference in selection could cause the two groups to react differently to the treatment; in this case, the treatment would not have caused the differences in group responses.

Construct validity examines the fit between the conceptual definitions and operational definitions of variables. Theoretical constructs or concepts are defined within the study framework (conceptual definitions). These conceptual definitions provide the basis for the operational definitions of the variables. Operational definitions (methods of measurement) must validly reflect the theoretical constructs. (Theoretical constructs are discussed in Chapter 7; conceptual and operational definitions of concepts and variables are discussed in Chapter 8.) Does use of the measure support a valid inference about the construct? By examining construct validity, we can determine whether the instrument actually measures the theoretical construct it purports to measure. The process of developing construct validity for an instrument often requires years of scientific work. When selecting methods of measurement, the researcher must determine the extent to which the instrument's construct validity has previously been established (DeVon et al., 2007; Waltz et al., 2010). The threats to construct validity are related both to previous instrument development and to the development of measurement techniques as part of the methodology of a particular study. Threats to construct validity are described here.
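The selection threat described above also lends itself to a quick simulation. The sketch below is hypothetical, not from the chapter: the baseline anxiety scores are invented, and the assumption that more anxious people are more likely to volunteer for treatment is made up purely to show how self-selection creates baseline group differences that random assignment avoids.

```python
# Hypothetical sketch of the selection threat. The anxiety scores and the
# volunteering mechanism are invented for illustration; they are not data
# or methods from the chapter.
import numpy as np

rng = np.random.default_rng(7)

# Simulated baseline anxiety scores for 200 potential subjects.
anxiety = rng.normal(50, 10, size=200)

# Random assignment: each subject has a 50/50 chance of either group,
# so the groups should be comparable at baseline.
in_treatment = rng.random(200) < 0.5
random_gap = anxiety[in_treatment].mean() - anxiety[~in_treatment].mean()

# Self-selection (assumed mechanism): the more anxious a person is,
# the more likely he or she is to volunteer for the treatment group.
scaled = (anxiety - anxiety.min()) / (anxiety.max() - anxiety.min())
volunteered = rng.random(200) < scaled
selection_gap = anxiety[volunteered].mean() - anxiety[~volunteered].mean()

print(f"Baseline anxiety gap with random assignment: {random_gap:+.1f}")
print(f"Baseline anxiety gap with self-selection:    {selection_gap:+.1f}")
```

With random assignment the baseline gap hovers near zero, whereas the self-selected "treatment" group starts out noticeably more anxious. Any post-treatment difference between self-selected groups could therefore reflect who volunteered rather than the treatment itself, which is exactly the kind of rival explanation described above.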
Understanding Quantitative Research Design
Concepts Important to Design
Causality
Multicausality
Causality and Nursing Philosophy
Bias
Manipulation
Study Validity
Statistical Conclusion Validity
Low Statistical Power
Violated Assumptions of Statistical Tests
Fishing and the Error Rate Problem
Reliability of Measures
Reliability of Intervention Implementation
Internal Validity
Selection
Construct Validity
