Language and Thinking Processes
Key terms
Bias
Conceptual definition
Context specificity
Control
Dependent variable
Design
Emic
Etic
External validity
Independent variable
Instrumentation
Internal validity
Intervening variable
Manipulation
Operational definition
Rigor
Statistical conclusion validity
Validity
Variable
We now are ready to explore the language and thinking of researchers who use the range of designs relevant to health and human service inquiry. As discussed in previous chapters, significant philosophical differences exist between experimental-type and naturalistic research traditions. Experimental-type designs are characterized by thinking and action processes based in deductive logic and a positivist paradigm in which the researcher seeks to identify a single reality through systematized observation. This reality is understood by reducing it to its parts, observing and measuring the parts, and then examining the relationship among these parts. Ultimately, the purpose of research in the experimental-type tradition is to predict what will occur in one part of the universe by knowing and observing another part.
Unlike experimental-type design, the naturalistic tradition is characterized by multiple ontological and epistemological foundations. However, naturalistic designs share common elements that are reflected in their languages and thinking processes. Researchers in the naturalistic tradition base their thinking in inductive and abductive logic and seek to understand phenomena within the context in which they are embedded. Thus, the notion of multiple realities and the attempt to characterize holistically the complexity of human experience are two elements that pervade naturalistic approaches.
Mixed-method designs draw on and integrate the language and thinking of both traditions.
Now we turn to the philosophical foundations, language, and criteria for scientific rigor of the experimental-type and naturalistic traditions. “Rigor” is a term used in research to refer to the procedures that enhance, and that are used to judge, the integrity of the research design. As you read this chapter, compare and contrast the two traditions and consider the application and integration of both.
Experimental-type language and thinking processes
Within the experimental-type research tradition, there is consensus about the adequacy and scientific rigor of action processes. Thus, all designs in the experimental-type tradition share a common language and a unified perspective as to what constitutes an adequate design. Although there are many definitions of research design across the range of approaches, all types of experimental-type research share the same fundamental elements and a single, agreed-on meaning (Box 8-1).
In experimental-type research, design is the plan or blueprint that specifies and structures the action processes of collecting, analyzing, and reporting data to answer a research question. As Kerlinger stated in his classic definition, design is “the plan, structure, and strategy of investigation conceived so as to obtain answers to research questions and to control variance.”1 “Plan” refers to the blueprint for action or the specific procedures used to obtain empirical evidence. “Structure” represents a more complex concept and refers to a model of the relationships among the variables of a study. That is, the design is structured in such a way as to enable an examination of a hypothesized relationship among variables. This relationship is articulated in the research question. The main purpose of the design is to structure the study so that the researcher can answer the research question.
In the experimental-type tradition, the purpose of the design is to control variance, that is, to restrict or control extraneous influences on the study. By exerting such control, the researcher can state with a degree of statistical assuredness that study outcomes are a consequence either of the manipulation of the independent variable (e.g., true experimental design) or of that which was observed and analyzed (e.g., nonexperimental design). In other words, the design provides a degree of certainty that an investigator’s observations are not haphazard or random but reflect what is considered to be a true and objective reality. The researcher is thus concerned with developing the optimal design, one that eliminates or controls what researchers refer to as disturbances, variances, extraneous factors, or situational contaminants. The design controls these disturbances or situational contaminants through the implementation of systematic procedures and data collection efforts, as discussed in subsequent chapters. The purpose of imposing control and restrictions on observations of natural phenomena is to ensure that the relationships specified in the research question(s) can be identified, understood, and ultimately predicted.
The element of design is what separates research from the everyday observations, thinking, and action processes in which each of us engages. Design instructs the investigator to “do this” or “don’t do that.” It provides a mechanism of control to ensure that data are collected objectively, in a uniform and consistent manner, with minimal investigator involvement or bias. The important points to remember are that the investigator remains separate from and uninvolved with the phenomena under study (i.e., to control one important potential source of disturbance or situational contaminant) and that systematic procedures and data collection provide mechanisms to control and eliminate bias.
Sequence of Experimental-Type Research
Design is pivotal in the sequence of thoughts and actions of experimental-type researchers (Figure 8-1). It stems from the thinking processes of formulating a problem statement, a theory-specific research question that emerges from scholarly literature, and hypotheses or expected outcomes. Design dictates the nature of the action processes of data collection, the conditions under which observations will be made, and, most important, the type of data analyses and reporting that will be possible.
Do you recall our previous discussion on the essentials of research? In that discussion, we illustrated how a problem statement indicates the purpose of the research and the broad topic the investigator wants to address. In experimental-type inquiry the literature review guides the selection of theoretical principles and concepts of the study and provides the rationale for a research project. This rationale is based on the nature of research previously conducted and the level of theory development for the phenomena under investigation. The experimental-type researcher must develop a literature review in which the specific theory, scope of the study, research questions, concepts to be measured, nature of the relationship among concepts, and measures that will be used in the study are discussed and supported. Thus, the researcher develops a design that builds on both the ideas that have been formulated and the actions that have been conducted in ways that conform to the rules of scientific rigor.
The choice of design is shaped not only by the literature and level of theory development but also by the specific question asked and by resources or practical constraints, such as access to target populations and monetary and staffing considerations. There is nothing inherently good or bad about a design. Every research study design has its particular strengths and weaknesses. The adequacy of a design is based on how well it answers the research question that is posed; that is the most important criterion for evaluating a design. If it does not answer the research question, then the design, regardless of how rigorous it may appear, is not appropriate. It is also important to identify and understand the relative strengths and weaknesses of each design element. Methodological decisions are purposeful and should be made with full recognition of what is gained and what is not by implementing each design element.
Structure of Experimental-Type Research
Experimental-type research has a well-developed language that sets clear rules and expectations for the adequacy of design and research procedures. As you see in Table 8-1, nine key terms structure experimental-type research designs. Let us examine the meaning of each.
TABLE 8-1
Key Terms in Structuring Experimental-Type Research
Term | Definition |
Concept | Symbolically represents observation and experience |
Construct | Represents a model of relationships among two or more concepts |
Conceptual definition | Concept expressed in words |
Operational definition | How the concept will be measured |
Variable | Operational definition of a concept assigned numerical values |
Independent variable | Presumed cause of the dependent variable |
Intervening variable | Phenomenon that has an effect on study variables |
Dependent variable | Phenomenon that is affected by the independent variable or is the presumed effect or outcome |
Hypothesis | Testable statement that indicates what the researcher expects to find |
Concepts
A concept is defined as the words or ideas that symbolically represent observations and experiences. Concepts are not directly observable; rather, what they describe is observed or experienced. Concepts are “(1) tentative, (2) based on agreement, and (3) useful only to the degree that they capture or isolate something significant and definable.”2 For example, the terms “grooming” and “work” are both concepts that describe specific observable or experienced activities in which people engage on a regular basis. Other concepts, such as “personal hygiene” or “sadness,” have various definitions, each of which can lead to the development of different assessment instruments to measure the same underlying concept.
Constructs
As discussed in Chapter 6, constructs are theoretical creations that are based on observations but that cannot be observed directly or indirectly.3 A construct can only be inferred and may represent a larger category with two or more concepts or constructs.
Definitions
The two basic types of definition relevant to research design are conceptual definitions and operational definitions. A conceptual definition, or lexical definition, stipulates the meaning of a concept or construct with other concepts or constructs. An operational definition stipulates the meaning by specifying how the concept is observed or experienced. Operational definitions “define things by what they do.”
Variables
A variable is a concept or construct to which numerical values are assigned. By definition, a variable must have more than one value even if the investigator is interested in only one condition.
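To make these terms concrete, the following is a minimal sketch in Python of how a concept might be given a conceptual definition, an operational definition, and then treated as a variable. The concept (“self-care difficulty”), the items, and the scoring rule are hypothetical and are included only for illustration.

```python
# Hypothetical example: operationalizing the concept "self-care difficulty."
# Conceptual definition (in words): the degree of difficulty a person reports
# in carrying out basic self-care activities.
# Operational definition: the sum of self-rated difficulty on three items,
# each scored 0 (no difficulty) to 3 (unable to do).

SELF_CARE_ITEMS = ["bathing", "dressing", "grooming"]

def self_care_difficulty_score(item_ratings: dict) -> int:
    """Return one respondent's total difficulty score (possible range 0-9)."""
    for item in SELF_CARE_ITEMS:
        rating = item_ratings[item]
        if not 0 <= rating <= 3:
            raise ValueError(f"Rating for {item} must be 0-3, got {rating}")
    return sum(item_ratings[item] for item in SELF_CARE_ITEMS)

# The variable takes on different numerical values across respondents.
respondents = [
    {"bathing": 0, "dressing": 1, "grooming": 0},
    {"bathing": 2, "dressing": 3, "grooming": 2},
]
scores = [self_care_difficulty_score(r) for r in respondents]
print(scores)  # [1, 7]
```

Here the conceptual definition is expressed in words (in the comments), the scoring function serves as the operational definition, and the resulting score is a variable because it takes on more than one numerical value across respondents.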
There are three basic types of variables: independent, intervening, and dependent. An independent variable “is the presumed cause of the dependent variable, the presumed effect.”4 The independent variable, which almost always precedes the dependent variable and may have a potential influence on it, is also referred to as the “predictor variable.” A dependent variable (also referred to as the “outcome” or “criterion” variable) is the phenomenon that the investigator seeks to understand, explain, or predict. An intervening variable (also called a “confounding” or an “extraneous” variable) is a phenomenon that has an effect on the study variables but may or may not be the object of the study.
Investigators treat intervening variables differently depending on the research question. For example, an investigator may only be interested in examining the relationship between the independent and dependent variables and may thus statistically control or account for an intervening variable. The investigator would then examine the relationship after statistically “removing” the effect of one or more potential intervening variables. However, the investigator may want to examine the effect of an intervening variable on the relationship between the independent and dependent variables. For this question, the researcher would employ statistical techniques to determine the interrelationships.
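As a hedged illustration of what “statistically removing” an intervening variable can look like in practice, the sketch below fits an ordinary least squares model with and without a covariate using the statsmodels library. The variable names, the simulated data, and the linear model are assumptions made for demonstration only; they are not drawn from any particular study.

```python
# Minimal sketch: examining an independent-dependent relationship while
# statistically controlling for a potential intervening variable.
# Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(25, 75, n)               # potential intervening variable
hours = rng.uniform(0, 40, n)              # independent variable (hours employed)
difficulty = 10 - 0.1 * hours + 0.05 * age + rng.normal(0, 1, n)  # dependent variable

df = pd.DataFrame({"age": age, "hours": hours, "difficulty": difficulty})

# Unadjusted model: independent variable only.
unadjusted = smf.ols("difficulty ~ hours", data=df).fit()

# Adjusted model: the intervening variable is entered as a covariate, so the
# coefficient for hours reflects the relationship after "removing" the effect of age.
adjusted = smf.ols("difficulty ~ hours + age", data=df).fit()

print(unadjusted.params["hours"], adjusted.params["hours"])
```

Comparing the coefficient for the independent variable across the two models shows how the estimated relationship changes once the intervening variable is taken into account.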
Hypotheses
A hypothesis is defined as a testable statement that indicates what the researcher expects to find, given the theory and level of knowledge in the literature. A hypothesis is stated in such a way that it will either be verified or falsified by the research process. The researcher can develop either a directional or a nondirectional hypothesis (see Chapter 7). In a directional hypothesis, the researcher indicates whether she or he expects to find a positive relationship or an inverse relationship between two or more variables. A positive relationship is one in which both variables increase or decrease together to a greater or lesser degree.
In an inverse relationship, the variables are associated in opposite directions (i.e., as one increases, the other decreases). An inverse relationship may be expressed in a hypothesis similar to the following: the greater a person’s level of employment, the less difficulty he or she will have with self-care.
In this statement, the expectation is that as the variable “employment” increases, difficulty in self-care will decrease.
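One simple way to express the direction of a hypothesized relationship is the sign of a correlation coefficient. The sketch below uses hypothetical values of the two variables named above; a negative Pearson coefficient is consistent with the inverse relationship stated in the hypothesis, whereas a positive coefficient would support a hypothesis of a positive relationship.

```python
# Minimal sketch: checking the direction of a hypothesized relationship.
# Data are hypothetical; a negative coefficient indicates an inverse relationship.
from scipy.stats import pearsonr

hours_employed = [0, 5, 10, 20, 30, 40]
self_care_difficulty = [8, 7, 6, 4, 3, 2]

r, p_value = pearsonr(hours_employed, self_care_difficulty)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # r is negative: an inverse relationship
```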
Now that we have defined nine key terms that are essential to experimental-type designs, let us review the way they are actually used in research. Experimental-type research questions narrow the scope of the inquiry to specific concepts, constructs, or both. These concepts are then defined through the literature and are operationalized into variables that will be investigated descriptively, relationally, or predictively. The hypothesis establishes an equation or structure by which independent and dependent variables are examined and tested.
Plan of Design
The plan of an experimental-type design requires a set of thinking processes in which the researcher considers five core issues: bias, manipulation, control, validity, and reliability.
Bias
Bias is defined as the potential unintended or unavoidable effect on study outcomes. When bias is present and unaccounted for, the investigator may not be able to fully understand whether the study findings are accurate or reflect sources of bias and thus may misinterpret the results. Many factors can cause bias in a study (Box 8-2). We already discussed one source of bias, the “intervening variable.”
Another source is instrumentation. This involves the ways in which data are obtained in experimental-type approaches. The two major sources of bias from instrumentation are inappropriate data collection procedures and inadequate questions.
Data collection procedures that are applied inconsistently or inappropriately introduce bias into the study.
Interview questions that elicit a socially correct response and questions that are vague, unclear, or ambiguous introduce a source of bias into the study design.
Sampling is another major source of bias.