Janice M. Morse
THE MYTH OF A THEORY BASE
Theories, derived from qualitative research, languish in libraries and in journals, separated from patient care by the infamous research–practice gap.
—Morse (2016, Chapter 1, p. 1)
I begin this chapter with a complaint from Chapter 1 (p. 1). Why do our theories languish, unadmired, after we have gone to so much trouble? Only last week an editor, who was accepting one of our articles, wrote:
The Implications section is also in need of further clarification and enlargement. You suggest that empathy should be withheld by health care providers? This is counterintuitive and needs further exploration. Most of our readers are going to be shocked by this assertion, and I think that the succinct discussion of it in your manuscript does not do the assertion justice.
Dear Editor: This is old knowledge that should already be known, and it illustrates the point: Publishing is not enough to ensure dissemination. I made the argument in two articles in 1992. The first, Morse et al. (1992), was published in an internationally distributed U.S. journal [Journal of Nursing Scholarship] and reprinted in French in a second journal; the second article (Morse, Bottorff, Anderson, O’Brien, & Solberg, 1992) was published on the other side of the Atlantic, in the Journal of Advanced Nursing (JAN), and in 2006 got a second chance when it was reprinted in JAN’s 30th Anniversary Issue as the “Best of ….”
According to Google Scholar (December, 2015), the first article has been cited 176 times, the second has been cited 144 times, and the 2006 reprint 69 times. But apparently that is not enough to move it generally into practice.
It seems that there is a vast amount of knowledge “out there”—more than we can ever absorb. Whereas we used to complain that clinicians did not read, now it seems that it is not possible even for academics (or even for students, who read all the time) to keep up to date. We have far too much research for traditional ways of dissemination to actually disseminate it. Publishing in a “good” journal (high impact, targeted to the right audience) is not going to be adequate to make a noticeable impact.
How much evidence is needed?
Why is it that the public seems to already know about empathy, that which we have not yet articulated formally in our practice? Even in comic pages we find examples of the nontherapeutic use of empathic touch (Figure 39.1).
I am not certain that I have the answer. But in this chapter we consider the important subject of dissemination, along with “worth.” In this volume (Chapter 23) we considered the question of proof, of knowing what and how we know, so that we can decide whether the materials we are researching should be prioritized for practice. But there is still a problem if our solid and useful work is not reaching the bedside. In this chapter, we consider why our present system of researchers “presenting and publishing” is not working, and why our models of dissemination in high-impact journals have virtually no clinical impact.
DISSEMINATION OR DISSIPATION?
Green (2001) noted that for articles in public health, the process of adoption into practice took 17 years. This time frame in itself is interesting, as you and I know that students are taught not to search more than 10 years back in the indexes (and some say only 5 years). A minor problem here is that such instructions borrow the values of medicine and bench science (where knowledge is incremental and published mainly in journals) and apply them to the social sciences (where knowledge expands laterally, does not “expire,” and is often found in longer monographs). The things that interest social scientists (e.g., interactions and social behaviors) are much the same today as they were 35 years ago, so let us read more Goffman. If we believe that anything published more than 10 years ago is of no use, then we, as a discipline, will be in great trouble, reinventing the wheel and, like busy hamsters, running nowhere. Worse, we will be ignoring and devaluing the contributions of those who came before.
Now, reconsider the numbers of articles published in 2014. Of course, these were not equally “valuable” or “useful.” Which ones were more useful or needed, either clinically or for research or education? Which ones made a greater contribution? Were they data-based (and what kind of data?) or critical, or summaries, or theoretical? Did they address education, clinical issues, populations, health or illness, care practices, evaluation, policy, or ethics?
We have to select what we read, and what we research, and what we cite.
Some publications “take off” and are highly cited, whereas others dissipate and remain unnoticed and forgotten. And, as an editor I cannot tell which ones will make a difference, nor predict how influential an article will be.
THE PROBLEM OF DISSEMINATION
Why Dissemination Fails
We drill into our students that we should disseminate by publishing articles in high-impact journals (“publish once, for goodness’ sake”; “do not plagiarize yourself,” we tell them), and then wait for the miraculous adoption processes of dissemination to occur. Other researchers may be excited to read your insights and immediately apply for grants to build on your work. Some may cite you; skeletal synopses may appear in clinical journals for those too busy to read the original; your work may take on a distorted life of its own on the Internet and be tweeted; and students may immediately pick up your work. Unfortunately, this “picking up” is not likely.
Why is this unrealistic? How many articles appeared in nursing journals in 2015?
In Why Ideas Fail, Mehta (2014, pp. 26–29) claims that innovators do not attend to the human factors inherent in change. Although his list was developed for innovations in engineering, I apply it here to the changes we attempt as we publish ideas in nursing.
We must ask ourselves several hard questions about the proposed change.
• Is it desirable? Change, and changing the way one does something, is simply a bother, a “pain in the neck.” Habit is comfortable. Consider the difficulties infection control nurses have in getting people to wash their hands. Thus, to have one’s ideas accepted, those who are adopting the change must really want to do it, whatever “it” is.
• Does it meet every need? The recommended innovation may not be perceived as a good fit for every clinical situation, or for the client’s every need, in the new setting.
• Is it pretty? The innovation has to be desired, attractive, and appealing not only to clinicians, but also to patients.
• Who is going to “champion” it? A change must have a champion: someone prepared to teach, to incorporate it into the policy manual, and to supervise and evaluate.
• Is it socially acceptable? The innovation must fit into the regulations regarding patient care, such as privacy regulations, dignity and respect, as well as Food and Drug Administration Guidelines, and other such regulations.
• Is it feasible? It will not be a feasible intervention if it increases patient caseload or has a negative impact on the time of patient care.
• Is it a strong or powerful enough intervention to make a difference? I call this the “dose theory of dissemination.” Borrowing a term from experimental design, “dose” here means ensuring that the intervention is adequate to make a measurable difference when evaluated. The intervention must be strong enough to make a difference; otherwise, when it is trialed or piloted, it may inaccurately appear to make no difference at all. This is particularly problematic for behavioral interventions, which are often difficult to describe in writing and to measure with the requisite rigor, and are easier to model than to describe.
• Is it visible? All who should be interested in the intervention must be able to see the article when it is published. My computer tells me that there are 23,232 peer-reviewed journals; 43,353 full-text online journals; and 193 available in my library (with 104 nursing journals now listed on the ISI Web of Knowledge). This volume of literature is problematic. Researchers are discouraged from “double publishing,” and indeed, if we were to adopt such a habit, it would further swell the literature and worsen the problem of being swamped by it. Yet we know from advertising theory that repeated exposure leads to more effective retention. It is probable that one 15-page publication is not enough to grab the attention of an adequate number of clinicians to implement a finding, and we do not know what those attention-grabbing factors are. One possible solution is the publication of synopses, as in Evidence-Based Nursing. Another is to “do more” meta-analyses, meta-syntheses, and work on theoretical coalescence.
• Are the instructions adequate? The descriptions in a 15-page article may be too thin to describe the intervention properly. Frequently, researchers “split” their findings into several articles to overcome this problem. Although this gives researchers additional “publication credit,” it poses the risk that readers will not be aware of the other related articles in the set, which are now separated and published in other journals, and the theory loses its coherence. Knowledge is scattered, implementation becomes more difficult, and this fragmentation probably weakens the impact of the research.
• Do you really have a problem? Sometimes change is recommended inappropriately: Something that is suitable and beneficial for one group of patients is recommended for all, or the problem is not serious or large enough to warrant the effort of making a change.
Perhaps these questions should become review criteria for those who are responsible for implementing change in clinical practice.
TO DO 39.1. DISCUSS THE IMPORTANCE OF REPLICATION IN QUANTITATIVE RESEARCH
Consider replication in relation to accepting, altering, or rejecting the theory, concepts, and operationalization of those concepts in the original study. Why must the theoretical foundation of the research be constantly re-examined? Think of examples in which the theoretical foundation was faulty and jeopardized subsequent research.
Dissemination Through Education
The basic problem is that researchers have no control over the acceptance and use of their products. But perhaps there is one other thing to try.
Dissemination through education appears to be a promising, albeit slow, way to implement change. The idea is to ensure that content from research is included in basic nursing texts, or, in this case, even more advanced texts, so that students read and learn the innovations as part of their basic education. Later, when they graduate and move into practice, they may take these innovative practices and models of care with them.
Wearing my other hat, I study patient falls. This research has been going on since about 1990 and is applied and useful, yet I am stunned at the quality of the information about patient falls in basic texts, or even its absence. The Morse Fall Scale (MFS) has taken on a life of its own on the Internet, where it has become distorted, invalidated, popularized, and trivialized (Morse, 2006). Something is wrong with this picture.
Dissemination Using Policy Change
Dictating change through policy is the most effective way to bring it about. Major behavioral changes, such as banning smoking and mandating the use of seatbelts and child seats, have been achieved through legislation combined with penalties for noncompliance. When change is simply recommended or suggested, it is ineffective, as I am reminded every time a motorcyclist zooms past my car with hair blowing in the breeze. Helmets for motorcyclists are not regulated in the state of Utah; policy and legislation override evidence.
Another problem is that many minor changes are implemented in individual hospitals through policy manuals, rather than through research channels, with the change recommended and formalized in a journal article. Perhaps the evidence-based movement will correct this problem, although it introduces another problem—what exactly is evidence?1
How Will the Findings Be Implemented?
If implementing a technical procedure is hard, implementing a theory or a theoretical finding is doubly difficult. By implementing our research, we are asking people to actually think differently. I have argued that they must not only have the new conceptualization accessible, but also have the time to read, comprehend, and understand it, and, most of all, the desire to act.
But there is another problem: a theory per se is not a tangible product. Although it may provide understanding, that understanding may or may not be translated into action. The theory may or may not change care. Is it visible? Measurable?
In this volume, we have tried to suggest ways that theory may be converted into a tangible product. Developing such products as assessment guides means additional responsibility for researchers; they must take the research one step further, beyond the stage at which a project is usually considered finished. But even so, once the work is “done,” it is still a matter of waiting to see whether the research is noticed and implemented.
The second approach is for the researcher, or another research team, to develop and implement an applied research project—actually developing a project that will test the efficacy of the change. Such a project will bring the research one step closer to more general use. The results of the program testing the implementation will be published this time in a clinical or specialty journal, bringing them again to the attention of clinicians.
The third approach is for the hospital or clinic to adopt the changes as a procedure, even arranging for re-evaluation. This would be a wonderful start, but remember, again, that this is local utilization (and recalculating the validity of the MFS is a futile practice once fall interventions are in place; see Morse, 2006), and it is still a long way from general adoption into practice.2
How Theory Actually Disseminates
Elsewhere (Morse, 2012) I noted that knowledge develops slowly and in clusters. It is rare that a seminal study leapfrogs to a position of influence and remains a milestone, as did, for instance, the introduction of empathy into nursing from Carl Rogers’s (1956) address to the American Nurses Association (ANA) convention. If a study does have impact, I am not certain why that particular study is constantly cited rather than another. The fact remains that no matter how important you think your dissertation is, once it is published, it will probably make its mark by incrementally adding strength to a number of similar studies conducted by other authors, rather than being cited singly as a seminal study.
This is generally how all knowledge develops, although sometimes a cluster of studies may go “off course” and need to be corrected. Most studies fall into a black hole or die in mediocrity, ignored or forgotten. Look at the number of times your articles have been cited—a humbling task—and this is usually an indicator of use by other researchers, rather than by clinicians.
TO DO 39.2. HOW MANY ARTICLES WERE PUBLISHED IN NURSING JOURNALS IN 2015?
Look it up and let me know.
How? Use the Thomson Reuters Web of Knowledge for nursing. Add up the “Number of articles” column. Nice number, eh?
Discuss publication statistics. What is an impact factor? How do you identify a journal you would like to publish in?
Now think about this: How can we possibly integrate and use this knowledge effectively and efficiently? Who should be responsible?
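The impact-factor question above can be made concrete. As a rough sketch, the standard two-year impact factor is the number of citations a journal receives in a given year to items it published in the previous two years, divided by the number of citable items it published in those two years. The figures below are hypothetical, purely for illustration:

```python
# Sketch of the two-year (JCR-style) journal impact factor.
# All counts here are hypothetical, for illustration only.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A hypothetical journal: 180 citations in 2015 to the
# 120 articles it published in 2013 and 2014.
print(round(impact_factor(180, 120), 2))  # 1.5
```

An impact factor of 1.5 means the journal’s recent articles were cited, on average, 1.5 times each in the year measured. Note what the metric indexes: use by other researchers, not use by clinicians, which is exactly the gap this chapter describes.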
Nevertheless, the course of knowledge adoption differs for qualitative and quantitative research. First, we examine the course of qualitative theory or knowledge.
The Accrual of Qualitative Knowledge
There are eight phases, or levels of development, of qualitative inquiry (Morse, 2012), from the first exploratory studies of the phenomenon to the implementation of an intervention (see Table 39.1). Most qualitative health research studies are conducted at Level 1; hence, the adage that “qualitative studies go nowhere.” This may be true for many studies, but if you consider these studies to be a part of a whole, then this “nowhere hypothesis” is not entirely correct—they do eventually contribute to the whole.
When examining Table 39.1, note that studies at the various levels are not conducted in a stepwise, linear direction. An investigator may start at Level 2 and then, realizing that he or she does not have adequate description, conduct a Level 1 study; alternatively, he or she may start at Level 3. The important point is that a single study rarely spans several levels at once, for instance by including information for Levels 3, 4, and 5 in the same study.
Level 1: Identification of Significant Concepts
Level 1 studies explore phenomena that appear interesting and need describing. They may target something that appears problematic, therapeutic, difficult, or simply interesting. The researcher may elect to use any qualitative method that provides thick description (ideally the one most appropriate to the question and other factors): ethnography, grounded theory, phenomenology, narrative inquiry, and so forth. The goal is primarily to see what is going on.
Although these studies are generally descriptive, the data will be synthesized, and the results may include a minor, lower-level theory. Some themes or concepts will have been identified. Some of these concepts may be common and obvious; others will be new: named, described, and delineated. Hopefully the researcher will have linked his or her findings with the literature, but this, unfortunately, is an uncommon occurrence.
Level 2: Delineation and Description of the Anatomy of the Concept
At Level 2, the researcher has identified the major concept of interest, and the purpose of inquiry is to develop it further. The concept may be a lay concept (one that is used in everyday language, such as dignity, privacy, care, or suffering) or a scientific concept (one that has been identified in the process of doing research and defined operationally). The researcher uses methods according to what is known about the concept and its level of maturity (Morse, Mitcham, Hupcey, & Tasón, 1996), which range from the methods of concept analysis, such as those suggested by Rodgers and Knafl (2000), to qualitative methods that enable description and delineation of the concept. From such an inquiry, the attributes (characteristics), boundaries, antecedents, and consequences are identified and delineated. Ideally, researchers can now agree on the nature of the concept, and the inquiry can move forward.
TABLE 39.1 The Phases of Development for Qualitative Nursing Research, From Exploring a Phenomenon to Application

LEVEL   CONTRIBUTION TO KNOWLEDGE DEVELOPMENT
1       Identification of significant phenomena
2       Delineation and description of the anatomy of the concept
3       Examination of the concept in different situations
4       Exploring the relationship of the concept of interest with other co-occurring concepts
5       Synthesizing knowledge
6       Model and theory development
7       Assessment and measurement
8       Clinical application and evaluation of outcomes

Source: Morse (2012). Reprinted with permission of Sage Publications.
Level 3: Examination of the Concept in Different Situations
Researchers conducting studies at this level commence by targeting the concept itself, rather than targeting a phenomenon and waiting for the concept to emerge inductively during data collection. They therefore set out to study caregiving, bereavement, resilience, or whatever, in a certain group or situation. The selected concept provides the terms for the literature search, whereas the context of the study is of secondary importance. The researchers use what is known about the concept (its boundaries, attributes, antecedents, and consequences) to plan ahead. In their study, they will look for differences in the form of the concept according to characteristics of the participants (gender, ethnicity, and so forth) in individual or family groups. The form of the concept is important for qualitative health research, as it may change in participants with different medical conditions, symptoms, or health problems. Conditions frequently investigated in recent years include HIV/AIDS, sexually transmitted diseases, chronic conditions such as diabetes and arthritis, and spinal cord injuries.
These studies provide important information on two levels. First, attributes that remain consistent across various situations are probably true attributes, a part of the concept itself. Second, attributes that change in strength because of context, yet remain in the concept, provide significant information as the research moves toward constructing theory.
Level 4: Exploring the Relationship of the Concept With Other Co-Occurring Concepts
Studies become more complex, and more representative of reality, when the researcher examines more than one concept in a single setting. The researcher must ask: Are these concepts independent? Do they co-occur? How do the interactions between two concepts change when they merge or separate? Do they run parallel, or do they intersect? And, when examining a concept in another culture, is it altered or changed, and if so, how? Are the attributes the same or different? These questions are important, for if the concept changes form, it is not a culturally universal concept. Further, as the concept responds to conditions within the context, new forms of the concept may appear. For instance, in the previously discussed context of hope in the heart transplant unit, where participants have only one chance at a transplant (and the alternative is death), the form of hope becomes “hoping for a chance for a chance” (Morse & Doberneck, 1995). The stakes are very high, and those in this situation are very aware that they are hoping against hope.
Level 5: Synthesizing Knowledge
By this time, many studies have been conducted on the concept, and it is moving toward maturity. Once the concept has been described in many qualitative studies in various contexts, researchers should consider conducting a meta-synthesis. Researchers now have enough data (i.e., studies of the various types described in the previous chapters), from enough perspectives, to determine which conceptual attributes (or characteristics) will be present in every case. Because we know that variations in the strength or role of the attributes give rise to different forms of the concept, it should be relatively easy at this point to identify the types or forms of the concept(s) of interest. The outcome of meta-synthesis, then, should be a higher level of abstraction, with consensus on labels and definitions for the concept.
Level 6: Model and Theory Development
Qualitative researchers begin model building by examining and identifying internal processes and mechanisms and their linkages. This may take place within a project in the course of data collection or by using the literature as data, extending from a meta-analysis.
At this point, concepts are more than labels to a qualitative researcher. We are interested in the strength and interactions of the attributes, and in what the concept does therapeutically. As collections of behaviors, concepts are not static, but change their form to fit a particular situation. In exploring the process of development and utilization of the concepts, researchers examine the internal mechanisms and the interrelationships of the attributes, usually using grounded theory. Researchers examine studies exploring how the concept is used—how it is manipulated and supported by nurses and physicians in the provision of care, by relatives in interactions or lay caregiving, and by the patient within his or her cultural context—thereby leading to solid, mid-range theory.
This process may be extended to explore how the concept links with other concepts and how attributes and the boundary of the concept link as shared attributes of another concept. Concepts are “opened,” and the common or shared characteristics are where the two concepts join. We study how concepts change during trajectories (i.e., process), and how the strength and ordering of attributes change as, for instance, the researcher moves through the process, focusing on the concepts and hence developing an emerging theory.
Level 7: Assessment and Measurement
By this stage, quantitative researchers may have become interested in the concept, developing questionnaires and instruments to measure the concept to determine its prevalence in the population epidemiologically, and designing quantitative experiments to test emerging hypotheses. The concept will find its way into conceptual frameworks, or itself be considered a conceptual framework, and will be “opened” so that its attributes form the framework.
Level 8: Clinical Application and Evaluation of Outcomes
Since the advent of evidence-based medicine, clinical application and utilization have become a formal research task: Clinical problems are identified, and appropriate research-based interventions, selected to solve the particular problem, are deliberately applied and evaluated. If they pass this test, they are adopted into practice. Interestingly, this process of evaluation tends to be done on a case-by-case, hospital-by-hospital, or unit-by-unit basis, rather than in the coordinated and funded manner of medicine.
To summarize, decades of work by many research teams working relatively independently form a foundation from which concepts are identified, theories are developed and generalized, and insights and interventions are developed to improve practice. However, that process of conducting research and building knowledge is not sequential, but rather haphazard. Researchers follow their own research programs, their own disciplinary agendas, and their personal interests, and they respond to funding calls and clinical opportunities. And as qualitative researchers, we teach our students to be skeptical of the work of others and to value and prioritize principles of induction. All of this slows down the progression of knowledge development.
The Accrual of Quantitative Research
First, remember that the products of qualitative and quantitative research are quite different. The most usual outcome of qualitative research is theory; the outcome of quantitative research is usually a product: a scale or an instrument; population-based data that may be used as rates or norms; efficacy information on the improvement of treatments, therapies, or programs; and so forth. Quantitative theory is hidden beneath these products, but it is crucial for guiding what they become and, ultimately, how they inform care.
Recall that quantitative research uses scientific concepts, which are created according to the needs of a particular research program, and these concepts are operationalized according to the limitations of available measures. For instance, coping (Lazarus, 1966) and social support (Cobb, 1976) were scientific concepts developed by quantitative researchers and were operationally defined to meet the needs of subsequent research. Scientific concepts are rarely introduced into the literature without first being brought to the level of research. However, in the next phase, quantitative frameworks and theories are sometimes developed and published—perhaps because they are commonsense frameworks for organizing data, or perhaps because they have a mathematical basis, such as Einstein’s theory of relativity.
One major difference in quantitative research, especially in experimental design, is the significance of replication: Research should be replicated by the investigator and also by others; methods must be transparent and clear; and results must be available to all. Quantitative research moves forward in a stepwise fashion, replicating the research in different populations, extending the research intervention, and modifying and strengthening the model (as shown in Table 39.2).