Politics and Evidence-Based Practice and Policy



Sean P. Clarke



“The union of the political and scientific estates is not like a partnership, but a marriage. It will not be improved if the two become like each other, but only if they respect each other’s quite different needs and purposes. No great harm is done if in the meantime they quarrel a bit.”


—Don K. Price


Health care has been a conservative field characterized by deep investments in tradition. Evolution in treatment approaches and in facility and service management has often been gradual, punctuated by occasional breakthroughs. For many years, it was said that nearly two decades could pass between the appearance of research findings and their uptake into practice. While that estimate bears revisiting in the era of evidence-based practice and the Internet, disconnects between evidence and care practices are still common, as are inconsistencies in practice and variations in patient outcomes across providers and institutions. It is clear that bringing research findings to “real world” settings remains a slow and uneven process.


Clinicians, researchers, and policymakers are well aware of poor uptake of research evidence and lost opportunities to improve services, spurring interest in clinical practice, and more recently health care policy, driven by high-quality scientific evidence. An often-cited definition of evidence-based practice is “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett et al., 1996). Evidence-based policy is an extension or extrapolation of the tenets of evidence-based practice to decisions about resource allocation and regulation by various governmental and regulatory bodies. Awareness of the scale of investments in health and social service programs and in research around the world, the enormous stakes of providers and clients in the outcomes of policy decisions, and increasing demands for transparency and accountability have influenced its rise:



Evidence-based policy has been defined as an approach that “helps people make well informed decisions about policies, programmes and projects by putting the best available evidence from research at the heart of policy development and implementation” (Davies, 1999). This approach stands in contrast to opinion-based policy, which relies heavily on either the selective use of evidence (e.g., on single studies irrespective of quality) or on the untested views of individuals or groups, often inspired by ideological standpoints, prejudices, or speculative conjecture (Davies, 2004, p. 3).


Controversies in clinical care and policy development are sometimes intense. Furthermore, political forces can greatly influence the types of research evidence that are generated, into whose hands that evidence finds its way, how it is interpreted in the context of other data and values, and, most significantly, how (if at all) it is used to influence practice. This chapter reviews the politics of translating research into evidence-based practice and policy, from the generation of knowledge to its synthesis and translation.


The Players and Their Stakes


Translating research into practice involves many stakeholder groups. Health care professionals are often directly affected by practice changes based on evidence. Many are deeply invested in particular clinical methods, work practices, and structures of practice; in other words, in the status quo of the treatment approaches they use and the way their care is organized. They often have preferences, pet projects, and passions, and may even have visions for health care and their profession’s role that could be advanced or dashed by change. There can be issues of protecting working conditions, as well as turf issues with other professions, notably the protection of services or programs that are lucrative for particular professions.


Industries connected with health care often face direct financial consequences when research drives the adoption, continued use, or rejection of specific products. These include not only pharmaceuticals and consumable (e.g., dressings) and durable (e.g., hospital beds, information technology) medical supplies, but also less visible, yet equally expensive and important, products such as consulting services.


Managers, administrators, and ultimately policymakers have stakes in delivering services in their facilities, organizations, or jurisdictions in certain ways or within specific cost parameters. In general, administrators would prefer to have as few constraints as possible in managing health care services and thus may be less than enthusiastic about regulation as a method of controlling practice; however, changes that increase available resources may be better accepted.


For researchers, wide uptake of findings into practice is one of the most prestigious forms of external recognition, particularly if mandated by some sort of high-impact policy or legislation. This is especially the case for researchers working in policy-relevant fields where funding and public profile are mutually reinforcing. Researchers and academics involved in the larger evidence-based practice movement also have stakes in the enterprise. There are researchers, university faculty, and other experts who have become specialists in synthesizing and reporting outcomes and have interests in ensuring that distilled research in particular forms retains high status. Furthermore, funding agency advisers and bureaucrats may also be very much invested in the legitimacy conferred by the use of evidence-based practice processes.


The general public, especially subgroups with stakes in specific types of health care, wants safe, effective, and responsive care. People want to feel that their personal risks, costs, and uncertainties are minimized, and they may or may not have insight into, or concern about, the broader societal and economic consequences of treatments or models of care delivery. Expert opinions and research findings tend to carry authority, but for the public, these are filtered through the media, including Internet outlets.


Elected politicians and bureaucrats want to appear well informed and responsive to the needs of the public and interest groups, while conveying that their decisions balance risks, benefits, and the interests of various stakeholder groups. Elected politicians are usually concerned about voter satisfaction and their prospects for reelection. Like the public, they receive research evidence filtered through others, sometimes by the media but often by various types of civil servants. Non-elected bureaucrats inform politicians, manage specialized programs, and implement policies on a day-to-day basis. They may be highly trained and can become quite well informed about research evidence in particular fields. Top bureaucrats, however, serve at the pleasure of elected officials and so are sensitive to public perceptions, opinions, and preferences.


The Role of Politics in Generating Evidence


Health care research is often a time- and cost-intensive activity involving competition for scarce resources and rewards. Much is on the line for many stakeholders. What projects are attempted, what is generated, and what is reported from completed studies are all very much affected by political factors at multiple levels.


Much research likely to influence practice or policy requires financial support from outside institutions. Researchers write applications to funders for grants to pay for the resources required to carry out their work. Before agreeing to underwrite projects, external funders must believe that the topic being researched is important and relevant to the funding mission, that the research approach is viable, and that the proposed research team can carry out the project as proposed. Funders are often governmental or quasi-governmental agencies, but research can also be subsidized by producers or marketers of specific products or services. When research is supported by suppliers of particular medications, products, or services, funders may have overtly stated or implicit interests in the results, and researchers may face pressures around the framing of questions, the research approaches used, and how, where, and when findings are disseminated. Only recently has the full extent of potential conflicts of interest related to industry-researcher partnerships come to light. However, not-for-profit and government funding agencies also have stakes and preferences in what types of projects are funded, and their decisions are likewise influenced by public relations and political considerations.


Researchers must also please their employers with evidence of their productivity (e.g., successful research grants and high-profile publications). Not surprisingly, researchers choose to pursue certain types of projects over others and gravitate toward topics they believe will help them secure funding. They may also defend or try to raise the profile of their particular approaches or topics through their influence as reviewers, as members of journal editorial boards and grant review committees, and through appointments to positions of real or symbolic power. There can be a great disincentive to move away from research topics and approaches that have garnered support and recognition in the past. Nonetheless, research topics and approaches go in and out of style over time; subjects become especially relevant or capture the public’s or professionals’ imaginations and then often fade. As a result, academic departments, funding bodies, institutions, and dissemination venues become the locales where specific tastes and priorities emerge or disappear. This also applies to methodologies and theoretical stances within research fields.


Some subject matter areas, or theoretical stances for framing subjects, are so inherently controversial that securing funding and carrying out data collection is extremely challenging. Anything touching on reproductive health or sexual behavior tends to be potentially volatile, especially in a conservative political climate, but questioning the effectiveness or cost-benefit ratio of a health service much beloved by providers, the public, or both, or framing it as potentially wasteful, can also encounter a good deal of resistance.


Comparative Effectiveness Studies


Research that compares the effectiveness of different clinical approaches or different approaches to managing services is of course the most relevant for shaping practice and making policy. However, comparative effectiveness research is difficult to carry out for a variety of reasons. Gaining access to health care settings and ethically conducting studies that expose patients or communities to different approaches requires a freely acknowledged state of uncertainty regarding the superiority of one approach over another. To produce meaningful results, the interventions or approaches in question must be sufficiently standardized, and researchers must be able to rigorously measure outcomes across sufficient numbers of patients in enough clinical settings. On the whole, comparative intervention research is complicated, demanding, and expensive to carry out, and it is likely to plunge researchers into debates that can be quite politically sensitive. It may not be surprising, then, that because of the practical challenges and political pitfalls involved in evaluating or testing interventions, many researchers in health care are engaged in research intended mainly to inform understandings of health-related phenomena that will enable the design of interventions likely to work. Unfortunately, when careful evaluations are carried out, many widely accepted treatments prove ineffective and needlessly increase both health care costs and risks to the public. Funding for comparative effectiveness research, which many hope will stimulate this essential type of inquiry, is included in the Patient Protection and Affordable Care Act of 2010.


The Politics of Research Application in Clinical Practice


Individual Studies


To stand any chance of influencing practice or policy, findings must be disseminated and read by those in a position to make or influence clinical or policy decisions. Individual research papers may or may not receive much attention, depending on the timeliness of the topic, whether the findings are novel, the profile of the researchers, and the prestige of the journal or conference where the results are presented.


Of course, a key principle of evidence-based practice and policy is that no single study establishes anything as incontrovertible fact. In theory, if not always in practice, single studies are given limited credence until their findings are replicated. Despite evidence that dramatic findings in “landmark” studies, especially those using non-randomized or observational designs, are rarely replicated under more rigorous scrutiny (Ioannidis, 2005), there is still often an appetite for novel findings and a drive to act on them. As a result, single studies, particularly ones with findings that resonate strongly with one or more interest groups, can receive a great deal of attention and even influence health policy despite the preliminary nature of their findings.


Journalists must find the most newsworthy findings in research reports and make them understandable and entertaining to their audiences. In contrast, for scientists, legitimacy hinges on integrity in reporting findings. Simplistic language or the reworking of complex scientific ideas into layman’s terms in the popular press may result in broad statements unjustified by the findings at hand. Being seen as a “media darling,” especially one whose work is popularized without careful qualifiers, can damage a researcher’s scientific credibility. Furthermore, given that reactions and responses (and backlashes) can be very strong, researchers seeking media coverage of their research must be cautious. It is generally best to avoid popular press coverage of one’s results before review by peers and publication in a venue aimed at research audiences. In addition, it is essential to avoid overstating results and to ensure that key limitations of study findings are clearly described, particularly if a treatment or approach has been studied in a narrow population or context or without controlling for important background variables.


Summarizing Literature and the Politics of Guidelines and Syntheses


Despite the appeal of single studies with intriguing results, the principles of evidence-based practice and policy dictate that before action is taken, some type of synthesis of available research results be carried out. Studies with larger, more representative samples and tighter designs are granted more weight in such syntheses.


Conducting and writing systematic reviews and practice guidelines are labor-intensive exercises requiring skill in literature searching, abstracting key elements of relevant research, and comparing findings. The process is expensive and time-consuming, often requiring investments by stakeholder groups to ensure completion. The work is often conducted in teams to make it manageable and to increase the quality of the products, including perceptions of balance and fairness in the conclusions. The procedures used to identify relevant literature are now almost always described in detail to permit others to verify (and later update) the search strategy. It is worth noting that except in contexts such as the Cochrane Collaboration (where all procedures are laid out extremely clearly and designed to be as bias-free as possible), the grading of evidence and the drafting of syntheses are often somewhat subjective and reflect rating compromises.


Political forces will influence which topics, clienteles, or areas of science or practice are targeted for synthesis or guidelines—often high-volume or high-cost services or services where clients are at high risk. Who compiles synthesis documents, and under what circumstances, will also reflect research and professional politics as well as influences from funders and policymakers. In the end, the credibility of syntheses or guidelines hinges on the scientific reputation of those responsible for writing and reviewing them. There is some debate regarding whether subject matter expertise is required of those conducting a synthesis and whether having conducted research in an area creates a vested interest that can jeopardize the integrity of a review. Interestingly, different individuals tend to be involved in conducting research than in carrying out reviews. Key investigators in the area may not want to take time away from their research to be involved, yet may feel a need to defend their studies or protect what they believe to be their interests. Often, recognized experts are brought in at the beginning or end of a search and synthesis exercise to ensure that relevant studies have not been omitted and that results have been correctly interpreted.


Systematic reviews disseminated by authoritative sources can be especially influential on both clinical practice and health policy. When a treatment’s usefulness for recipients is brought into question, or it is suggested that some diagnostics or treatments are superior to others, the creators, manufacturers, or researchers involved with the “losers” are very likely to pool their resources and fight. In 1995, the Agency for Health Care Policy and Research (AHCPR), the federal entity that was the precursor of the Agency for Healthcare Research and Quality (AHRQ), released a practice guideline on the treatment of lower back pain stating that spinal fusion surgery produced poor results (Gray, Gusmano, & Collins, 2003). Lobbyists for spinal surgeons were able to garner sympathy from politicians averse to continued funding for the agency, and, combined with other political enemies and threats, this nearly led to the agency’s disbanding. AHCPR was reborn in 1999 as AHRQ, with a similar mandate to focus on “quality improvement and patient safety, outcomes and effectiveness of care, clinical practice and technology assessment, and health care organization and delivery systems” (www.ahrq.gov) but without practice guideline development in its portfolio.


Clearly, skepticism is warranted when reading literature syntheses involving the standing of a particular product or service that have either been directly funded by industry or interest groups or had close involvement by industry-sponsored researchers (Detsky, 2006). Guidelines and best practices to reduce bias in literature synthesis and guideline creation are now being circulated (IOM, 2009; Palda, Davis, & Goldman, 2007) in much the same way as parameters, checklists, and reporting requirements for randomized trials and observational research (e.g., the CONSORT guidelines at www.consort-statement.org) were first created and disseminated a number of years ago.


The Politics of Research Applied to Policy Formulation


Distilling research findings and crafting messages that allow research evidence to influence policy can be even more complex and daunting than translating research related to particular health care technologies or treatments. Direct evidence about the consequences of different policy actions is often sparse, and much extrapolation is necessary to link available evidence with the questions at hand. Nevertheless, attempts have been made in the U.S. and elsewhere, often through non-profit foundations such as the Robert Wood Johnson Foundation (www.rwjf.org) and the Canadian Health Services Research Foundation (www.chsrf.ca), to educate the public and policymakers about relevant health services research findings. The political challenges of implementing health policy change are also considerable. The amounts of money are often higher and the symbolic significance of the decisions even greater, which makes conflict across the same types of stakeholder interests discussed throughout this chapter potentially even more dramatic. Box 42-1 shows “pearls and pitfalls” of using research in a policy context.

