Critical evaluation of published research


Introduction


By the time a research report is published in a peer-reviewed journal, it has been critically reviewed by experts. Usually, changes have been made to the initial draft by the author(s) in response to the reviewers’ comments. Nevertheless, even a thorough evaluation procedure does not guarantee the validity of the design or the conclusions presented in a published paper. Ultimately, you as a health professional must be responsible for judging the validity and relevance of published material for your own clinical activities (which is why you are studying health sciences). Evidence-based practice focuses on the ways in which practitioners can improve their clinical practice by using research evidence. The systematic review processes employed by bodies such as the Cochrane Collaboration are intended to assist clinicians in the selection of interventions that are well proven and safe (see Ch. 24).


The proper attitude to take with published material, including systematic reviews, is one of hard-nosed scepticism, whatever the status of the publication. This attitude is based on our understanding of the uncertain and provisional nature of scientific and professional knowledge, as outlined in Chapter 1. In addition, health researchers deal with the investigation of complex phenomena, where it is often impossible for ethical reasons to exercise the desired levels of control or to collect crucial information required to arrive at definitive conclusions. The conduct of a valid critique requires that we compare the methods used by the researchers with the rules of evidence in the context of a research project. Critical evaluation identifies the strengths and weaknesses of a research publication, to ensure that patients receive assessment and treatment based on the best available evidence.


The aim of this chapter is to discuss the evaluation of published research. The chapter is organized around the evaluation of the specific sections of research publications.


Guidelines for critical appraisal of research publications


Many textbooks on health research methodology or evidence-based health care provide guidelines for the critical appraisal of published research. For example, Hicks (2009) suggested a list of questions relevant for conducting rigorous appraisals and provided a detailed sample critique of a health research publication. Critical appraisal is an essential step in conducting evidence-based health care (see Ch. 24). Consequently, textbooks in this area devote chapters to explaining critical evaluations. For example, Dawes et al (2005) provide detailed guidelines for critiquing publications based on different types of designs, including randomized controlled trials (RCTs), case-control and cohort studies and systematic reviews.


There are also numerous websites hosted by reputable organizations that facilitate the appraisal of published evidence. A useful online example is the University of South Australia Division of Health Sciences International Centre for Allied Health (http://www.unisa.edu.au/cahe/resources/cat/default.asp), which is dedicated to the critical evaluation of published research.


Some groups have created critical appraisal schemes which attempt to produce numerical scores representing methodological quality. For example, the Downs and Black quality index generates an overall score out of 21 to represent the quality of a publication. Publications with low scores may be excluded from systematic reviews or judged to have poor credibility. While such quantitative approaches are followed by some reviewers to identify risk of bias (e.g. Soares-Weiser et al 2010; see Ch. 4), it is worth noting that the critical appraisal of published research is not easily quantified. Rather, health researchers and practitioners need to make critical judgements based on their personal knowledge, experience and work contexts.
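The logic of such a scoring scheme can be sketched as follows. The item names, scores and cut-off below are invented for illustration only; the actual Downs and Black instrument defines its own items and scoring rules.

```python
# Minimal sketch of a numerical quality index: each checklist item is
# scored, the scores are summed, and the total is compared with an
# inclusion threshold. All names and values here are hypothetical.

def quality_score(item_scores):
    """Sum the per-item scores into an overall quality index."""
    return sum(item_scores.values())

def include_in_review(item_scores, threshold):
    """Apply a simple cut-off: publications scoring below the threshold
    would be excluded from the review or flagged as low credibility."""
    return quality_score(item_scores) >= threshold

# A hypothetical appraisal of one publication (1 = criterion met).
appraisal = {
    "clear_aims": 1,
    "randomized_assignment": 1,
    "blinded_assessors": 0,
    "adequate_sample_size": 1,
    "appropriate_statistics": 1,
}

print(quality_score(appraisal))          # 4
print(include_in_review(appraisal, 4))   # True
```

The sketch also illustrates the limitation noted above: a single summed score treats all items as equally important and hides which specific weaknesses produced a low total, which is why reviewers still need to exercise judgement.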


Table 23.1 summarizes some of the potential problems, and their implications, which might emerge in the critical evaluation of an investigation. A point which must be kept in mind is that, even where an investigation is flawed, useful knowledge might be drawn from it. The aim of critical analysis is not to discredit or tear down published work, but to ensure that the reader understands its implications and limitations with respect to theory and practice. In this chapter we will focus on the constructive, critical evaluation of each stage of the research process as identified in Chapter 3.



Table 23.1

Checklist for evaluating published research

Problem which might be identified in a research article | Possible implications
1. Inadequate literature review | Misrepresentation of the conceptual basis for the research
2. Vague aims or hypotheses | Research might lack direction; interpretation of evidence might be ambiguous
3. Inappropriate research strategy | Findings might not be relevant to the problem being investigated
4. Inappropriate variables selected | Measurements might not be related to the concepts being investigated
5. Inadequate sampling method | Sample might be biased; investigation could lack external validity
6. Inadequate sample size | Sample might be biased; statistical analysis might lack power
7. Inadequate description of sample | Application of findings to specific groups or individuals might be difficult
8. Instruments lack validity or reliability | Findings might represent measurement errors
9. Inadequate design | Investigation might lack internal validity, i.e. outcomes might be due to uncontrolled extraneous variables
10. Lack of adequate control groups | Investigation might lack internal validity; size of the effect difficult to estimate
11. Biased participant assignment | Investigation might lack internal validity
12. Variations or lack of control of treatment parameters | Investigation might lack internal validity
13. Observer bias not controlled (Rosenthal effects) | Investigation might lack internal and external validity
14. Participant expectations not controlled (Hawthorne effects) | Investigation might lack internal and external validity
15. Research carried out in inappropriate setting | Investigation might lack ecological validity
16. Confounding of times at which observations and treatments are carried out | Possible series effects; investigation might lack internal validity
17. Inadequate presentation of descriptive statistics | The nature of the empirical findings might not be comprehensible
18. Inappropriate statistics used to describe and/or analyse data | Distortion of the decision process; false inferences might be drawn
19. Erroneous calculation of statistics | False inferences might be drawn
20. Drawing incorrect inferences from the data analysis (e.g. type II error) | False conclusions might be made concerning the outcome of an investigation
21. Protocol deviations | Investigation might lack internal or external validity
22. Over-generalization of findings | External validity might be threatened
23. Confusing statistical and clinical significance | Treatments lacking clinical usefulness might be encouraged
24. Findings not logically related to previous research findings | Theoretical significance of the investigation remains doubtful


Apr 12, 2017 | Posted in MEDICAL ASSISTANT
