October 5, 2012 | Posted By John Kaplan, PhD

It seems today that the popular press is replete with stories telling us that scientific findings that were widely believed on the basis of earlier studies are untrue, that conclusions unsupported by previous studies are true, and, in the extreme, that most medical research is wrong. Why is this happening, how could this be, and, most importantly, are the studies indicating that things are not as previously believed, in fact, correct? It turns out that much of the basis for these contradictions is the increasingly prevalent use of meta-analysis.

Meta-analysis is a technique that combines the results of multiple studies in order to provide more statistical power. Statistical power reflects a study's ability to detect a real effect: power increases as the number of experimental observations increases, as the variance (the variation in the data) decreases, and as the effect size (the difference between control and experimental observations) increases. By pooling the results of multiple studies, statistical power increases, generally because of the greater number of experimental observations.
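As a rough illustration of the sample-size point, the minimal sketch below (mine, not drawn from any study discussed here, and assuming a hypothetical effect size of 0.3 and the statsmodels library) shows how the power of an ordinary two-sample t-test grows as the number of observations per group grows:

    # Power of a two-sample t-test at a fixed, hypothetical effect size.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for n in (20, 50, 100, 400):
        # effect_size is Cohen's d; alpha is the significance level.
        power = analysis.solve_power(effect_size=0.3, nobs1=n, alpha=0.05)
        print(f"n per group = {n:4d}  ->  power = {power:.2f}")

Pooling several studies has, at least arithmetically, the same effect as running one study with the combined sample size.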

It is important to keep in mind that meta-analysis is not a single, well-established technique but rather a general approach encompassing various methods of combining results. Most use weighted averages of outcomes, with individual studies weighted by sample size or event rate. There is nothing wrong with this approach.
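To make the arithmetic concrete, here is a minimal sketch of one common variant, fixed-effect inverse-variance weighting, in which larger, lower-variance studies receive proportionally more weight. The four studies and their numbers are invented purely for illustration:

    import numpy as np

    # Hypothetical effect estimates and standard errors for four invented studies.
    effects = np.array([0.20, 0.35, 0.10, 0.28])
    ses = np.array([0.10, 0.15, 0.08, 0.12])

    weights = 1.0 / ses**2  # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")

Note that the weighting machinery itself is indifferent to whether the four studies were designed alike; it simply averages whatever it is given.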

The problem comes in deciding what to combine. I would contend that in most cases the studies being combined should not be. Just because one can combine results does not mean one should. So why would I suggest that results should not be combined? The very simple answer is that only rarely are studies similar enough in experimental design to combine; investigators rarely repeat the exact same study. The differences among the studies are, in fact, likely to be the reason the results fail to concur in the first place. Combining such studies factors out the real reason the results differ. These differences need to be taken into account, not obscured. If the patient populations differ, meta-analysis cannot meaningfully combine them. If the study designs differ, meta-analysis cannot meaningfully combine them. If the analytical methods differ, meta-analysis cannot meaningfully combine them. Yet this is precisely what most meta-analyses do. In a very real manner these meta-analyses factor out the consideration of possibly critical information.

Most meta-analyses do test whether the studies are too heterogeneous to combine. However, this evaluation assesses heterogeneity of outcomes, not heterogeneity of design. Heterogeneity of design is established by review of the articles by those conducting the meta-analysis, who must both establish the inclusion criteria and determine whether candidate studies are, in fact, included. It would seem that very rigorous inclusion criteria would produce the most rigorous meta-analysis. However, this risks the very real possibility of throwing out most of the information pertaining to a topic and analyzing only a small proportion of the studies. This happened with a controversial meta-analysis published in Lancet (volume 355, p133) which concluded that mammography had no benefit. That conclusion was reached after the authors' criteria for randomization dictated the exclusion of six studies with over 300,000 patients, leaving only two studies on which to base the conclusions.
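To make the distinction concrete: the standard outcome-heterogeneity check is Cochran's Q and the derived I-squared statistic. A minimal sketch, reusing the invented numbers from above, follows. Notice that it consumes only the reported outcomes, so identical designs and wildly different designs look exactly the same to it:

    import numpy as np
    from scipy import stats

    effects = np.array([0.20, 0.35, 0.10, 0.28])
    ses = np.array([0.10, 0.15, 0.08, 0.12])

    weights = 1.0 / ses**2
    pooled = np.sum(weights * effects) / np.sum(weights)

    # Cochran's Q: weighted squared deviations of each study from the pooled mean.
    Q = np.sum(weights * (effects - pooled) ** 2)
    df = len(effects) - 1
    p_value = stats.chi2.sf(Q, df)
    # I^2: the share of observed variation beyond what chance alone would explain.
    I2 = max(0.0, (Q - df) / Q) * 100
    print(f"Q = {Q:.2f}, p = {p_value:.3f}, I^2 = {I2:.0f}%")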

Author selection of articles for inclusion increases the risk of author bias: an author can select studies that increase the likelihood of a particular outcome. James Coyne has provided a detailed critique of a meta-analysis by Priscilla Coleman published in the British Journal of Psychiatry. He reports, “Although there is a vast literature concerning mental health effects of abortion, Coleman selects only 22 studies, 11 of them her own. She indicated that she has excluded other studies as being too poorly designed, but she fails to identify which studies were excluded and specifically why.” There is good reason to believe that Dr. Coleman has biases and has acted on them. The journal's editors have been widely criticized for publishing this meta-analysis with its multiple methodological flaws. Steven Novella has similarly laid out the methodological deficiencies and biases of a recent meta-analysis purporting to prove the effectiveness of acupuncture.

Many consider meta-analysis a strong level of scientific proof. This may be true in a limited number of circumstances. However, these analyses have limited applicability and great potential for error and bias, both unintentional and purposeful. Meta-analyses need to be received much more critically by journal editors and reviewers, and by readers. Strong evidence derived from scientifically based medical research is too important not to receive that scrutiny.

The Alden March Bioethics Institute offers a Master of Science in Bioethics, a Doctorate of Professional Studies in Bioethics, and Graduate Certificates in Clinical Ethics and Clinical Ethics Consultation. For more information on AMBI's online graduate programs, please visit our website.

Topics: Research Ethics

