Tag Archives: Critical appraisal

Recommended: Is vaccine effectiveness (VE) different from vaccine efficacy

This is a non-technical discussion of the difference between effectiveness and efficacy (two easily confused terms) in the context of vaccination. Short answer: efficacy is a measurement under ideal circumstances while effectiveness is a measurement in a “real-world” setting. Continue reading
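If you want to see the arithmetic behind the two terms, here is a tiny sketch with made-up numbers (my own illustration, not anything from the linked article). The basic calculation is the same in both cases, VE = 1 - RR, where RR is the risk of disease in the vaccinated relative to the unvaccinated; "efficacy" is that number estimated under the ideal conditions of a randomized trial, while "effectiveness" is the same number estimated from routine, real-world use.

```python
# A tiny illustration with made-up numbers (not from the linked article) of
# the calculation behind both terms: VE = 1 - (risk in vaccinated) / (risk in
# unvaccinated). The formula is the same; what differs is the setting.

def vaccine_effect(cases_vax, n_vax, cases_unvax, n_unvax):
    """Return 1 - relative risk, the usual efficacy/effectiveness estimate."""
    risk_vax = cases_vax / n_vax
    risk_unvax = cases_unvax / n_unvax
    return 1 - risk_vax / risk_unvax

# Efficacy: a randomized trial under ideal circumstances (healthy volunteers,
# perfect cold chain, full dosing schedule).
print(f"Efficacy:      {vaccine_effect(10, 10_000, 100, 10_000):.0%}")   # 90%

# Effectiveness: routine use in a real-world setting (imperfect storage,
# missed doses, frailer patients), so the observed protection is lower.
print(f"Effectiveness: {vaccine_effect(35, 10_000, 100, 10_000):.0%}")   # 65%
```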

Recommended: An introduction to implementation science for the non-specialist

I’ve done a lot of work with Evidence-Based Health, but one big and largely unsolved problem is how to get health care professionals to change their practices once the evidence for these changes becomes obvious. If no one changes in the face of evidence, then all the effort to produce and critically appraise the evidence becomes worthless. A new field, implementation science, has been developed to get at methods to encourage the adoption of new evidence-based practices. This paper outlines how implementation science is supposed to work and offers two real-world examples of implementation science studies. Continue reading

Recommended: Philosophy News Network: Postmodernism Special Report

I generally shy away from philosophical debates, but I did discuss a Postmodern critique of Evidence Based Medicine a while back. When one of my more intellectual friends posted a link to a commentary on Postmodernism on the Existential Comics web site, I had to take a look. I think I did a pretty good job of summarizing Postmodernism without stereotyping it, but maybe I’m setting my standards too low if I try to compete with a comic strip. You can judge for yourself. Continue reading

Recommended: When the revolution came for Amy Cuddy

This is one of the best articles I have ever read in the popular press about the complexities of the research process.

This article by Susan Dominus covers some high-profile research by Amy Cuddy. She and two co-authors found that your body language not only influences how others view you, but also how you view yourself. Striking a “power pose,” meaning something like “legs astride or feet up on a desk,” can improve your sense of power and control, and these subjective feelings are matched by physiological changes: your testosterone goes up and your cortisol goes down. Both of these, apparently, are good things.

The research team publishes these findings in Psychological Science, a prominent journal in this field. The article receives a lot of press coverage. Dr. Cuddy becomes the public face of this research, most notably by garnering an invitation to give a TED talk, and she does a bang-up job. Her talk becomes the second most viewed TED talk of all time.

But there’s a problem. The results of the Psychological Science publication do not get replicated. One of the other two authors expresses doubt about the original research findings. Another research team reviews the data analysis and labels the work “p-hacking”.

The term “p-hacking” is fairly new, but other terms, like “data dredging” and “fishing expedition,” have been around for a lot longer. There’s a quote attributed to the economist Ronald Coase that is commonly cited in this context: “If you torture the data long enough, it will confess to anything.” I have described it as “running ten tests and then picking the one with the smallest p-value.” Also relevant is this XKCD cartoon.
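If you want to see how badly that crude “run ten tests, pick the smallest p-value” strategy misbehaves, here is a minimal simulation sketch (my own illustration, not anything from the article). Both groups are drawn from the same population, so every “significant” difference is a false positive, yet reporting only the best of ten tests inflates the false positive rate from the nominal 5% to roughly 40% (that is, 1 - 0.95^10).

```python
# A minimal simulation sketch of the crude "run ten tests, pick the smallest
# p-value" version of p-hacking. Both groups come from the same distribution,
# so any "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims = 10_000      # number of simulated studies
n_outcomes = 10      # ten unrelated outcomes measured per study
n_per_group = 30     # subjects per group

false_positives = 0
for _ in range(n_sims):
    # Ten outcomes, none of which truly differ between the two groups.
    p_values = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    # The p-hacking step: report only the smallest p-value.
    if min(p_values) < 0.05:
        false_positives += 1

# One honest test gives about a 5% false positive rate; cherry-picking the
# best of ten pushes it to roughly 40%.
print(f"False positive rate: {false_positives / n_sims:.1%}")
```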

If p-hacking is a real thing (and there’s some debate about that), then it is a lot more subtle than the quotes and cartoon mentioned above. You can find serious and detailed explanations in a FiveThirtyEight article by Christie Aschwanden and in a 2015 PLOS article by Megan Head et al.

If p-hacking is a problem, then how do you fix it? It turns out that there is a movement in the research world to critically examine existing research findings and to see if the data truly supports the conclusions that have been made. Are the people leading this movement noble warriors for truth or are they shameless bullies who tear down peer-reviewed research in non-peer-reviewed blogs?

I vote for “noble warriors” but read the article and decide for yourself what you think. It’s a complicated area and every perspective has more than one side to it.

One of the noble warriors/shameless bullies is Andrew Gelman, a prominent statistician and social scientist. He comments extensively on the New York Times article on his blog; his post is worth reading, as are the many comments that others have left on it. It’s also worth digging up some of his earlier commentary about Dr. Cuddy. Continue reading

PMean: Bad examples of data analysis are bad examples to use in teaching

I’m on various email discussion groups, and every once in a while someone sends out a request that sounds something like this:

I’m teaching a class (or running a journal club or giving a seminar) on research design (or evidence-based medicine or statistics) and I’d like to find an example of a research study that uses bad statistical analysis.

And there’s always a flood of responses back. But if I were less busy, I’d jump into the conversation and say “Stop! Don’t do it!” Here’s why. Continue reading

Recommended: The Empirical Evidence of Bias in Trials Measuring Treatment Differences

When I wrote a book about Evidence Based Medicine back in 2006, I talked about empirical evidence to support the use of certain research methodologies like blinding and allocation concealment. Since that time, many more studies have appeared, more than you or I could easily keep track of. Thankfully, the folks at the Agency for Healthcare Research and Quality commissioned a report to look at studies that empirically evaluate how well several popular bias reduction approaches used in randomized trials actually work. These include:

- selection bias, through randomization (sequence generation and allocation concealment);
- confounding, through design or analysis;
- performance bias, through fidelity to the protocol, avoidance of unintended interventions, patient or caregiver blinding, and clinician or provider blinding;
- detection bias, through outcome assessor and data analyst blinding and appropriate statistical methods;
- detection/performance bias, through double blinding;
- attrition bias, through intention-to-treat analysis or other approaches to accounting for dropouts; and
- reporting bias, through complete reporting of all prespecified outcomes.
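As a concrete illustration of the first item, here is a minimal sketch of sequence generation with allocation concealment (my own toy example, not anything from the AHRQ report): the allocation sequence is generated up front in random permuted blocks, and each assignment stays hidden in a numbered “envelope” until a patient has actually been enrolled, so recruiters cannot predict or steer the next allocation.

```python
# A toy sketch of sequence generation plus allocation concealment for a
# two-arm trial: permuted blocks of 4 are generated in advance, and each
# assignment is revealed only after a patient has been enrolled.
import random

def generate_sequence(n_blocks, seed=2006):
    """Sequence generation: random permuted blocks of 4 (2 treatment, 2 control)."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["Treatment", "Treatment", "Control", "Control"]
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

class SealedEnvelopes:
    """Allocation concealment: assignments stay hidden until enrollment."""
    def __init__(self, sequence):
        self._sequence = sequence
        self._next = 0

    def enroll(self, patient_id):
        # Only now is the next allocation revealed, and only for this patient.
        arm = self._sequence[self._next]
        self._next += 1
        return f"{patient_id} -> {arm}"

envelopes = SealedEnvelopes(generate_sequence(n_blocks=5))  # 20 allocations
print(envelopes.enroll("Patient 001"))
print(envelopes.enroll("Patient 002"))
```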

The general finding was that failure to use these bias reduction approaches tended to exaggerate treatment effects, but the magnitude and precision of these exaggerated effects were inconsistent. Continue reading

Recommended: Differences between information in registries and articles did not influence publication acceptance

Here’s a research article tackling the same problem of changing outcome measures after the data is collected. Apparently, this occurred in 66 of the 226 papers reviewed here, or almost 30% of the time. The interesting thing is that whether this occurred or not was independent of whether the paper was accepted. So journal editors are missing an opportunity here to improve the quality of the published literature by demanding that researchers abide by the choices that they made during trial registration. Continue reading

Recommended: The COMPare Project

One of the many problems with medical publications is that researchers will choose which outcomes to report based on their statistical significance rather than their clinical importance. This can seriously bias the published results. You can easily avoid this potential bias by specifying your primary and secondary outcome measures prior to data collection. Apparently, though, some researchers will change their minds after designating these outcome variables, failing to report some of the outcomes and/or adding new outcomes that were not specified prior to data analysis. How often does this occur? A group of scientists at the Centre for Evidence-Based Medicine at the University of Oxford is trying to find out. Continue reading
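To make the idea concrete, here is a toy sketch of the basic check involved (my own illustration, not the COMPare group’s actual workflow): take the outcomes prespecified in the trial registry, take the outcomes reported in the published paper, and look at what was silently dropped or silently added.

```python
# A toy sketch (not COMPare's actual workflow) of the basic check: compare the
# outcomes prespecified in the trial registry with the outcomes reported in
# the published paper. The outcome names below are invented for illustration.
registered = {"all-cause mortality", "hospital readmission at 30 days", "quality of life score"}
reported = {"all-cause mortality", "quality of life score", "length of stay"}

missing = registered - reported   # prespecified but never reported
added = reported - registered     # reported but never prespecified

print("Outcomes silently dropped:", sorted(missing))
print("Outcomes silently added:  ", sorted(added))
```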