Tag Archives: Critical appraisal

Recommended: When the revolution came for Amy Cuddy

This is one of the best articles I have ever read in the popular press about the complexities of the research process.

This article by Susan Dominus covers some high-profile research by Amy Cuddy. She and two co-authors found that your body language not only influences how others view you, but also how you view yourself. Striking a “power pose,” meaning something like standing with your legs astride or putting your feet up on a desk, can improve your sense of power and control, and these subjective feelings are matched by physiological changes: your testosterone goes up and your cortisol goes down. Both of these, apparently, are good things.

The research team publishes these findings in Psychological Science, a prominent journal in this field. The article receives a lot of press coverage. Dr. Cuddy becomes the public face of this research, most notably by giving a TED talk, and she does a bang-up job. Her talk becomes the second most viewed TED talk of all time.

But there’s a problem. The results of the Psychological Science publication do not get replicated. One of the other two authors expresses doubt about the original research findings. Another research team reviews the data analysis and labels the work “p-hacking,” a term for running many analyses and reporting only those that reach statistical significance.

It turns out that there is a movement in the research world to critically examine existing research findings and to see if the data truly support the conclusions that have been drawn. Are the people leading this movement noble warriors for truth, or are they shameless bullies who tear down peer-reviewed research in non-peer-reviewed blogs?

I vote for “noble warriors,” but read the article and decide for yourself what you think. It’s a complicated area, and every issue has more than one side to it.

One of the noble warriors/shameless bullies is Andrew Gelman, a well-known statistician and social scientist. He comments extensively on the New York Times article on his blog, which is also worth reading, as are the many comments that others have made on his post. It’s also worth digging up some of his earlier commentary about Dr. Cuddy. Continue reading

PMean: Bad examples of data analysis are bad examples to use in teaching

I’m on various email discussion groups and every once in a while someone sends out a request that sounds something like this.

I’m teaching a class (or running a journal club or giving a seminar) on research design (or evidence based medicine or statistics) and I’d like to find an example of a research study that uses bad statistical analysis.

And there’s always a flood of responses back. But if I were less busy, I’d jump into the conversation and say “Stop! Don’t do it!” Here’s why. Continue reading

Recommended: The Empirical Evidence of Bias in Trials Measuring Treatment Differences

When I wrote a book about Evidence Based Medicine back in 2006, I talked about empirical evidence to support the use of certain research methodologies like blinding and allocation concealment. Since that time, many more studies have appeared, more than you or I could easily keep track of. Thankfully, the folks at the Agency for Healthcare Research and Quality commissioned a report looking at studies that empirically evaluate how well several popular approaches used in randomized trials actually reduce bias. These include

- selection bias through randomization (sequence generation and allocation concealment);
- confounding through design or analysis;
- performance bias through fidelity to the protocol, avoidance of unintended interventions, patient or caregiver blinding, and clinician or provider blinding;
- detection bias through outcome assessor and data analyst blinding and appropriate statistical methods;
- detection/performance bias through double blinding;
- attrition bias through intention-to-treat analysis or other approaches to accounting for dropouts; and
- reporting bias through complete reporting of all prespecified outcomes.

The general finding was that failure to use these bias reduction approaches tended to exaggerate treatment effects, but the magnitude and precision of these exaggerated effects were inconsistent. Continue reading

Recommended: Differences between information in registries and articles did not influence publication acceptance

Here’s a research article tackling the same problem of changing outcome measures after the data are collected. Apparently, this occurred in 66 of the 226 papers reviewed here, or almost 30% of the time. The interesting thing is that whether this occurred or not was independent of whether the paper was accepted. So journal editors are missing an opportunity here to improve the quality of the published literature by demanding that researchers abide by the choices they made during trial registration. Continue reading

Recommended: The COMPare Project

One of the many problems with medical publications is that researchers will choose which outcomes to report based on their statistical significance rather than their clinical importance. This can seriously bias your results. You can easily avoid this potential bias by specifying your primary and secondary outcome measures prior to data collection. Apparently, though, some researchers will change their minds after designating these outcome variables and fail to report on some of the outcomes and/or add new outcomes that were not specified prior to data analysis. How often does this occur? A group of scientists at the Centre for Evidence-Based Medicine at the University of Oxford are trying to find out. Continue reading
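To see why outcome switching matters, here is a minimal simulation sketch (my own illustration, not part of the COMPare project or any of the trials it audits) of how reporting only the most significant of several measured outcomes inflates the false-positive rate. The number of outcomes, patients per arm, and simulated trials are arbitrary assumptions chosen for the example.

```python
# Simulate trials with NO true treatment effect on any outcome, then compare an
# honest analysis of one pre-specified outcome with "outcome switching," where
# only the outcome with the smallest p-value gets reported.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials = 2000      # number of simulated trials (assumption)
n_per_arm = 50       # patients per arm (assumption)
n_outcomes = 8       # outcomes measured per trial (assumption)

false_pos_prespecified = 0
false_pos_switched = 0

for _ in range(n_trials):
    # Under the null hypothesis, both arms come from the same distribution.
    treat = rng.normal(size=(n_outcomes, n_per_arm))
    ctrl = rng.normal(size=(n_outcomes, n_per_arm))
    pvals = [stats.ttest_ind(t, c).pvalue for t, c in zip(treat, ctrl)]

    # Honest analysis: the first outcome was pre-specified as primary.
    false_pos_prespecified += pvals[0] < 0.05
    # Outcome switching: report whichever outcome happened to look best.
    false_pos_switched += min(pvals) < 0.05

print(f"False-positive rate, pre-specified outcome: {false_pos_prespecified / n_trials:.1%}")
print(f"False-positive rate, best of {n_outcomes} outcomes: {false_pos_switched / n_trials:.1%}")
```

With eight independent outcomes and no true effect, the pre-specified analysis stays near the nominal 5%, while reporting the best-looking outcome pushes the false-positive rate to roughly one in three.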

PMean: A book review of my first book

I wrote a book about nine years ago and interest in it has largely died down. Perhaps I should write a second edition. Anyway, I ran across a book review that I had not seen before. It was published in 2006, but I never noticed it until now. Sarah Boslaugh wrote the review and it appeared in MAA Reviews (MAA stands for Mathematical Association of America). It says some nice things, such as calling my approach “fresh.” Dr. Boslaugh also likes my web site, according to the review. Continue reading

Recommended: Improving Bioscience Research Reporting: The ARRIVE Guidelines for Reporting Animal Research

A lot of people have adapted and updated the CONSORT guidelines for reporting clinical trials to handle other types of research. One of these adaptations is the ARRIVE guidelines for reporting animal research. Many of these guidelines follow CONSORT quite closely, but there are details, such as documenting the species and strain of the experimental animals and describing the housing conditions, that are specific to animal experiments. Continue reading

Recommended: In search of justification for the unpredictability paradox

This is a commentary on a 2011 Cochrane review that found substantial differences between studies that were adequately randomized and those that were not. The direction of the difference was not predictable, however, meaning that there was not a consistent bias, on average, towards either overstating or understating the treatment effect. This led the authors of the Cochrane review to conclude that “the unpredictability of random allocation is the best protection against the unpredictability of the extent to which non-randomised studies may be biased.” The authors of the commentary critique this conclusion on several grounds. Continue reading