Tag Archives: Critical appraisal

PMean: Bad examples of data analysis are bad examples to use in teaching

I’m on various email discussion groups and every once in a while someone sends out a request that sounds something like this.

I’m teaching a class (or running a journal club or giving a seminar) on research design (or evidence-based medicine or statistics) and I’d like to find an example of a research study that uses bad statistical analysis.

And there’s always a flood of responses. But if I were less busy, I’d jump into the conversation and say “Stop! Don’t do it!” Here’s why. Continue reading

Recommended: The Empirical Evidence of Bias in Trials Measuring Treatment Differences

When I wrote a book about Evidence Based Medicine back in 2006, I talked about empirical evidence to support the use of certain research methodologies like blinding and allocation concealment. Since that time, many more studies have appeared, more than you or I could easily keep track of. Thankfully, the folks at the Agency for Healthcare Research and Quality commissioned a report to look at studies that empirically evaluate the bias reduction offered by several popular approaches used in randomized trials. These include

selection bias through randomization (sequence generation and allocation concealment);
confounding through design or analysis;
performance bias through fidelity to the protocol, avoidance of unintended interventions, patient or caregiver blinding, and clinician or provider blinding;
detection bias through outcome assessor and data analyst blinding and appropriate statistical methods;
detection/performance bias through double blinding;
attrition bias through intention-to-treat analysis or other approaches to accounting for dropouts; and
reporting bias through complete reporting of all prespecified outcomes.
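To make the first of these concrete, here is a minimal sketch, in Python, of how sequence generation and allocation concealment fit together in a two-arm trial. The function and class names, the block size, and the two-arm design are all my own illustration, not anything drawn from the report.

    # A minimal sketch (my own illustration, not from the AHRQ report) of how
    # sequence generation and allocation concealment work together. The names
    # and the block size are hypothetical.
    import random

    def permuted_block_sequence(n_blocks, block_size=4, seed=None):
        """Sequence generation: random order within balanced blocks, so the
        assignments stay balanced but are unpredictable within a block."""
        rng = random.Random(seed)
        sequence = []
        for _ in range(n_blocks):
            block = (["treatment"] * (block_size // 2)
                     + ["control"] * (block_size // 2))
            rng.shuffle(block)
            sequence.extend(block)
        return sequence

    class ConcealedAllocator:
        """Allocation concealment: the enrolling clinician can only request
        the next assignment, and only after a patient is irrevocably
        enrolled; the full sequence stays hidden from the clinic."""
        def __init__(self, sequence):
            self._assignments = iter(sequence)
        def next_assignment(self):
            return next(self._assignments)

    allocator = ConcealedAllocator(permuted_block_sequence(n_blocks=5, seed=42))
    print(allocator.next_assignment())  # e.g. 'control'

The point of the split is that the person who generates the sequence and the person who enrolls patients never see each other’s information, which is exactly the separation the report’s first category describes.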

The general finding was that failure to use these bias reduction approaches tended to exaggerate treatment effects, but the magnitude and precision of these exaggerated effects were inconsistent. Continue reading

Recommended: Differences between information in registries and articles did not influence publication acceptance

Here’s a research article tackling the same problem of changing outcome measures after the data is collected. Apparently, this occurred in 66 of the 226 papers reviewed here, or almost 30% of the time. The interesting thing is that whether this occurred was independent of whether the paper was accepted. So journal editors are missing an opportunity to improve the quality of the published literature by demanding that researchers abide by the choices they made during trial registration. Continue reading

Recommended: The COMPare Project

One of the many problems with medical publications is that researchers will choose which outcomes to report based on their statistical significance rather than their clinical importance. This can seriously bias your results. You can easily avoid this potential bias by specifying your primary and secondary outcome measures prior to data collection. Apparently, though, some researchers will change their minds after designating these outcome variables and fail to report on some of the outcomes and/or add new outcomes that were not specified prior to data analysis. How often does this occur? A group of scientists at the Centre for Evidence-Based Medicine at the University of Oxford are trying to find out. Continue reading

PMean: A book review of my first book

I wrote a book about nine years ago and interest in it has largely died down. Perhaps I should write a second edition. Anyway, I ran across a book review that I had not seen before. It was published in 2006, but I never noticed it until now. Sarah Boslaugh wrote the review and it got published in MAA Reviews (MAA stands for Mathematical Association of America). It says some nice things, like calling my approach “fresh.” Dr. Boslaugh also likes my web site, according to the review. Continue reading

Recommended: Improving Bioscience Research Reporting: The ARRIVE Guidelines for Reporting Animal Research

A lot of people have adapted and updated the CONSORT Guidelines for reporting clinical trials to handle other types of research. One of these adaptations is the ARRIVE guidelines for reporting animal research. Many of these guidelines follow CONSORT quite closely, but there are details, such as documenting the species and strain of the experimental animals and describing the housing conditions, that are specific to animal experiments. Continue reading

Recommended: In search of justification for the unpredictability paradox

This is a commentary on a 2011 Cochrane Review that found substantial differences between studies that were adequately randomized and those that were not. The direction of the difference was not predictable, however, meaning that there was no consistent bias, on average, towards either overstating or understating the treatment effect. This leads the authors of the Cochrane review to conclude that “the unpredictability of random allocation is the best protection against the unpredictability of the extent to which non-randomised studies may be biased.” The authors of the commentary critique this conclusion on several grounds. Continue reading
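As a footnote of my own (not part of the review or the commentary), here is a toy simulation in Python of why a broken allocation process can push a treatment estimate in either direction, depending on which way the confounding runs. Every name and number in it is hypothetical.

    # A toy simulation (my own illustration) of bias from inadequate
    # randomization. The true effect is fixed, but the naive estimate
    # lands above or below it depending on the sign of the confounding.
    import random
    import statistics

    def naive_estimate(confounder_effect, n=20000, true_effect=1.0, seed=1):
        """Simulate a 'trial' where sicker patients usually get the
        treatment (broken allocation), then take the naive difference
        in mean outcomes between the two groups."""
        rng = random.Random(seed)
        treated, control = [], []
        for _ in range(n):
            severity = rng.gauss(0.0, 1.0)
            # Inadequate randomization: severity influences who gets treated.
            gets_treatment = rng.random() < (0.8 if severity > 0 else 0.2)
            outcome = (true_effect * gets_treatment
                       + confounder_effect * severity
                       + rng.gauss(0.0, 1.0))
            (treated if gets_treatment else control).append(outcome)
        return statistics.mean(treated) - statistics.mean(control)

    # True effect is 1.0, but the estimate is pushed in either direction:
    print(naive_estimate(confounder_effect=+1.0))  # well above 1.0 (overstated)
    print(naive_estimate(confounder_effect=-1.0))  # well below 1.0 (understated)

Since a real confounder’s direction and strength are usually unknown, the bias in a non-randomised comparison is unpredictable, which is the heart of the dispute between the reviewers and the commentators.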