Dear Professor Mean, I ran a statistical test in SPSS and got a p-value of .000. I re-ran the same data in Microsoft Excel and got a p-value of 3.9433E-9. I know from scientific notation that this is the same as 0.0000000039433. Why are these numbers different? Continue reading
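The two numbers are the same value displayed differently: SPSS rounds to three decimal places by default, so any p-value below 0.0005 prints as ".000", while Excel switches to scientific notation. A minimal Python sketch (using the p-value from the question) shows all three faces of the same number:

```python
# The same tiny p-value, printed three ways.
p = 3.9433e-9  # Excel's scientific notation

# SPSS's default output rounds to three decimal places,
# so anything below 0.0005 displays as "0.000".
print(f"{p:.3f}")    # SPSS-style display: 0.000
print(f"{p:.4E}")    # Excel-style scientific notation: 3.9433E-09
print(f"{p:.13f}")   # full decimal expansion: 0.0000000039433
```

A good habit is to report such results as "p < 0.001" rather than "p = .000", which wrongly suggests the p-value is exactly zero.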

# Tag Archives: Hypothesis testing

# Recommended: Proving the null hypothesis in clinical trials

I’m attending a great short course on non-inferiority trials, and the speaker provided a key reference of historical interest. This is the reference that got the statistics community interested in the concept of non-inferiority. The full text is behind a paywall, but you can look at the abstract. A footnote cites an earlier paper, Dunnett and Gent 1977 (also trapped behind a paywall), that addressed this problem. Continue reading

# PMean: Looking inside the brains of scientists

I found an interesting research study that shows what happens inside the brains of scientists as they view statistical graphs of the type commonly used in peer-reviewed research. I don’t have the citation in front of me, but it was published in a very prominent research journal. Here’s a brief summary of the research. Continue reading

# Recommended: Points to consider on switching between superiority and non-inferiority

One of the most confusing aspects of medical research is the difference between non-inferiority and superiority trials. This article explains in simple terms what the two types of trials are. Then it covers the desire of many researchers to switch from a non-inferiority trial to a superiority trial or vice versa. In general, if you would like to make the claim of superiority if the data justifies it, or to fall back on a claim of non-inferiority if you must, you are best off designing a high-quality non-inferiority trial. The extra methodological rigor and the typically larger sample sizes that come with a non-inferiority trial make the transition from a non-inferiority hypothesis to a superiority hypothesis much smoother than the reverse. A high-quality non-inferiority trial includes pre-specifying the margin of non-inferiority, demonstrating adequate power for the non-inferiority hypothesis, and justifying that the control group has demonstrated efficacy in previous trials. You need to show sufficient methodological rigor in your research design to establish that a non-inferiority finding is not just caused by an insensitive research design. Finally, you need to consider a “per protocol” analysis for the non-inferiority hypothesis, but switch to an “intention to treat” analysis for the superiority hypothesis. Continue reading
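One reason the switch from non-inferiority to superiority is smooth is that both questions can be answered from the same confidence interval. Here is a minimal sketch (the function name, numbers, and margin are hypothetical, not from the article) of the usual confidence-interval approach: the new treatment is non-inferior if the lower bound of the interval for the difference stays above the negative of the pre-specified margin, and superior if that same lower bound is above zero.

```python
# Sketch of the confidence-interval approach to non-inferiority.
# All names and numbers below are hypothetical illustrations.

def noninferiority_ci(mean_new, mean_ctrl, se_diff, margin, z=1.96):
    """Two-sided 95% CI for (new - control).

    Non-inferior if the lower bound exceeds -margin.
    """
    diff = mean_new - mean_ctrl
    lower = diff - z * se_diff
    upper = diff + z * se_diff
    return lower, upper, lower > -margin

# Hypothetical trial: a pre-specified non-inferiority margin of 2 units.
lower, upper, non_inferior = noninferiority_ci(
    mean_new=9.5, mean_ctrl=10.0, se_diff=0.6, margin=2.0)

# The same interval also answers the superiority question:
superior = lower > 0
```

With these made-up numbers the interval is roughly (-1.68, 0.68): the lower bound is above -2, so non-inferiority holds, but it is not above 0, so superiority does not.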

# Recommended: Editorial (Basic and Applied Social Psychology)

Recommended does not always mean that I agree with what’s written. In this case, it means that this is something that is important to read because it offers an important perspective. And this editorial offers the perspective that all p-values and all confidence intervals are so fatally flawed that they are banned from all future publications in this journal. The editorial goes further to criticize most Bayesian methods because of the problems with the “Laplacian assumption.” The editorial authors have trouble with some of the ambiguities associated with creating a non-informative prior distribution, that is, a prior distribution that represents a “state of ignorance.” They will accept Bayesian analyses on a case by case basis. Throwing out most Bayesian analyses, all p-values, and all confidence intervals makes you wonder what they will accept. They suggest larger than typical sample sizes, strong descriptive statistics (which they fail to define), and effect sizes. They believe that “banning the NHSTP will have the effect of increasing the quality of submitted manuscripts by liberating authors from the stultified structure of NHSTP thinking thereby eliminating an important obstacle to creative thinking.” It’s worth debating this issue, though I think that these recommendations are far too extreme. Continue reading
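For readers unfamiliar with the effect sizes the editorial endorses, here is a minimal sketch (with made-up data, not from the editorial) of one common choice, Cohen's d for two independent groups, computed from a pooled standard deviation:

```python
# Sketch of one effect size the editorial would accept in place of a
# p-value: Cohen's d for two independent groups. Data are hypothetical.
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical measurements from two small groups:
d = cohens_d([5.1, 6.2, 5.8, 6.5, 5.9], [4.2, 4.8, 5.0, 4.5, 4.9])
```

Unlike a p-value, d does not shrink toward "significance" as the sample grows; it estimates how large the difference is in standard-deviation units, which is arguably closer to what most readers want to know.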

# Recommended: P-Values

Randall Munroe, author of the xkcd comic strip, often comments on statistics. This cartoon shows how p-values are typically interpreted. Continue reading

# PMean: Calculating statistics that have limitations

Someone asked what you should do if you get a request to compute a p-value on a post hoc analysis. In general, any statistic computed on a post hoc analysis is likely to be biased, but a p-value is especially troublesome as it is much more prone to misuse or misinterpretation. Should you refuse to calculate this p-value? Here’s what I said. Continue reading