Monthly Archives: January 2014

Recommended: Will 2015 be the Beginning of the End for SAS and SPSS?

This is a provocative title, but the actual blog post is far more nuanced. By several measures, R is becoming more and more popular, but that hardly means that SAS or SPSS will disappear anytime soon. And even though I disagree with some of the methodology, the author, Bob Muenchen, deserves a lot of credit for backing up his claims with empirical data. Most statisticians, when they aren't analyzing data for other people, are just as prone as anyone else to relying on anecdotal evidence. Continue reading

PMean: Calculating statistics that have limitations

Someone asked what you should do if you get a request to compute a p-value on a post hoc analysis. In general, any statistic computed on a post hoc analysis is likely to be biased, but a p-value is especially troublesome as it is much more prone to misuse or misinterpretation. Should you refuse to calculate this p-value? Here’s what I said. Continue reading

Recommended: The Next Billionaire: A Statistician Who Changed Medicine

This is a nice profile of Dennis Gillings, a statistician who started a small company in the 1980s with only five employees. That company, Quintiles, is now the world's largest CRO (contract research organization). The article appeared in May 2013, shortly before the Quintiles IPO that would turn Dr. Gillings into a billionaire. Continue reading

PMean: The dget function in R is very slow

I made another rookie mistake in R. I have a program that needed to store a large matrix for later re-use. You can use the dput function to write a copy of the matrix to your local hard drive, and you can retrieve it later with dget. It turns out that dput ran quickly, but dget was painfully slow. The matrix was large (320 rows by 320 columns), but not so large that reading it back should take that long. It turns out that I didn't really understand how R works. Continue reading
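
As a rough sketch of the comparison (this is not my original program, and the file names are made up), you can contrast the text-based dput/dget round trip with R's binary serialization via saveRDS/readRDS:

m <- matrix(rnorm(320 * 320), nrow = 320, ncol = 320)

# dput writes the matrix out as R source code in a text file.
dput(m, file = "matrix.txt")
system.time(m1 <- dget("matrix.txt"))  # dget must parse all 102,400 values as R code

# saveRDS/readRDS use R's binary serialization format instead.
saveRDS(m, file = "matrix.rds")
system.time(m2 <- readRDS("matrix.rds"))

On a matrix this size, the parsing step in dget is the likely bottleneck; the binary round trip avoids it entirely.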

PMean: No power calculation for a Phase II trial

A discussion on the message board for the Statistical Consulting Section of the American Statistical Association started with a question about a Phase II trial. The questioner was a member of an Institutional Review Board and was reviewing a proposal for a Phase II clinical trial. The trial had a fairly small sample size with no justification for the choice of sample size. The questioner wanted to know if this was the norm for Phase II trials. Here are some of my thoughts, combined with a synthesis of other comments. Continue reading
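
For context, the kind of sample size justification reviewers typically look for in a comparative trial takes only a line of R; the response rates below are purely illustrative and are not taken from the proposal under review.

# Patients needed per arm to detect an improvement in response rate from
# 20% to 40% with 80% power at a two-sided 5% significance level.
power.prop.test(p1 = 0.20, p2 = 0.40, sig.level = 0.05, power = 0.80)

Whether a small Phase II trial needs this sort of formal calculation at all was, of course, the heart of the discussion.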

Recommended: Large randomized controlled trials are ready for retirement

Dean Ornish contributes his response to a series of invited essays on the topic "What Scientific Idea is Ready for Retirement?" His choice is the large randomized controlled trial. While I believe his criticism is too one-sided, he does raise some interesting points about the difficulty of using large trials to assess behavioral interventions. Continue reading