Recommended: Tessera. Open source environment for deep analysis of large complex data

I have not had time to preview this software, but it looks very interesting. It takes large problems and converts them to a form suitable for parallel processing, not by changing the underlying algorithm, which would be very messy, but by splitting the data into subsets, analyzing each subset, and recombining the results. Such a "Divide and Recombine" approach should work well for some analyses, but perhaps not so well for others. It is based on the R programming language. If I get a chance to work with this software, I'll let you know what I think.
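As a rough illustration of the divide-and-recombine idea (this is not Tessera's actual interface, just a sketch in base R with made-up data), you can split a data set into subsets, fit the same model to each subset, and then average the coefficient estimates:

    # A minimal base-R sketch of divide and recombine (not Tessera's API):
    # split the data into subsets, fit the same regression to each subset,
    # then recombine by averaging the coefficient estimates.
    set.seed(42)
    n <- 10000
    dat <- data.frame(x = rnorm(n), block = rep(1:10, length.out = n))
    dat$y <- 2 + 3 * dat$x + rnorm(n)

    subsets <- split(dat, dat$block)                                # divide
    fits <- lapply(subsets, function(d) coef(lm(y ~ x, data = d)))  # analyze each piece
    Reduce(`+`, fits) / length(fits)                                # recombine (average)

For a simple linear regression like this, the averaged estimates come out close to what fitting the full data at once would give, which is the sense in which the recombination step works well for some analyses but perhaps not for others.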

Recommended: Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement

If you are writing up a paper that uses a complex regression model (complex meaning multiple independent variables), you need to document information that allows the reader to assess the quality of the predictions your model would produce. This paper provides a checklist of the items you need to report, and is an extension of the CONSORT guidelines to this particular type of research.

Recommended: In search of justification for the unpredictability paradox

This is a commentary on a 2011 Cochrane Review that found substantial differences between studies that were adequately randomized and those that were not. The direction of the difference was not predictable, however: the inadequately randomized studies showed no consistent average bias towards either overstating or understating the treatment effect. This led the authors of the Cochrane review to conclude that "the unpredictability of random allocation is the best protection against the unpredictability of the extent to which non-randomised studies may be biased." The authors of the commentary critique this conclusion on several grounds.

Recommended: Requiring fuel gauges. A pitch for justifying impact evaluation sample size assumptions

This blog entry from the International Initiative for Impact Evaluation discusses a common deficiency in research proposals sent to that organization: they rely too heavily on standardized effect sizes, which are impossible to interpret and often misleading. The authors also criticize the intraclass correlation coefficients (ICCs) included in the sample size justifications for many cluster-based or hierarchical research designs; these ICCs often seem to be pulled out of thin air. Because the ICC can be a hard number to obtain, they suggest that you consider a range of ICCs in your calculations or that you run a pilot study.
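To see why the assumed ICC matters so much, here is a small R sketch (the baseline sample size and cluster size are made-up numbers, not values from the blog entry) using the standard design effect for cluster designs, 1 + (m - 1) * ICC, evaluated over a range of ICCs:

    # Inflate a simple-random-sample size by the design effect for clustering.
    n_simple <- 128                    # subjects needed with no clustering (assumed value)
    m <- 30                            # subjects per cluster (assumed value)
    icc <- c(0.01, 0.02, 0.05, 0.10)   # a range of plausible ICC values

    design_effect <- 1 + (m - 1) * icc
    n_clustered <- ceiling(n_simple * design_effect)
    data.frame(icc, design_effect, n_clustered)

Even modest changes in the assumed ICC shift the required sample size substantially, which is why an ICC pulled out of thin air undermines the whole sample size justification.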