I am drafting a policy on statistical support for research at my part-time job at UMKC. It is loosely based on standards at the University of California, Davis and the Kansas University Medical Center. An early draft appears below. I’ve gotten some suggestions that setting a minimum percentage effort is a bad idea. What do you think?
From time to time I help write NIH grants, and I need to keep front and center the criteria that NIH peer reviewers use when they evaluate grants. They look at five broad areas: significance, investigators, innovation, approach, and environment. This document explains what each of these five broad areas means. Continue reading
What percentage effort is reasonable for Biostatistics support on a research grant? The UC Davis Biostatistics Group recommends 10% as a bare minimum, 35-60% for straightforward projects with uncomplicated analyses, and 50-100%+ for large or complex projects. They give examples of large and complex projects: interim analyses, multi-site projects, development of novel statistical methods, and assembly of data from large, complex, or poorly documented administrative or survey data sets.
They also describe how to split the effort between a PhD Biostatistician, who supervises the overall effort, and an MS Biostatistician, who does most of the data management and statistical analysis.
Another point worth noting is that any grant listing less than 10% effort for a Biostatistician requires a special sign off. Continue reading
If you are writing a research grant, there are a lot of statistical issues that you need to consider. This guide, prepared by the American Statistical Association, highlights three areas: framing the problem, designing the study, and specifying the data analysis plan. It doesn’t talk enough about data management, but otherwise it is an excellent resource. Continue reading
I came across a question, “How does your institution incentivize researchers to write more grants?” that was posted a while ago. I felt it was too late to respond directly, but I did want to mention something in my blog about this. “Incentivize” is one of those awful words that used to be a noun (incentive) but has been changed to a verb to make it sound more trendy. That’s something to dislike from the very start, but I have an even greater gripe about incentivizing. Continue reading
Michael Lauer, the Deputy Director for Extramural Research at the United States National Institutes of Health, presents some interesting statistics on when people submit grants, showing that grants submitted earlier than the day of the deadline tend to fare slightly better in the review process. There’s one gross miscalculation on this page, but the message is still interesting. Continue reading
This article provides guidance for developing the “statistical considerations” section of a research grant. I normally do not use that term, and suggest separate sections on statistical methods, sample size justification, data management plan, etc. But that’s a quibble. This is very good practical advice, such as reminding you that you need to write both for the statistical reviewer and the non-statistician who is also reviewing the proposal. Continue reading
I usually do not recommend commercial products, as I know most of you have very limited funds. But when it comes to grants, you should consider paying for good training. The best grant writing class I ever took was from David Morrison, who is part of Grant Writers Seminars and Workshops. Also good are the seminars produced by the Grant Training Center. Details on both groups are listed below. Continue reading
I was reviewing a grant and the section on limitations and alternative strategies started off with the following sentence, “We do not anticipate any major limitations in conducting this research.” I suggested in my comments that this was a bad way to start off this section. Here’s why. Continue reading
This blog entry from the International Initiative for Impact Evaluation discusses deficiencies in many research proposals sent to that organization. The proposals rely too much on standardized effect sizes, which are impossible to interpret and often misleading. The authors also criticize the intraclass correlation coefficients (ICCs) included in the sample size justifications of many cluster-based or hierarchical research designs. The ICCs, they say, often seem to be pulled out of thin air. A good ICC estimate can be hard to come by, so they suggest that you consider a range of ICCs in your calculations or that you run a pilot study. Continue reading
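To see why the assumed ICC matters so much, recall the standard design effect for a cluster-randomized design, DE = 1 + (m − 1) × ICC, where m is the average cluster size. Here is a minimal sketch in Python, with hypothetical numbers, of how the required sample size inflates across a range of ICCs:

```python
# Sketch: how the assumed ICC inflates the sample size needed for a
# cluster-randomized design. The inputs are hypothetical, for illustration only.

def inflated_n(n_individual, cluster_size, icc):
    """Apply the design effect, DE = 1 + (m - 1) * ICC, to a sample size
    computed as if individuals were randomized independently."""
    design_effect = 1 + (cluster_size - 1) * icc
    return n_individual * design_effect

n_simple = 128   # hypothetical per-arm n from a standard power calculation
m = 20           # hypothetical average cluster size

for icc in (0.01, 0.05, 0.10, 0.20):
    print(f"ICC = {icc:.2f}: need about {inflated_n(n_simple, m, icc):.0f} per arm")
```

With 20 subjects per cluster, moving the assumed ICC from 0.01 to 0.20 roughly quadruples the required sample size, which is exactly why reporting calculations over a plausible range of ICCs, rather than a single number pulled out of thin air, is the safer practice.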