Tag Archives: Bayesian statistics
PMean: How to run your first Bayesian analysis using JAGS software in R
This page is moving to a new website.
Someone wanted to know how to run a Bayesian data analysis for a two-group longitudinal study. There are several ways you can do this, but I had to confess I did not have an immediate answer. So I took some time to figure out how to do this using JAGS software inside of R. I’ve done a fair amount of work in JAGS, but nothing close to a longitudinal design. The general principle is to start with something easy and work your way slowly up to the final analysis. Continue reading
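As a flavor of that "start with something easy" step, here is a minimal sketch of the kind of rjags workflow involved. The simulated two-group data and the simple mean-difference model are my own illustration, not the analysis from the full post, which works up to a longitudinal design.

# A minimal sketch, assuming the rjags package and a working JAGS installation.
# The data and the two-group model are illustrative only.
library(rjags)

set.seed(42)
y <- c(rnorm(20, mean = 10, sd = 2), rnorm(20, mean = 12, sd = 2))
grp <- rep(1:2, each = 20)

model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu[grp[i]], tau)   # group-specific means, common precision
  }
  mu[1] ~ dnorm(0, 0.0001)
  mu[2] ~ dnorm(0, 0.0001)
  tau ~ dgamma(0.001, 0.001)
  diff <- mu[2] - mu[1]             # quantity of interest
}"

jm <- jags.model(textConnection(model_string),
                 data = list(y = y, grp = grp, N = length(y)),
                 n.chains = 3)
update(jm, 1000)                     # burn-in
post <- coda.samples(jm, c("mu", "diff"), n.iter = 5000)
summary(post)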
PMean: Can you recommend an introductory book on Bayesian Statistics?
I got an email asking for a recommendation for an introductory book on Bayesian Statistics from someone who recently graduated from our program. It’s kind of a difficult request because the mathematics needed to understand Bayesian statistics is not trivial. Here’s what I recommended. Continue reading
PMean: Too many different prior choices for the hierarchical beta binomial model
I’m interested in studying how Bayesian hierarchical models work and I want to start with what seems like the simplest case, the hierarchical beta-binomial model. It’s actually not that simple, it seems. There are too many choices for the hyperprior that you use in this setting. Continue reading
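To make the choice concrete, here is one way to write the hierarchical beta-binomial model in JAGS syntax. The mean-and-concentration parameterization and the particular hyperpriors below are just one of the many options, shown only to illustrate where the choice enters; they are not a recommendation from the full post.

# One possible parameterization; other choices (priors on log(alpha),
# log(beta), on alpha + beta, etc.) are what make this setting tricky.
beta_binomial_model <- "
model {
  for (j in 1:J) {
    y[j] ~ dbin(p[j], n[j])        # binomial counts in group j
    p[j] ~ dbeta(alpha, beta)      # group-level success probabilities
  }
  # hyperprior expressed through the mean and concentration of the beta
  alpha <- mu * kappa
  beta  <- (1 - mu) * kappa
  mu    ~ dunif(0, 1)
  kappa ~ dgamma(0.1, 0.1)         # one of several defensible choices
}"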
Recommended: Bayesian computing with INLA
This page promotes a new approach to a broad class of models (spatio-temporal models, latent variable models, mixed models) using a fast approximation to the Bayesian solution. It runs under R and appears to handle very large datasets. I have not had a chance to try this, but it looks very interesting. Continue reading
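For readers curious what the interface looks like, a minimal call might look something like the sketch below. I have not run INLA myself, so treat the formula, data frame, and family here as hypothetical placeholders.

# A minimal sketch, assuming the INLA package is installed
# (it is distributed from www.r-inla.org rather than CRAN).
library(INLA)

d <- data.frame(y = rpois(100, lambda = 3),
                x = rnorm(100),
                site = rep(1:10, each = 10))

# Poisson regression with an iid random effect for site
fit <- inla(y ~ x + f(site, model = "iid"),
            family = "poisson",
            data = d)
summary(fit)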
Recommended: Editorial (Basic and Applied Social Psychology)
Recommended does not always mean that I agree with what’s written. In this case, it means that this is something important to read because it offers a perspective worth hearing. This editorial declares that all p-values and all confidence intervals are so fatally flawed that they are banned from all future publications in the journal. The editorial goes further and criticizes most Bayesian methods because of problems with the “Laplacian assumption.” The authors have trouble with some of the ambiguities associated with creating a non-informative prior distribution, that is, a prior distribution that represents a “state of ignorance.” They will accept Bayesian analyses on a case-by-case basis. Throwing out most Bayesian analyses, all p-values, and all confidence intervals makes you wonder what they will accept. They suggest larger than typical sample sizes, strong descriptive statistics (which they fail to define), and effect sizes. They believe that “banning the NHSTP will have the effect of increasing the quality of submitted manuscripts by liberating authors from the stultified structure of NHSTP thinking thereby eliminating an important obstacle to creative thinking.” The issue is worth debating, though I think these recommendations are far too extreme. Continue reading
PMean: What is the probability of a probability of one?
Someone wrote asking me about a variation of the “Rule of Three”. This rule says that if you observe zero events in n trials, an upper 95% confidence limit for the probability of the event is approximately 3/n. So suppose you operated on 10 patients and none of them died after surgery. Then you would be 95% confident that the mortality rate is 30% (3/10) or less. This person asked, “Suppose I repeatedly sample from a population and every patient in the sample was a G. How likely is it that the entire population is Gs?” This flips the problem around, and the rule of three answer would be equivalent to a statement like “the probability of survival is 97% or greater.” But this person wanted an estimate of the probability that the probability in the population is exactly 1. Continue reading
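The arithmetic behind the rule is easy to check in R; the short calculation below uses the surgery numbers from the paragraph above and is only a sketch of the ideas, not the analysis from the full post.

# Upper 95% confidence limit for an event probability when 0 events are
# seen in n trials: the exact limit solves (1 - p)^n = 0.05, and the
# rule of three approximates it by 3/n.
n <- 10
exact <- 1 - 0.05^(1 / n)       # about 0.259
rule_of_three <- 3 / n          # 0.3
c(exact = exact, rule_of_three = rule_of_three)

# A Bayesian version with a uniform Beta(1, 1) prior: after n Gs in a
# row the posterior for the proportion of Gs is Beta(n + 1, 1).  Being
# continuous, it assigns probability zero to the proportion equaling
# exactly 1, but you can compute, say, P(proportion > 0.99):
pbeta(0.99, shape1 = n + 1, shape2 = 1, lower.tail = FALSE)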
PMean: Using BUGS within the R programming environment
I am giving a talk today for the Kansas City R Users group about BUGS (Bayesian inference Using Gibbs Sampling). I have already written extensively about BUGS and the interface to BUGS from within the R programming environment, and you can find those pages on my category page for Bayesian statistics. Here is a quick overview of why you might want to use BUGS and how you would use it. I’ve included links to the relevant pages on my website so you can explore this topic further on your own. Continue reading
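For a rough idea of what calling BUGS from R looks like, here is a sketch using the R2OpenBUGS package. The trivial model, made-up data, and file name are placeholders of my own, not the examples from the talk, and the code assumes an OpenBUGS installation.

# A rough sketch, assuming R2OpenBUGS and OpenBUGS are installed.
library(R2OpenBUGS)

# write a trivial BUGS model (estimate a normal mean) to a file
writeLines("
model {
  for (i in 1:N) { y[i] ~ dnorm(mu, tau) }
  mu ~ dnorm(0, 0.0001)
  tau ~ dgamma(0.001, 0.001)
}", "model.txt")

y <- rnorm(30, mean = 5, sd = 2)    # made-up data for illustration

fit <- bugs(data = list(y = y, N = length(y)),
            inits = function() list(mu = 0, tau = 1),
            parameters.to.save = c("mu", "tau"),
            model.file = "model.txt",
            n.chains = 3,
            n.iter = 10000)
print(fit)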
PMean: Is Possibility Theory better than Probability Theory?
This page has moved to a new website.
PMean: The cost of a bad prediction
Paul Krugman wrote up an interesting application of Bayes’ Theorem on his New York Times blog. I want to adapt his example and expand it a bit. Continue reading
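The arithmetic behind this kind of example is easy to reproduce in R. The prior and error rates below are made-up numbers for illustration only; they are not the figures from Krugman’s post or from my adaptation of it.

# Bayes' theorem with made-up numbers: how much should one bad
# prediction shift your belief that a forecaster is reliable?
prior_reliable    <- 0.50   # prior probability the forecaster is reliable
p_miss_reliable   <- 0.10   # chance a reliable forecaster blows this call
p_miss_unreliable <- 0.60   # chance an unreliable forecaster blows it

posterior_reliable <- (p_miss_reliable * prior_reliable) /
  (p_miss_reliable * prior_reliable +
   p_miss_unreliable * (1 - prior_reliable))
posterior_reliable           # about 0.14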