Tag Archives: Human side of statistics

Recommended: How to be more effective in your professional life

Doug Zahn has done a tremendous amount of work on what I like to call the human factors in statistical consulting. He summarizes some key ideas in this article. His humorous anecdote about his prized Mustang illustrates how easily all of us can be poor listeners. Pay special attention to Table 1, where he outlines the five steps you should always follow in any consulting interaction.

Recommended: When the revolution came for Amy Cuddy

This is one of the best articles I have ever read in the popular press about the complexities of the research process.

This article by Susan Dominus covers some high-profile research by Amy Cuddy. She and two co-authors found that your body language not only influences how others view you, but also how you view yourself. Striking a “power pose” (something like standing with your legs astride or putting your feet up on a desk) can improve your sense of power and control, and these subjective feelings are matched by physiological changes: your testosterone goes up and your cortisol goes down. Both of these, apparently, are good things.

The research team publishes these findings in Psychological Science, a prominent journal in the field. The article receives a lot of press coverage. Dr. Cuddy becomes the public face of this research, most notably by garnering an invitation to give a TED talk, where she does a bang-up job. Her talk becomes the second most viewed TED talk of all time.

But there’s a problem. The results of the Psychological Science publication do not get replicated. One of the other two authors expresses doubt about the original research findings. Another research team reviews the data analysis and labels the work “p-hacking”.

The term “p-hacking” is fairly new, but other terms, like “data dredging” and “fishing expedition”, have been around for a lot longer. There’s a quote attributed to the economist Ronald Coase that is commonly cited in this context: “If you torture the data long enough, it will confess to anything.” I have described it as “running ten tests and then picking the one with the smallest p-value.” Also relevant is this XKCD cartoon.
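
To make this concrete, here is a minimal simulation of the “run ten tests and pick the one with the smallest p-value” version of p-hacking. This is just a sketch of the general idea, not anything from the study discussed above; the simulation sizes are my own arbitrary choices, and the data are pure noise, so any “significant” result is a false positive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_tests, n = 10_000, 10, 30

false_positives = 0
for _ in range(n_sims):
    # Run ten independent two-sample t-tests on data with no real effect
    pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
             for _ in range(n_tests)]
    # The p-hacker reports only the best-looking of the ten tests
    if min(pvals) < 0.05:
        false_positives += 1

print(f"Chance of at least one 'significant' result: {false_positives / n_sims:.2f}")
```

With ten looks at pure noise, you get at least one p-value below 0.05 about 40% of the time (1 - 0.95^10 is roughly 0.40), a far cry from the advertised 5% error rate.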

If p-hacking is a real thing (and there’s some debate about that), then it is a lot more subtle than the quotes and cartoon mentioned above. You can find serious and detailed explanations in a FiveThirtyEight article by Christie Aschwanden and in this 2015 PLOS article by Megan Head et al.

If p-hacking is a problem, then how do you fix it? It turns out that there is a movement in the research world to critically examine existing research findings and to see if the data truly supports the conclusions that have been made. Are the people leading this movement noble warriors for truth or are they shameless bullies who tear down peer-reviewed research in non-peer-reviewed blogs?

I vote for “noble warriors”, but read the article and decide for yourself. It’s a complicated area, and every issue has more than one side to it.

One of the noble warriors/shameless bullies is Andrew Gelman, a popular statistician and social scientist. He comments extensively on the New York Times article at his blog; the post is worth reading, as are the many comments that others have left there. It’s also worth digging up some of his earlier commentary about Dr. Cuddy.

PMean: What does large mean when talking about negative values?

Dear Professor Mean, I saw a paper where the authors said that they wanted a diagnostic test with a large negative likelihood ratio, because it was important to rule out a condition. False negatives mean leaving a high-risk condition untreated. But don’t they mean that they want a diagnostic test with a small negative likelihood ratio?

Okay, I agree with you, but it’s an understandable mistake. Let’s quickly review the idea of likelihood ratios. A positive likelihood ratio is defined as Sn / (1-Sp), where Sn is the sensitivity of the diagnostic test and Sp is the specificity. For a diagnostic test with a very high specificity, you get a very large ratio, because you are putting a really small value in the denominator. For Sp=0.99, for example, you would get a positive likelihood ratio of 50 or more (assuming that Sn is at least 0.5).
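
If you want to check that arithmetic yourself, here is a quick sketch in Python. The function name and the particular values of Sn and Sp are mine, just for illustration.

```python
def positive_likelihood_ratio(sn, sp):
    """LR+ = Sn / (1 - Sp): the factor by which a positive test raises the odds of disease."""
    return sn / (1 - sp)

print(positive_likelihood_ratio(sn=0.5, sp=0.99))  # about 50
print(positive_likelihood_ratio(sn=0.9, sp=0.99))  # about 90
```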

The positive likelihood ratio is a measure of how much the odds of disease are increased if the diagnostic test is positive.

A negative likelihood ratio is defined as (1-Sn) / Sp. For a diagnostic test with a very high sensitivity, the negative likelihood ratio is very close to zero. For Sn=0.99, the negative likelihood ratio will be 0.02 or smaller, assuming that Sp is at least 0.5.
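
The same sort of sketch works for the negative likelihood ratio (again, the values are just for illustration):

```python
def negative_likelihood_ratio(sn, sp):
    """LR- = (1 - Sn) / Sp: the factor by which a negative test lowers the odds of disease."""
    return (1 - sn) / sp

print(negative_likelihood_ratio(sn=0.99, sp=0.5))  # about 0.02
print(negative_likelihood_ratio(sn=0.99, sp=0.9))  # about 0.011
```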

The negative likelihood ratio is a measure of how much the odds of disease are decreased if the diagnostic test is negative.
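
Here is what “increased” and “decreased” mean in practice: the post-test odds equal the pre-test odds times the likelihood ratio. A small sketch, with a pre-test probability of 10% that I picked purely for illustration:

```python
def posttest_probability(pretest_prob, lr):
    """Apply the rule: post-test odds = pre-test odds * likelihood ratio."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)  # convert the odds back to a probability

print(posttest_probability(0.10, 50))    # positive test, LR+ = 50: about 0.85
print(posttest_probability(0.10, 0.02))  # negative test, LR- = 0.02: about 0.002
```

So a positive test with LR+ = 50 moves a patient from a 10% chance of disease to about an 85% chance, and a negative test with LR- = 0.02 moves that same patient down to about a 0.2% chance.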

The two likelihood ratios should remind you of the acronyms SpIn and SnOut. SpIn means that if specificity is large, then a positive diagnostic test is good at ruling in the disease. This isn’t always the case, sadly, and for many diagnostic tests, the next step after a positive test is not to treat the disease, but to double check things using a more expensive or more invasive test.

SnOut means that if the sensitivity is large, then a negative diagnostic test is good at ruling out the disease. You can safely send the patient home in some settings, or start looking for other diseases in others.

That sounds great, but sometimes you are very concerned about false negatives, and you don’t want to send someone home if they actually have the disease. If you are worried about a cervical fracture, ruling out the fracture and sending someone home might lead to paralysis or death if you have a false negative. So you want to be very sure of yourself in this setting.

Now, with regard to the comment above, I think it is just a case of careless language. When the authors say “large negative likelihood ratio”, they should have said “extreme negative likelihood ratio”, meaning a likelihood ratio much, much smaller than one. I’ve done it myself when I talk about a correlation of -0.8 as being a “big” correlation because it is very far away from zero.

We tend to shy away from words like “small” when we talk about a negative likelihood ratio being much less than 1, because “small” in some people’s minds means “inconsequential” when the opposite is true. When I am careful in my language, I try to use the word “extreme” to mean very far away from the null value (1 for a likelihood ratio or 0 for a correlation) rather than “large” or “small”.

PMean: Getting out of the free consulting trap

Someone on the Statistical Consulting Section message board asked a question about how to handle a situation where a colleague was repeatedly asking for advice. How do you make the transition from offering free advice to getting paid as a consultant? There were lots of good answers, and here’s the suggestion that I offered.

Recommended: Why be an independent consultant?

I might as well recommend something that I wrote. This is a short article in Amstat News, the monthly newsletter of the American Statistical Association. I talk about all the reasons you wouldn’t want to be an independent consultant and the one big reason why you would: being in control.

Why secondary data analysis takes a lot longer

Someone posted a question noting that while most of the statistical consulting projects they worked on finished in a reasonable time frame, a few were outliers: they took a lot longer and required a lot more effort from the statisticians. They wondered whether these outliers had any common features, so they asked if anyone else had identified methodological features of projects that went overtime. I only had a subjective impression, but I thought it was still worth sharing.