I ran across a nice discussion of how to write the results section of a research paper, but it has one comment about the phrase “trend towards significance” that I had to disagree with. So I wrote a comment that they may or may not end up publishing (note: it does look like they published my comment, but it’s a bit tricky to find).
Here’s what I submitted. Continue reading
This is a new effort to get data out into the open for others to use. A data note can describe data that was never published, or it can serve as an addendum documenting data used in another publication. This is just getting started, but it could end up being a great teaching resource. Continue reading
There is more than one way to approach a data analysis, and some approaches lead to easier modifications and updates and help make your work more reproducible. This paper describes the steps the authors recommend based on years of teaching Software Carpentry and Data Carpentry classes. One of the software products mentioned in this article, OpenRefine, looks like a very interesting way to clean up messy data while leaving a well-documented trail. Continue reading
Like a lot of public universities, UMKC is facing serious financial difficulty, and the administration is asking faculty members for advice on how to address the budget shortfall. Not being the bashful type, I suggested that we stop paying commercial software vendors and commercial journal publishers and rely instead on open source alternatives. Here are the details of my letter. Continue reading
I attended a talk about a decade ago on the problems with for-profit publishing of scientific research and the need to aggressively adopt the open source publication model. It was a message I was ready for, because I had benefited greatly from citing open source resources on my website. I knew that if I cited an open source resource, anyone anywhere could look up that resource. They didn’t need access to a University Library.
This article explains how the for-profit research journals (perhaps better described as a reader-pays model, in contrast to an author-pays model) developed a system that locked in research libraries to their product and then hiked the price. Then they developed journal bundles that further squeezed libraries by forcing them into a take-it-all-or-leave-it-all system that devastated their budgets.
There is still a struggle between the reader-pays model of for-profit publishing and the author-pays model of open access publishing. I believe there is room for both approaches, though I would argue that we need to promote open access publishing more aggressively than we currently do.
This article provides a very nice historical context to the development of for-profit publishing in scientific research. It oversimplifies things, perhaps, and may be a bit too harsh, but it is definitely worth reading.
As an ironic footnote, newspapers have been devastated by the Internet because of the expectations of readers that all of their content should be available for free. There is a note at the bottom of the Guardian article that reads: “Since you’re here we have a small favour to ask. More people are reading the Guardian than ever but advertising revenues across the media are falling fast. And unlike many news organisations, we haven’t put up a paywall – we want to keep our journalism as open as we can. So you can see why we need to ask for your help. The Guardian’s independent, investigative journalism takes a lot of time, money and hard work to produce. But we do it because we believe our perspective matters – because it might well be your perspective, too.”
Take some time to read this and think about it. I normally ignore pitches like this on Wikipedia and elsewhere, but the irony of citing a newspaper article available for free to criticize for-profit research publishing got to me, so I became a supporter of the Guardian at $6.99 per month.
I’m giving a short talk about the Kaplan-Meier curve and found out an interesting fact about the 1958 paper by Edward Kaplan and Paul Meier that introduced this curve. It represents the 11th most cited research paper of all time. There’s a nice graphic in a Nature paper that allows you to review the top 100 most cited papers of all time. There are a few other statistics papers on this list as well. Continue reading
I reviewed a paper for PLOS One in 2014 and got a nice acknowledgment, and I also reviewed a paper for the same journal in 2015. Here’s the acknowledgment for that contribution. They’re still having a bit of trouble with alphabetization (Steve Simon should be the last “Simon” on the list, but he isn’t). Still, it’s nice to have a public record of my small contribution. Continue reading
Something came up in our department about a predatory open access journal that was soliciting support. All the appropriate warnings were made (there’s a nice explanation of predatory open access publishing at Wikipedia, if you’re curious). But I felt that I had to make a strong defense of the value provided by legitimate open access publishers. Here’s a summary of what I wrote. Continue reading
The same blog that I highlighted below had a commentary about how clinicians almost never publish pre-prints of their work. This is in contrast to other fields, most notably astronomy, where pre-prints are the norm. If clinicians are reluctant, the Ingelfinger rule may be to blame. Continue reading
I don’t do nearly enough peer reviewing, in part because it is a thankless, anonymous task. But one journal editor sent me a nice email pointing out that my name was listed, along with 80,000 other reviewer names, for helping out with peer review of an article in 2014 for PLOS ONE. If you click on the link in the article and go down about 61,000 lines, you’ll find my name. Caution: the list is not quite in perfect alphabetical order (Simons and Simonton should come AFTER Simon). Continue reading