I attended a talk about a decade ago on the problems with for-profit publishing of scientific research and the need to aggressively adopt the open access publication model. It was a message I was ready for, because I had benefited greatly from citing open access resources on my website. I knew that if I cited an open access resource, anyone anywhere could look it up without needing access to a university library.
This article explains how the for-profit research journals (perhaps better described as a reader-pays model, in contrast to an author-pays model) developed a system that locked research libraries into their product and then hiked the price. The publishers then developed journal bundles that squeezed libraries further by forcing them into a take-it-all-or-leave-it-all system that devastated their budgets.
There is still a struggle between the reader-pays model of for-profit publishing and the author-pays model of open access publishing. I believe there is room for both approaches, though I would argue that we need to promote open access publishing more aggressively than we currently do.
This article provides very nice historical context for the development of for-profit publishing in scientific research. It perhaps oversimplifies things and may be a bit too harsh, but it is definitely worth reading.
As an ironic footnote, newspapers have been devastated by the Internet because of the expectations of readers that all of their content should be available for free. There is a note at the bottom of the Guardian article that reads: “Since you’re here we have a small favour to ask. More people are reading the Guardian than ever but advertising revenues across the media are falling fast. And unlike many news organisations, we haven’t put up a paywall – we want to keep our journalism as open as we can. So you can see why we need to ask for your help. The Guardian’s independent, investigative journalism takes a lot of time, money and hard work to produce. But we do it because we believe our perspective matters – because it might well be your perspective, too.”
Take some time to read this and think about it. I normally ignore pitches like this on Wikipedia and elsewhere, but the irony of citing a newspaper article available for free to criticize for-profit research publishing got to me, so I became a supporter of the Guardian at $6.99 per month.
I’m giving a short talk about the Kaplan-Meier curve and learned an interesting fact about the 1958 paper by Edward Kaplan and Paul Meier that introduced this curve: it is the 11th most cited research paper of all time. There’s a nice graphic in a Nature paper that lets you review the top 100 most cited papers of all time. A few other statistics papers appear on this list as well.
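For context, the curve from that 1958 paper is the Kaplan-Meier estimator of the survival function. Here is a minimal sketch in pure Python; the function name and the example data are my own illustrations, not anything from the talk or the original paper:

```python
# Minimal Kaplan-Meier estimator sketch (pure Python).
# times: event or censoring times; events: 1 = event observed, 0 = censored.
def kaplan_meier(times, events):
    data = sorted(zip(times, events))  # sort subjects by time
    n_at_risk = len(data)
    survival = 1.0
    curve = [(0.0, 1.0)]  # (time, S(t)) step points, starting at S(0) = 1
    i = 0
    while i < len(data):
        t = data[i][0]
        # Count events and total removals (events + censorings) at this time.
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            # Multiply by the conditional probability of surviving past t.
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

# Example: five subjects, with censoring at times 2 and 4.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
# curve is a list of (time, survival) step points, e.g. starting at (0.0, 1.0)
```

The curve only steps down at observed event times; censored subjects simply leave the risk set, which is what distinguishes this estimator from a naive fraction-surviving calculation.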
I reviewed a paper for PLOS One in 2014 and got a nice acknowledgment, but I also reviewed a paper for the same journal in 2015. Here’s the acknowledgment for that contribution. They’re still having a bit of trouble with alphabetization (Steve Simon should be the last “Simon” on the list, but it’s not). Still, it’s nice to have a public record of my small contribution.
Something came up in our department about a predatory open access journal that was soliciting support. All the appropriate warnings were made (there’s a nice explanation of predatory open access publishing on Wikipedia, if you’re curious). But I felt that I had to make a strong defense of the value provided by legitimate open access publishers. Here’s a summary of what I wrote.
The same blog that I highlighted below had a commentary about how clinicians almost never publish pre-prints of their work. This is in contrast to other fields, most notably astronomy, where pre-prints are the norm. If clinicians are reluctant, the Ingelfinger rule may be to blame.
I don’t do nearly enough peer reviewing, in part because it is a thankless, anonymous task. But one journal editor sent me a nice email pointing out that my name was listed, along with 80,000 other reviewer names, for helping out with peer review of an article in 2014 for PLOS ONE. If you click on the link in the article and go down about 61,000 lines, you’ll find my name. Caution: the list is not quite perfectly in alphabetical order (Simons and Simonton should come AFTER Simon).
If you are writing up a paper that uses a complex regression model (complex meaning multiple independent variables), you need to document information that allows the reader to assess the quality of the predictions that your model would produce. This paper provides a checklist of what to document, extending the CONSORT guidelines to this particular type of research.
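As a rough illustration of the sort of quantities such a write-up should report, here is a sketch in Python using only NumPy. The simulated data, variable names, and choice of summaries (R-squared and RMSE) are my own assumptions for illustration, not items taken from the checklist itself:

```python
import numpy as np

# Hypothetical data: two predictors plus an intercept column.
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Ordinary least squares fit of the multiple regression.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat
resid = y - y_hat

# Two basic summaries of predictive quality worth reporting.
ss_res = float(resid @ resid)
ss_tot = float(((y - y.mean()) ** 2).sum())
r_squared = 1 - ss_res / ss_tot    # proportion of variance explained
rmse = float(np.sqrt(ss_res / n))  # typical size of a prediction error

print(f"R-squared = {r_squared:.3f}, RMSE = {rmse:.3f}")
```

Reporting only the coefficients, without quality measures like these (ideally assessed on data not used for fitting), is exactly the gap such a checklist is meant to close.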