Jean Opsomer highlights the rise of dubious journals and conferences, and asks whether we, particularly new researchers, should be concerned:

Many of us regularly receive email invitations to submit articles to journals we have never heard of, to join editorial boards for those same journals (often in the same email!), or to attend conferences in exotic locales organized by societies with names that are close to, but not quite the same as, those we are familiar with. I had considered this a minor annoyance, part of the spam flotsam encountered while navigating the internet. But as a recent article in the New York Times makes clear, these email solicitations are actually the work of aggressive and growing industries, often but not always based in India and China, that are trying to “monetize” academic research output.

At best, these are upstart companies attempting to break into the publishing and conference industries by taking advantage of the online, open-access model. At worst, these are fraudulent sham operations trying to con unwary researchers out of article “publishing fees” and conference “registrations.” Regardless of which of those two extremes is closest to the truth, the rapid growth of this phenomenon has repercussions for anyone involved in academic research. Setting aside the most extreme cases, in which people are swindled out of money by these practices (a law-enforcement rather than an academic issue), I am concerned about the potential dilution of research across so many outlets with poor refereeing processes, ultimately making it difficult for authors, readers and university administrators to discern the quality of published results.

One particularly vulnerable group, in my opinion, is people just starting their research careers. They are often less savvy and under greater pressure to publish or present their work, and might therefore be tempted by some of these solicitations, especially those that come with a veneer of respectability. In the short term, this problem can be greatly alleviated through careful mentoring of junior researchers by their more senior colleagues, something the IMS should encourage.

As researchers, it also behooves us to refuse to lend our reputations to these organizations. As the New York Times article noted, there is a steady stream of invitations to join “editorial boards,” which might seem like a fairly harmless way to add an appointment to one’s resume. I also know of at least one case in which a well-regarded researcher was invited to write a “peer-reviewed” paper for one such journal and was offered a $1,000 fee for doing so. This was for a journal that normally charges hefty publishing fees, so the goal was clearly to jump-start the journal’s credibility and fend off criticism of the quality of the articles it publishes. The “peer review” consisted of a few sentences stating that the submission was very well written and was accepted for publication. While an editorial board appointment with little or no workload, or a fee for writing a paper, might seem like a good deal, I have no doubt that in the long run we are harmed as a discipline if we allow the line between legitimate and “pretend” peer-reviewed research to blur.

Publishing high-quality journals and hosting top-level scientific conferences are two core activities of the IMS. Institutions such as the IMS, as well as universities and libraries, should become more engaged in pushing back against the rapid growth of alternative outlets with loose quality standards. Some suggestions in that direction are the coordinated development of white- and blacklists, a common policy of refusing to link to materials from clearly predatory organizations, and a clearer articulation of standards for what it means for an article to be peer-reviewed. These are no small tasks, to be sure, and in the meantime I’ll continue adjusting my spam filters to be on the lookout for key words such as “Hindawi,” “Mehta Press” and “iiisconferences.org.”

Krzysztof Burdzy feels that this is not something for IMS members to worry about. He responds:

I applaud the New York Times article because it provides a public record of a significant social issue—predatory pseudo-scientific journals, conferences and related practices. I do not think that “predatory” journals are much of a problem for researchers. They are a problem for administrators and, therefore, they are of limited interest to IMS members.

Science has been produced at different quality levels since long before the advent of the Web and globalization. Second- (and third-) tier scientific journals existed when I first visited an academic library in the 1970s; I would bet that they existed much earlier than that. Second-tier journals played, and still play, a useful role. There are a lot of people who “do science” for a living, and not everybody was born an Einstein. All scientists should have a chance to publish their articles, as long as the articles conform to the formal standards of the given field (for example, mathematical papers must be based on rigorous logic). Articles that are not very exciting will naturally end up in second-tier journals.

I do not know of, and have not heard of, any scientist who published a paper in a second-rate journal by mistake. There are very few people, if any, who hold a PhD yet cannot tell the difference between journals such as the Annals of Probability and the Annals of Statistics on the one hand, and second-rate publications, printed or electronic, on the other. If you go to a restaurant or buy a car, do you believe that all restaurants serve food of the same quality, or that all cars are equally reliable? Why, then, would you expect all scientific journals to be of the same quality?

Science is a component of the general culture. There are “predatory” services that charge aspiring composers money to record their songs, and there are vanity publishers that will publish an author’s fiction for a fee. The trend now extends to science.

Predatory and, more generally, second-tier journals pose a real problem for science administrators. Some administrators used to evaluate researchers by counting their publications. In response, some researchers learned to inflate their publication lists with low-quality papers published in second-rate journals. Then some administrators tried to improve their evaluation methods by not only counting papers but also taking into account the “impact factor” of the journals. In response, second-tier journals learned how to inflate their citation rates artificially. I do not have any good advice for administrators. Personally, I doubt that the evaluation of researchers can be automated, but I will let administrators worry about that.

In conclusion, I am not worried. The probability that junior (or senior) members of the IMS will publish their significant results in second-rate journals by mistake is close to zero.

What do you think? Add your comments below to continue the debate…