Assessing Impact in Open Access and Conventional Journals

Recently, a lab mate of mine presented a paper at our weekly research meeting. She ended her analysis by saying she was skeptical of the methods and the rigor of the results because the paper came from the open access journal PLOS ONE. This got me thinking: even though scientists constantly demand data and literature references, which open access platforms could easily provide, the stigma is that only big-name, familiar journals are reliable. But is there a solid correlation between reliability and reputation? Surprisingly, not really. The culture of science, however, still leans heavily on prestige, with the implication that if you don't have a big publication, you probably don't do good science.

Here, I go through how journals are evaluated, the problems with prestigious publications, and why they may not be as reliable as their reputations suggest. Keywords: impact factor, publication bias, open access.


Impact Factor

Current Calculation of Standards

Every field has certain journals that are considered prestigious and influential, and not only in the sciences; the same is true in history or medicine, for example. Journals are most often evaluated by how frequently the articles they publish are cited elsewhere, a figure summarized in the Journal Impact Factor (JIF). For a given year, it is calculated by taking the citations received that year by everything the journal published in the previous two years and dividing by the number of articles the journal published in those two years. Say a biochemistry journal put out 200 papers about enzymes and other topics across 2013 and 2014, and those papers picked up 600 citations in 2015; its 2015 JIF would be 3. Citations are used as the standard of measurement because they can be determined objectively by counting references. Relative to other ways of judging how much a paper has affected the field, such as the length of the article or which journal it was originally published in, citations are a better measure of how the paper's content has percolated through the field.
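Written out, with invented counts just to make the arithmetic concrete:

$$
\mathrm{JIF}_{2015} \;=\; \frac{C_{2015}}{N_{2013} + N_{2014}} \;=\; \frac{600}{200} \;=\; 3.0
$$

where $C_{2015}$ is the number of citations received in 2015 by everything the journal published in 2013 and 2014, and $N_{2013} + N_{2014}$ is the number of articles it published in those two years.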

With that said, I feel like the JIF can be hard to interpret because it depends so much on the field a journal serves. Fields differ in size and citation habits; a niche journal about enzymes has far fewer potential citers than a broad biochemistry journal, so their impact factors are not directly comparable. A bigger drawback of measuring impact this way is the short two-year window: it often takes a while for a paper to really be appreciated for its novelty and content. For example, it took Ed Lewis, a Nobel Prize winner, six years to reach peak citations.

Effect on Publishing Environment

Many scientists treat big-name publishers as the gatekeepers of correct science, to the point that getting a paper into such a journal becomes the measure of successful science and the only real validation of the work. As such, having a paper in the limelight of a prestigious journal can make or break a postdoc's career and can mean the difference between employment and unemployment for a tenure-track professor at a research institute. That is the stigma and the social influence these journals have on the scientific community.

But how does this apply to existing science journals? The two most prominent general outlets are Science and Nature, but there are a substantial number of smaller, still reputable venues, such as the Journal of Biochemistry or the journals of the American Chemical Society, that have a more selective focus. In these names, we can start to see the aesthetic draw of publishing research in a journal that presents only the most striking findings about how the world works, the best of every field. There is an element of having years' worth of work validated when your paper about enzymes is held up as the creme de la creme of science.

Given this climate, in which big-name journals with their rigorous peer review and strict requirements are assumed to pave the road to good science, any other system can seem less trustworthy. As such, open access journals such as PLOS ONE are held in even lower esteem, and publishing in or citing them is traditionally not encouraged.

Let's now look at some data to get a visual and quantitative sense of the gap between big-name journals and open access publications. Below are figures and data from a pre-publication comparing journal citation distributions and JIFs from 2013-14. (I trust the data because it was presented at OpenCon, an open access conference, without any disagreement.)

[Two figures from the preprint; the first compares the number of articles each journal published.]

Journal      JIF (2013-2014)    % items below 2015 JIF
Science      34.7               75.5%
PLOS ONE     3.1                72.2%

Even though PLOS ONE publishes a comparable number of articles (top figure), those articles are cited significantly less often than articles from Science, leaving it with an impact factor an order of magnitude lower (table, middle column).

Beyond the raw JIF, the third column gives some perspective on what the impact factor actually says about individual articles: it shows the share of each journal's 2013 and 2014 articles that were cited fewer times than the journal's 2015 JIF. Articles from Science and PLOS ONE fall below their journal's average at nearly the same rate, roughly three quarters in both cases, which means the JIF is propped up by a small number of highly cited papers and says little about how any particular article will fare. On a per-article basis, the prestigious journal and the open access one look much the same. More importantly, it shows that big-name publishers are not necessarily associated with importance or scientific rigor.
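To see where that third column comes from, here is a minimal sketch of how a "percent of items below the JIF" figure can be computed. The citation counts are invented, not the preprint's actual data, and the lognormal distribution is just an assumption that mimics the long right tail real citation data tends to have.

```python
# Hypothetical sketch: the JIF is the mean citation count, and because
# citation distributions are highly skewed, most articles sit below that mean.
import numpy as np

rng = np.random.default_rng(1)

# Fake per-article 2015 citation counts for articles published in 2013-2014.
citations = rng.lognormal(mean=1.0, sigma=1.2, size=5000).astype(int)

jif = citations.mean()                      # the journal's impact factor
below = (citations < jif).mean() * 100      # share of articles cited less than it

print(f"JIF = {jif:.1f}; {below:.1f}% of articles are cited less than the JIF")
```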

Problems with Prestige

Publishing False Positives

The demand for surprise and innovation in a publication can pressure authors into making choices about their work that imply novel findings. This can lead to the publication of false positives, results that incorrectly indicate the presence of an effect that is not really there. Usually this happens when the test conditions were not set up properly (yielding a spurious signal) or when the data analysis is poor or biased. It shows up in the observation of what's called the decline effect, where the strength of evidence for a finding becomes weaker and less certain over time, suggesting that the first publications were indeed false positives.
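As a rough illustration of how false positives get through, here is a small simulation (my own toy example, not drawn from any of the studies discussed): every "experiment" below compares two groups with no real difference between them, yet about 5% still clear the usual p < 0.05 bar, and those are exactly the ones most likely to get written up.

```python
# Toy simulation: selective publication of "significant" results guarantees
# some false positives even when no real effect exists anywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 1000
looks_significant = 0

for _ in range(n_experiments):
    control = rng.normal(loc=0.0, scale=1.0, size=20)   # no true difference
    treated = rng.normal(loc=0.0, scale=1.0, size=20)
    _, p_value = stats.ttest_ind(control, treated)
    if p_value < 0.05:            # "novel, significant" finding
        looks_significant += 1

print(f"{looks_significant} of {n_experiments} null experiments look significant")
# Expect roughly 50, all of them false positives.
```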

Another way to frame the problem of overhyping positive data is with a term borrowed from economics, the winner's curse: the winner of an auction, not knowing the true value of the item, tends to be the bidder who overpaid for it. The analogy translates to science by swapping the winner for the publisher, the auctioned item for the data, and the overpayment for exaggerated results. Placing overestimated, extravagant results in the headlines of prestigious journals then leads to other damaging effects on science itself.

Notably, a high impact factor, the very reason these journals are treated as a benchmark, is not necessarily correlated with reliable data and results. Recalling the decline effect, publishing potential false positives can lead to a large number of retractions. Data collected in 2011 showed that the correlation between journal impact factor and retraction rate had a coefficient of 0.77, where 1 is the maximum.
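For readers unfamiliar with correlation coefficients, that 0.77 just means pairing each journal's impact factor with its retraction index and computing the Pearson correlation. The sketch below uses made-up numbers, not the values from Fang and Casadevall's figure.

```python
# Illustrative only: Pearson correlation between impact factor and retraction index.
import numpy as np

impact_factor    = np.array([5.8, 9.8, 31.4, 36.1, 53.3])   # invented values
retraction_index = np.array([0.2, 0.4,  0.9,  1.0,  1.4])   # invented values

r = np.corrcoef(impact_factor, retraction_index)[0, 1]
print(f"Pearson correlation: {r:.2f}")   # values near 1 mean a strong positive trend
```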

Journals analyzed were Cell, EMBO Journal, FEMS Microbiology Letters, Infection and Immunity, Journal of Bacteriology, Journal of Biological Chemistry, Journal of Experimental Medicine, Journal of Immunology, Journal of Infectious Diseases, Journal of Virology, Lancet, Microbial Pathogenesis, Molecular Microbiology, Nature, New England Journal of Medicine, PNAS, and Science (Ferric C. Fang and Arturo Casadevall, Infect. Immun. 2011;79:3855-3859).

Smaller journals, like the Journal of Biological Chemistry or Molecular Microbiology, are clustered toward the bottom left and have significantly lower retraction rates than Cell or Science. The journal with the highest impact, the New England Journal of Medicine (NEJM), also has the highest retraction index. I think this is because medicine is hugely influential, shaping how surgeries are performed and how health advice is given, yet studies on people come with a lot of variation and confounding factors that make diagnostics difficult and universal prescriptions risky. On top of that, the long-term effects of a treatment are not known until years later.

It is important to note that the correlation could be confounded: the pressure to publish flashy data may genuinely make high-ranking journals less reliable, but these journals are also looked at and read more often, which by itself increases the likelihood of an error being detected.

 

Slowing Scientific Progress

Publication in a selective journal suggests some sort of extra value that may not really reflect the research or the author. Accepting the content as valuable also leads to a kind of "follow the leader" behavior, where researchers in the field are swayed by the paper and its methods and drop their more unique or original pursuits for the ones the paper describes. It also creates a positive feedback loop, where articles in high-ranking journals (assumed to be highly selective and very important) are cited over and over again.

Despite all of this, the focus on journal rank still drives scientists to seek publication in these journals so their work feels validated by a prestigious scientific community. Getting a paper accepted is a long and grueling process: when a paper does not meet the standards it is rejected, and instead of starting over with a different publisher in the field, authors may keep editing and resubmitting until it is accepted. Whatever the journal, many rounds of revision and consultation with panels of reviewers must occur before publication, slowing scientific advancement.

Publishers of high-ranking journals can make their content seem more valuable by raising subscription prices and keeping articles closed access. Institutions and publishers, Peter Lawrence argues, have become too obsessed with bureaucracy at the expense of research. A lot of current research is funded by the NSF or the NIH, both of which are funded by taxpayers, yet much of the resulting content is hidden behind $40 access fees that last only 24 hours and cannot be shared. Public research institutions, too, have to pay for yearly journal access, and without these subscriptions scientists and students would not be able to do their work.

Conclusions & Moving Forward

Overvaluing monolithic paywalled publishers, as shown above, can do more harm to science than good. Such publishers do not openly share content with the public and other researchers, nor does publication in one guarantee long-term impact. Because they are so glorified, they can also narrow the research paths people are willing to take. Most importantly, the rush to publish and the pressure to produce novel findings can lead to incorrect results that have to be retracted later. Science, the systematic and objective study of how nature behaves, should not be judged by the name of the place where results are published, because that metric does not relate back to the quality of the work.

Instead of aiming for Nature or Science, publishing an article in the Journal of Biochemistry or an American Chemical Society journal delivers the content to a more targeted, specialized audience who can do more with the results. Such journals face less pressure to publish articles with obvious, flashy applications, and their results are therefore less biased. Even so, they share the problems of subscriptions and peer review that all traditional journals have. Perhaps this system, beyond just the stigma surrounding publishers, needs to be altered.

Even though open access publication and its peer review still need development, it can add a lot of value to the publishing process, and the platforms for it are steadily growing. As the PLOS ONE comparison showed, open access journals hold their own on impact and provide peer-reviewed content that can be trusted, provided the journal itself is reliable (which I have talked about here). In general, it should be the content of a paper, the methods, logic, and conclusions, that matters most. To change the stigma around publications, scientists and researchers need to realize that they should never trade quality for fame and should instead want to further scientific progress. As open access platforms grow, both in success and in recognition of their capacity to help science, the stigma around journals will hopefully fall away and science will improve.


I’d like to thank my boyfriend and fellow open access advocate, Aidan Sawyer, for editing this article, and most of my other work. Also my friend from RIT, James Sinka, for sending me this paper that helped me compile a lot of this information.

Want more frequent science and open access news? Follow me @annotated_sci on Twitter!
