This last lesson provides you with some tips about the journal publishing industry that might be pretty useful to know about if you go on to graduate school – but also for your career. For this lesson to make sense, you may want to first review some basic information about journals from the Information Literacy Basics tutorial.
A publisher is a company, society, or organization that accepts manuscripts from authors, provides editing services, and manages the layout, printing, web-hosting, advertising, and distribution of the source in its finished form. There are different types of publishers that specialize in producing different types of sources. For example, the New York Times Company is a newspaper publisher. Plus, there are magazine publishers, such as Condé Nast, and book publishers, such as Penguin Books. There are also publishers that produce journals and you'll read about four different types of journal publishers below. Note that many publishers produce more than one type of source; for example, the publisher Elsevier produces both journals and books.
Journal publishers manage the following details - and more - in the journal publication process:
ensure there is an editorial staff for each journal they publish
accept and process journal article manuscripts from authors
organize and manage the editing process for manuscripts - including peer review and copy editing
Most of this work is done by staff at the publishing company, but the work of peer review may be done by outside scholars rather than employees of the publishing company.
Journal publishers develop new journals when they perceive there is a market. For example, before the discipline of nanotechnology emerged, there were no nanotechnology journals. But as this discipline came into existence and grew, publishers created nanotechnology journals for researchers in this field to publish in.
Company publishers operate for a profit. Many of them produce scholarly books and/or conference papers in addition to journals. Company publishers that specialize in science and applied science journals include Springer, Elsevier, Wiley, and SAGE.
Society publishers are associated with professional organizations or societies, and they usually produce several different types of sources besides journals, just like company publishers. Society publishers that specialize in science and applied science journals include the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), and the American Chemical Society (ACS).
Most publishers recoup their costs and generate profits by charging fees – such as subscription fees – to anyone who wants to read content published in their journals. But open-access publishers operate under a different economic model. In this model, a payment is made up front to get the manuscript published. This payment is called a publication fee or an article processing charge, and it's often paid out of the author's research grant funding or with funds made available by the author's employer. Once the manuscript is published, anyone can access it, because content in open-access journals is free for anyone to read without paying a subscription. Most open-access publishers specialize in producing journals. Examples include the Public Library of Science (PLOS) and BioMed Central (BMC).
Predatory publishers produce predatory journals, which are not legitimate journals. Predatory publishers prey upon faculty at academic institutions who are anxious to author more publications. Why are faculty anxious about this? Most academic employers regard publishing as a measure of a faculty member's productivity, so publishing very few articles is equivalent to being unproductive and could lead to losing your job. If you've heard the saying "Publish or perish!" that's what it refers to.
Predatory publishers send emails to faculty at academic institutions encouraging them to send in article manuscripts and promising quick acceptance and publication of the author's manuscript. The author must pay a publication fee to get their article published, just as they would with an open-access publisher, but that's where the similarity ends. Predatory publishers are just interested in collecting publication fees from authors who are eager to increase their number of publications. Predatory publishers don't bother to ensure that the articles they publish are high-quality or even valid, in contrast with legitimate publishers who employ teams of editors and utilize a legitimate peer-review process to ensure quality.
Now that you are aware of predatory journals, you'll probably want to know how to identify them so you can avoid them! Here are a couple of ways to check if a journal is predatory:
1. Run a Google search on predatory journal lists. You'll get links to several lists in your results; just check those lists to see if the journal you are wondering about turns up on any of them.
2. Try a Google search on the name of the journal along with the word predatory to see what you can learn about it. It's usually best to put the name of the journal in quotes in your search.
Suppose you are searching Google Scholar and you come across an article that you are suspicious might be from a predatory journal. You can try looking up the article in a database like PubMed or Web of Science to see if it appears there. If you find it there, you can feel confident it is not an article from a predatory journal. If you don't find it there, that doesn't mean it is from a predatory journal, but you may want to investigate it further using the techniques explained above, or by asking for the opinion of an expert in your field.
If you search databases like PubMed, Web of Science, Compendex, and CINAHL - as well as many of the other databases found on the Cline Library's website - you'll avoid retrieving results that are from predatory journals. The producers of these databases try to ensure that content from predatory journals is excluded, so that predatory articles won't turn up in your results. In other words, these sorts of databases are curated to ensure they have quality content.
Google Scholar, on the other hand, is not curated. So, if you search Google Scholar, you may find predatory journal articles in your results. If you come across a Google Scholar result and need to check if it is from a predatory journal, then use the instructions in the section above.
And now we'll move on to discuss certain types of statistics that are collected to assess the scholarly impact that an individual journal article - or an entire journal - has on the research community. One of these statistics measures the number of times an article has been cited; we refer to these as the article's citation metrics. Before we get started talking more about citation metrics, if you don't feel all that familiar with citations and citing, then you should first look at Lesson 7, Using and citing information sources, in the Information Literacy Basics tutorial.
The number of times an individual article has been cited by other authors in their publications constitutes that article's citation metrics. When an article has been cited a lot - that is to say, it's a highly-cited article - it's usually because the content and findings have been useful to lots of other researchers; otherwise, they wouldn't have cited it. So, in general, it's considered prestigious for an author to publish an article and then have their article cited a lot.
Keep in mind though, that if an article is highly-cited, it might not be because it is high-quality and/or especially useful. It could get cited a lot because it tackles a controversial or contentious subject. Furthermore, sometimes articles that are terribly flawed get cited a lot as examples of failures (or, how not to do research). So, a highly-cited article is not always an excellent article; nevertheless - in general - a high citation count has come to be an indication of prestige.
Citation metrics for articles change over time. When an article is first published, it'll have a citation count of zero until other researchers read it, use its contents, and then cite it in their publications. As time passes and more researchers cite the article, the article's citation count will go up.
Some databases provide article citation metrics. These include Google Scholar, Web of Science, and Scopus.
You've learned that not all journals are equal: some are legitimate, while others are predatory. But even among legitimate journals, there's a whole spectrum ranging from those that are extremely prestigious to publish in, to those that are lower quality and less prestigious. The prestige of a journal is based on how often the articles published in that journal are cited. So, journals containing articles that tend to be cited a lot have a higher metric (and thus more prestige) than journals that contain articles that are cited less.
This metric has a name - it's usually called the journal impact factor, but it can go by other names depending on who is calculating the metric and how it is calculated (other names include CiteScore, SCImago Journal Rank, Eigenfactor, etc.). A journal's impact factor is periodically recalculated and adjusted. So, over time, if the journal starts publishing articles that are cited less, then the journal's impact factor (and thus prestige) will drop.
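To make this more concrete, here's a minimal sketch of how the classic two-year impact factor is calculated. The journal and the numbers are invented for illustration, and other metrics (like CiteScore and SJR) use different citation windows and weightings.

def two_year_impact_factor(citations_this_year, citable_items_prior_two_years):
    # Classic two-year impact factor: citations received this year
    # to items the journal published in the previous two years,
    # divided by the number of citable items from those two years.
    return citations_this_year / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2024 to its 2022-2023
# articles, of which there were 400 -> impact factor of 3.0
print(two_year_impact_factor(1200, 400))  # 3.0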
Journals with a high impact factor have editorial teams who are very choosy about the articles they accept for publication. These editors try to only publish articles that look like they will be especially groundbreaking, useful, and influential - and thus will get cited a lot. That way they can ensure the impact factor for their journal stays high and doesn't drop.
But how is it that these prestigious, high-impact journals can be so choosy and publish only the best, most interesting articles? This goes back to the publish-or-perish imperative. For faculty to keep their jobs, get promoted, and/or be competitive for other jobs, not only do they need to publish a lot, they also need to publish in the most prestigious journals possible. So, if you are in academia, your reputation, job prospects, tenure and promotion prospects, and salary can depend not only on how much you publish, but also where you publish.
Since everyone wants to boost their career prospects by publishing their work in the most prestigious journals, those journals get thousands of article manuscript submissions and can be very choosy and only publish manuscripts that look like they will be cited by lots of other researchers. So, prestigious journals can select and publish the highest-quality and most groundbreaking articles. You might say these journals have a monopoly on publishing the best articles.
This monopoly creates opportunities for exploitation. Since prestigious, high-impact journals publish lots of high-quality, useful articles, it's very important for academic libraries to subscribe to those journals because everyone wants to read the articles published in them. So, publishers will charge very high subscription rates for their most prestigious journals, knowing that libraries will feel a lot of pressure to subscribe to those journals, no matter what the cost.
But it's not just that. Because academics must constantly strive to publish in the most prestigious, highest-impact journals, those journals start to take on their own mystique or 'academic glamor'. In other words, they become fashionable to publish in. This means the publisher can ratchet up the subscription price even more, knowing that academic libraries will keep paying, since the journal has so much power and sway in the academic world.
Basically, when you rank the accomplishment and value of scholars based on how much they publish and where they publish, you create a market for prestigious, over-priced journals that return enormous profits to the publisher. In fact, journal publisher price-gouging has resulted in a couple of developments:
1) The open access publishing movement started as a reaction and defense against journal publisher price-gouging.
2) Predatory publishers came into existence when they saw they could copy the business model of the open-access publishing movement.
All the journal metrics tools listed below calculate journal impact metrics in slightly different ways; but for all of them, the calculation is ultimately based on citation counts of articles published within the journal.
CiteScore - This tool provides an impact metric developed by the company Elsevier.
Journal Citation Reports - The first and oldest tool for finding a journal's impact factor is called Journal Citation Reports. NAU's Cline Library does not currently subscribe to this resource.
Eigenfactor.org - This tool is free to use. You can just enter a journal title to look up its Eigenfactor Score.
SJR (SCImago Journal Rank) - This tool is also free to use.
Author metrics are designed to measure an author's productivity and influence by running a calculation that simultaneously looks at the number of articles they've published along with the number of times those articles have been cited. The most commonly used author metric is the h-index: an author has an h-index of h when h of their publications have each been cited at least h times. You can read all the nitty-gritty details about how it is calculated here, or look at a fairly simple explanation of the calculation here. Another, similar statistic is the beamplot.
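To make the h-index calculation concrete, here's a minimal sketch in Python. The citation counts are invented for illustration.

def h_index(citation_counts):
    # The h-index is the largest h such that h of the author's
    # papers have each been cited at least h times.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author with six papers cited 25, 8, 5, 3, 3, and 0 times:
# the three most-cited papers each have at least 3 citations, but the
# fourth-most-cited paper has fewer than 4 citations, so h = 3.
print(h_index([25, 8, 5, 3, 3, 0]))  # 3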
Web of Science - This database will calculate an author's beamplot. Just conduct a search for the author you want to investigate.
Scopus - This database provides a free tool to look up an author's h-index.
Google Scholar - You can freely search Google Scholar for an author's h-index, but it'll only be fruitful if the author has a Google Scholar profile.
There's another type of metric you should know about, and that's altmetrics. The term altmetrics is a shortening of the phrase article-level metrics, but it might make more sense to think of it as a smushing together of the words alternative metrics.
Altmetrics don't rely on traditional article citation counts or other metrics that are based on citation counts (such as journal impact factor scores, Eigenfactor scores, or authors' h-index scores).
Instead, they attempt to capture the impact of an article, author, journal, website, etc. based on prevalence and popularity in social media, by looking at things like the signals below (there's a toy scoring sketch after this list):
number of downloads
mentions on Twitter, Facebook, etc.
being discussed in the news media
number of uses as a citation in Wikipedia
being bookmarked on social reference managers like Mendeley
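To give a feel for how signals like these might be rolled up into a single number, here's a purely illustrative sketch. The signal names and weights are invented; real altmetrics providers use their own (often proprietary) formulas.

# Toy altmetric score: a weighted sum of attention signals.
# The weights below are invented, not any real provider's.
HYPOTHETICAL_WEIGHTS = {
    "downloads": 0.05,
    "social_media_mentions": 0.25,
    "news_stories": 8.0,
    "wikipedia_citations": 3.0,
    "mendeley_bookmarks": 0.5,
}

def toy_altmetric_score(signals):
    # Sum each signal count multiplied by its weight.
    return sum(HYPOTHETICAL_WEIGHTS.get(name, 0) * count
               for name, count in signals.items())

print(toy_altmetric_score({"social_media_mentions": 40,
                           "news_stories": 2,
                           "mendeley_bookmarks": 30}))
# 40*0.25 + 2*8.0 + 30*0.5 = 41.0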
As you've learned, metrics like the h-index are used to directly judge a particular researcher and their output. The citation metrics used to judge the value of an article are also ultimately about the researcher - since a researcher wrote the article! Similarly, journal impact factors are only meaningful in terms of the researcher, since this metric is used to judge where researchers publish their work.
So, all of these metrics together are used to make tenure, promotion, and hiring decisions. To depend on - or partially depend on - metrics for these sorts of decisions is contentious for lots of reasons, but here are just a few:
Metrics may not be measuring value, utility, and impact particularly well - or they may not be providing a very accurate picture of how researchers measure up to each other, particularly across disciplines.
Metrics can be gamed. For example, journal publishers have lots of sneaky tricks to boost their journals' impact factors and there are several ways authors can boost their own citation counts or appear more productive by publishing lots of short articles.
Certain types of research projects that could be really valuable may not be conducted or published because the research isn't likely to result in lots of citations.
So, it's good to know that metrics, while interesting, are also contentious!