Whether you're reviewing the literature and reading journal articles, writing and publishing your research in a journal, or assessing your CV and research career progression, understanding how to evaluate journals and research impact is crucial.
Photo by Zetong Li from Pexels: https://www.pexels.com/photo/view-of-rows-of-bookshelves-in-a-college-library-16689057/
Bibliometrics is a field of study that uses statistical analyses to study publication patterns, often in service of measuring the impact or influence of academic publications.
These analyses are typically done at three levels:
Journal-Level Metrics: evaluating the impact of academic journals. A common metric at this level is Impact Factor.
Article-Level Metrics: evaluating the impact of a specific paper. Citation counts or altmetrics are often used for this.
Author-Level Metrics: evaluating the productivity and impact of an individual scholar's output. One way of calculating this is with the h-index.
One of the most commonly used ways to illustrate the quality of a journal is its Journal Impact Factor (JIF), a metric that measures the number of citations a journal receives relative to the number of articles it publishes. The formula for the impact factor of a year, y, shown below, calculates the ratio between the number of citations received in year y for publications from the two preceding years and the total number of citable items published in that journal during the two preceding years:

JIF(y) = (citations in year y to items published in years y−1 and y−2) ÷ (citable items published in years y−1 and y−2)

Proprietary: This metric is published annually by Clarivate, an analytics company, using data from its subscription database, Web of Science.
Popularity: JIF is commonly used as a proxy for research quality, though this use has invited criticism.
Criticism: JIF is designed to compare journals, and it has been criticized for being misused as a quality marker for individual articles or researchers.
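As a quick illustration of the ratio described above, here is a minimal sketch of the calculation; the citation and publication counts are hypothetical, and real JIF values come from Clarivate's proprietary Web of Science data.

```python
def journal_impact_factor(citations, citable_items):
    """Compute an impact factor for year y.

    citations: citations received in year y to items the journal
        published in years y-1 and y-2 (hypothetical count).
    citable_items: citable items the journal published in years
        y-1 and y-2 (hypothetical count).
    """
    if citable_items == 0:
        raise ValueError("journal published no citable items")
    return citations / citable_items


# A journal whose last two years of articles drew 1,200 citations
# across 400 citable items would have an impact factor of 3.0.
print(journal_impact_factor(1200, 400))
```

Note that the result depends entirely on which items the database counts as "citable," which is one reason reported impact factors should be verified against Journal Citation Reports.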
Does a higher impact factor always mean a better journal? Not necessarily. Check out this video to learn about some of the nuances of Journal Impact Factor.
Beyond quantitative metrics like impact factor, students and researchers should be aware of qualitative indicators that may signal a journal is predatory. Predatory journals are entities focused on profiting by charging authors publication fees, often without providing proper peer review or editing. Publishing in these venues is risky for researchers, potentially leading to concerns regarding the legitimacy of their work and possible damage to their academic reputation.
Photo by Jan van der Wolf: https://www.pexels.com/photo/blue-sky-around-red-flag-19001404/
Is the journal indexed in major databases like PubMed, Web of Science, or the Directory of Open Access Journals (DOAJ)?
Are the editors on the editorial board active scholars in the field? Check their institutional affiliation and publication history.
Is the peer review process detailed and transparent?
Is the journal published by a major university press, respected society, or one of the big commercial publishers?
Aggressive solicitation of papers
Homepage language targets authors, not readers
Short peer review turnaround times
Very broad scopes
Questionable impact factors or invented metrics like "Global Impact Factor" (be sure to verify impact factor claims through Journal Citation Reports or Web of Science)
Listed on watch lists like Beall's List of Predatory Journals
While the Journal Impact Factor is an important metric for measuring the impact of a journal, it should not be assumed that individual articles within that journal, or the authors who publish in it, have corresponding impact. To measure the impact of an article or an author, make sure you're using appropriate metrics.
The most common way to measure the impact of an article is to check its citation count, that is, how many times it has been cited in other works since publication. However, not everything can be measured in citations, so altmetrics, or alternative metrics, look for other indications of engagement with the work, like downloads or social media mentions.
The most common way to gauge an individual researcher's overall influence is through the h-index, which balances the number of publications with how frequently those works have been cited: a researcher has an h-index of h if h of their papers have each been cited at least h times. Another simple metric, reported by Google Scholar, is the i10-index, which counts only the number of an author's papers that have received at least ten citations. While useful for quick comparisons, these metrics tend to favor established researchers and should always be interpreted within the context of the author's specific academic discipline.
Check to see how well you can evaluate journals and determine research impact.
To take this quiz for credit, sign in with your UH credentials and access the link here.