EmilieMenzel_A4

Academics Comparison

After spending too much time searching the impact statistics of friends, family, and former classmates, I decided to compare two literature scholars: Helen Vendler and Lisa Rodensky. Helen Vendler is an incredibly prolific and well-known critic of American and British poetry who has been teaching at Harvard since 1981. Lisa Rodensky is a well-respected scholar of the British Victorian novel who works at Wellesley, a liberal arts college where heavier emphasis is placed on teaching than on publication. I was curious how two respected scholars with a wide disparity in number of publications in (broadly) the same field would compare across Scopus, Web of Science, and Google Scholar’s bibliometrics.

On Scopus and Web of Science, both Helen Vendler and Lisa Rodensky had one author entry with their correct locations and universities noted. Neither Vendler nor Rodensky had Google Scholar profiles, so I could only gather information based on search matches with their names (though, sorting through the results lists, the results pulled from an author-name search alone seem accurate). All three sources attributed publications accurately: while some publications were missing, none of the works attributed to the authors belonged to someone else.

On Scopus, Vendler was attributed as the author of 19 documents with an h-index of 5 and 75 citations from 70 sources. 40% of Vendler’s citations were from reviews (I am presuming reviews of her books) and 34.4% from articles. Rodensky was attributed as the author of 2 documents with an h-index of 1 and 4 citations from 4 sources. 50% of Rodensky’s citations were from reviews, and 25% each from books and articles (given the small sample size here, though, I question how much information this really gives). From having worked with these scholars’ research, I know that both authors have published far more material than is listed here. Rodensky seems particularly underrepresented. To secure positions at either Harvard or Wellesley, both scholars would need field-impressive publication records, and the numbers in Scopus don’t seem to offer much backing in either quantity or quality of publications.
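For readers unfamiliar with the h-index values cited here: an author has an h-index of h when h of their papers have at least h citations each. A minimal sketch of the calculation, using made-up citation counts rather than either author’s actual data:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # ranked list is descending, so no later paper can
    return h

# Hypothetical per-paper citation counts: 4 papers have >= 4 citations each
print(h_index([10, 8, 5, 4, 3]))  # 4
```

This is why a database that indexes fewer of an author’s papers (as Scopus does for both scholars here) also deflates the h-index: uncounted papers contribute no citations at all.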

Both authors were better represented in Web of Science. For Rodensky, this was only a slight increase: Web of Science identified 4 more publications and 2 more citations, and her h-index was listed at 2. I noticed that two of her publications accounted for all of the citations, and these were the two publications listed for Rodensky on Scopus. For Vendler, the jump in listed documents between Scopus and Web of Science was enormous, with Web of Science listing 285 publications, 201 citations from 151 sources, and an h-index of 6. I noticed that Web of Science, unlike Scopus, listed publications from “popular” journals like the New York Review of Books, and Vendler has published widely in public-facing journals such as these.

On another note, I appreciated Web of Science’s timeline graph overlaying publications and citations. Per Web of Science’s Citation Analysis, citations of Vendler’s work have been dramatically on the rise since 2010, though her publication rate has remained largely steady since she began publishing in 1974. I’m curious what caused this shift; one possibility I considered is that the students she taught as a young professor were by that point getting jobs and publishing themselves.

As mentioned above, neither author had a Google Scholar profile, so I could not view total counts aside from listed publications. Still, the difference in publication numbers between Google Scholar and the previous two websites was remarkable. These humanities professors’ work was by far best represented on Google Scholar. Vendler’s name pulled 513 publications with her most frequently cited work receiving 703 citations. Rodensky’s name pulled 22 publications with her most frequently cited work receiving 115 citations. Such numbers are much more in line with what I would expect from esteemed humanities faculty. Partly, I think this jump in numbers is because Google Scholar, more than Scopus or Web of Science, includes monographs in addition to published articles. For Vendler, I also noticed that she was given publication listings both for her full books and for each of the chapters in those books. Listing individual chapters as publications makes sense for less monograph-focused fields, where academics frequently publish a single chapter in an edited collection. Still, this suggests Google Scholar’s numbers run a little high for humanities scholars, versus the lowball counts of Scopus and Web of Science.

Articles Comparison

I looked at the impact metrics for two articles by Tracy Gleason, a psychologist and professor at Wellesley College (guess where I went to school) focused on researching social imagination in children.

The first article, “Parasocial interactions and relationships in early adolescence,” was a co-authored paper with S. Theran and E. Newberg, published in the open access journal Frontiers in Psychology in 2017. According to Scopus, this article has been cited 19 times (within Scopus). The article has a field-weighted citation impact of 1.09, meaning that Scopus finds this article is cited “more than expected” given its year of publication and field. Despite the citation impact score, Scopus notes the article as in the 73rd percentile for citations. I was very interested to see that over half of this article’s citations are from this year. As with Helen Vendler, I’m wondering if this uptick has to do with when Gleason’s students are graduating and publishing their own work.

From Altmetric, this article received an overall attention score of 48 (the meaning of which is a little unclear). Whereas Scopus puts the article in the 73rd percentile, Altmetric notes it as “in the top 5% of all research outputs scored by Altmetric.” The citation count in Altmetric, 23, is not remarkably different from Scopus’s, so this greatly different interpretation of the numbers by the two services is surprising. The article was tweeted about 14 times; over half of these appear to be spam accounts or tweets containing only the link to the article. I wonder here how much tweet counts tell us about meaningful engagement with information.

The second article of Gleason’s I examined was “The evolved development niche: Longitudinal effects of caregiving practices on early childhood psychosocial development,” published in the non-open-access journal Early Childhood Research Quarterly in 2013. Gleason co-authored this article with five other researchers. In Altmetric, this article received a much lower score of 11 (top 25%), even though its listed citation count (37) was higher than the first article’s and the research had been cited by news outlets like Psychology Today and in the Wikipedia article on evolutionary biology. Perhaps the score is an effect of the extra four years since publication (as compared to the first article) and the three additional authors listed on the publication. Still, at least judging from the sources Altmetric picked up, I would subjectively label the engagement with the second article as more meaningful than that of the 14 half-spam tweets.

In Scopus, the second article was listed as falling in the 93rd percentile with 41 citations. Scopus gave this article a field-weighted citation impact of 3.03, significantly greater than that of the first article. Interestingly, Scopus lists the article as having only 13 views in Scopus, suggesting readers in this field are discovering their sources elsewhere. Again, citations peaked about 3 years after the initial publication of the research.

I found intriguingly contrasting interpretations of citation counts and quality between Scopus and Altmetric. Whereas Scopus marked the second article as the more successful, Altmetric placed the first in the top 5% of all research outputs. I am considering how a researcher might weight these results, and whether co-authoring and collaboration is rewarded, neutral, or penalized in these bibliometrics. Is it more beneficial for an author’s bibliometrics (which might then be used in applying for tenure) to work alone and receive less overall reach for their research, or to work with a group and receive higher overall reach? I wonder about the social impact of single authorship: whether there is a benefit in the perception of the author’s independence and competence, and whether that is what is reflected in the bibliometrics.