CadeCarlson_A4

I have selected Helen Tibbo of UNC Chapel Hill and Jin Ha Lee of the University of Washington, Seattle as my two authors to gain insight into how SCOPUS, Web of Science, and Google Scholar present information regarding their impact. The two articles I have selected for analysis of how SCOPUS and Altmetric present information on their impact are “Modeling the information-seeking behavior of social scientists: Ellis’s study revisited” by Meho and Tibbo, and “Why Video Game Genres Fail: A Classificatory Analysis” by Clarke, Lee, and Clark.

SCOPUS by Author:

When utilizing SCOPUS, I searched for Helen Tibbo and was returned one search result that aligned with their credentials (name and institution). Clicking the “analyze author output” option directed me to a webpage that displayed different visualizations of data related to their scholarly output from 1989-2021. These visualizations were: Documents by Source (pie chart), Documents by Type (donut chart), Documents by Year (line graph over time), Documents by Subject Area (pie chart), h-Index (line graph over quantity of documents), Citations by Year (bar graph), and Co-Authors (list). This presented a lot of quantitative data on the author’s scholarly output, illustrating trends over time in not just the number of documents published with them contributing, but also how often they were cited by year, where their output was published, as well as their h-index score (their score of 13 indicates that, of the 47 documents cited collectively 675 times, 13 were cited 13 or more times).
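To make the h-index concrete, below is a minimal sketch in Python of how the score is computed. The citation counts in the example are hypothetical, chosen only so that the 13th-highest count is still 13 or more, mirroring Tibbo’s SCOPUS score.

```python
def h_index(citation_counts):
    """Return the largest h such that at least h documents
    were each cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-document citation counts; the 13th-highest count
# is still >= 13, so h = 13, as with Tibbo's SCOPUS profile.
print(h_index([50, 40, 33, 28, 25, 21, 19, 18, 16, 15, 14, 13, 13, 9, 4]))  # -> 13
```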

When using the Citation Overview option, I was returned a line graph with time (years) as the x-axis and quantity of citations as the y-axis. The default range was narrow and, even after adjustment, limited to a 15-year span, so I entered a range of 2006-2021 in order to get the widest representation that was most contemporary. There were options to “exclude self-citations of selected author”, “exclude self-citations of all authors”, and “exclude citations from books”, each of which could be toggled individually as a means to bypass potential skewing of the data by self-citations. The author’s output was also presented as a list of published documents indicating title, year of publication, and quantity of citations received in each year, which can reveal trends in how specific documents were cited across time. This list can be particularly helpful for drawing deeper inferences about which documents had the most potential impact through the sheer quantity of times they were cited over time, even illustrating whether there were particular periods in which a document was heavily cited.
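SCOPUS does not spell out its exclusion logic on this page, but the idea behind the self-citation toggles can be sketched as a simple filter that drops any citing paper sharing an author with the cited paper. The record structure below is an assumption for illustration, not SCOPUS’s actual data model.

```python
# A minimal sketch of self-citation exclusion, assuming each paper's
# byline is represented as a set of author names (hypothetical records).
cited_paper_authors = {"Meho, L.I.", "Tibbo, H.R."}

citing_papers = [
    {"Tibbo, H.R.", "Someone, A."},  # shares an author: a self-citation
    {"Other, B.", "Another, C."},    # independent citation
]

def exclude_self_citations(cited_authors, citing):
    # Keep only citing papers with no authors in common with the cited paper.
    return [paper for paper in citing if not (paper & cited_authors)]

print(len(exclude_self_citations(cited_paper_authors, citing_papers)))  # -> 1
```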

When searching for Jin Ha Lee in SCOPUS, I was presented with 2 results that matched their credentials, the difference being that the name returned was either “Jin Ha Lee” or “Jin Ha-Lee”. Though the two profiles listed different quantities of published documents (52 and 38 respectively), I selected the profile with 52 documents for the sake of this exercise. With all the same data points and illustrations present across the “analyze author output” and “citation overview” views as with the Helen Tibbo search results, it can be inferred that although Jin Ha Lee has contributed to 52 cited documents (5 more than Helen Tibbo), their h-index score of 9 illustrates a slightly lower potential impact, their documents having been cited only 291 times. Granted, Tibbo’s published documents go all the way back to 1991 while Lee’s only go back to 2014; this illustrates the acknowledged trend that the longer an author has been publishing, the more potential instances of being cited accumulate.

Web of Science by Author:

When searching by author on Web of Science, we are returned a webpage that has an overview of the author and their credentials, a list of publications from the WOS core collection indicating the number of times each document was cited, and a side panel that displays metrics like h-index score, quantity of publications, sum of times cited, quantity of citing articles, and a bar graph that illustrates the author’s position in each publication’s author list (first, last, corresponding). There is also an author impact beamplot that is “built on a researcher’s articles and review documents over their career”, with a range of the author’s publication citation percentiles and an individual point marking their median citation percentile.

Clicking on the “view citation report” link takes us to a webpage that further elucidates the previous page with metrics on citing articles and times cited without self-citations, as well as an average times cited per item. Much like SCOPUS, there is a graphical representation over time that measures quantity of publications and instances of being cited by year. Further like SCOPUS, there is a list that displays documents the author participated in, with the quantity of times each was cited by year, an average quantity of citations per year, and a total number of citations. There are many options for the order in which this information can be displayed (date, citations, usage, recently added, conference title, first author name, publication title).

Both Tibbo and Lee have similar metric scores here (h-index of 8 and 9 respectively), despite Tibbo averaging 100+ more instances of being cited, and both authors have published (according to WOS) a similar quantity of documents (Tibbo 38, Lee 39). While both the “Times Cited and Publications Over Time” graph and the publication index are very similar to SCOPUS’s outputs, WOS seems to have an inherently more robust display in the time frame it covers (WOS displays the full range of years in which an author published while SCOPUS is relegated to a maximum range of 15 years) and in the options for ordering the index of published documents.

Google Scholar by Author:

When searching by author on Google Scholar, the search results returned a list of published documents the author contributed to, each with title, author(s), publication, 3 lines of the abstract, a save function, a cite function, a “cited by #” link that leads to a search result of the citing documents, “related articles”, and an “all # versions” link to a list of the available versions of the document. The panel on the left presents options to further refine this search by time period, relevance/date, and document type, whether to include patents or citations (citations is checked by default), and an option to create a Google alert for whenever new documents associated with the author are published. When searching for Jin Ha Lee, the first result was a link to a user profile that illustrates their credentials and the quantity of times they were cited, while searching for Helen Tibbo did not return a link to a user profile.

When exploring Jin Ha Lee’s user profile, a list of published documents is presented that illustrates the year of each publication and the quantity of times it was cited (the list can be sorted by these two criteria). There is also a bar graph, which can be expanded, illustrating the quantity of citations over time. Above this graph are 3 data points: Citations, h-index score, and i10-index score (the number of publications cited at least 10 times), where each data point is listed as “all” with an additional column illustrating its value “since 2016”, which enables a good snapshot of an author’s overall potential impact as well as a more contemporary one. The h-index scores presented here, 25 (all) and 20 (since 2016), show a clear discrepancy when compared to the SCOPUS or WOS values. This could be due to there being two versions of Jin Ha Lee’s name elsewhere, and with the lack of a profile for Helen Tibbo on Google Scholar, I am not able to retrieve her h-index score there for comparison.
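Google Scholar’s paired “all”/“since 2016” columns amount to computing the same metric over two citation windows. The sketch below illustrates this for the i10-index; the paper records are hypothetical.

```python
# Hypothetical per-paper citation totals, split into an all-time count
# and a count restricted to citations received since 2016.
papers = [
    {"total_cites": 120, "cites_since_2016": 80},
    {"total_cites": 15,  "cites_since_2016": 6},
    {"total_cites": 9,   "cites_since_2016": 9},
]

def i10_index(papers, key="total_cites"):
    """Number of publications with at least 10 citations in the given window."""
    return sum(1 for p in papers if p[key] >= 10)

print(i10_index(papers))                          # "all" column   -> 2
print(i10_index(papers, key="cites_since_2016"))  # "since" column -> 1
```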

SCOPUS by Document:

When searching for “Modeling the information-seeking behavior of social scientists: Ellis's study revisited”, clicking on the “view all metrics” option scrolls us down to data that reflects: the quantity of times the article has been cited in SCOPUS since its publication in 2003 (184); the field-weighted citation impact score (2.92); and view counts from 2012-2021 (108), in 2020 (10), and in 2021 (3), with an indication of when these values were last updated (May 18, 2021). Clicking on “more metrics” directs us to a webpage where a line graph of the quantity of citations by year can be visualized (limited to a 15-year span), with options to include all citations, exclude self-citations, or exclude citations from books. There is additional information like the “citation benchmarking”, which “shows how citations received by this document compare with the average for similar documents”, with this document placing within the 92nd percentile (highly impactful when compared to the greater aggregate). There is also an explanation of the “field-weighted citation impact” score, which is described as “[showing] how well this document is cited when compared to similar documents. A value greater than 1.00 means the document is more cited than expected”; with a score of 2.92, this document is reinforced as having a higher-than-average impact overall.
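Taking SCOPUS’s description of the field-weighted citation impact at face value (actual citations divided by the citations expected of similar documents), the expected citation count implied by the article’s reported numbers can be back-calculated; the figure below is illustrative, not a value SCOPUS reports.

```python
# FWCI = actual citations / expected citations for similar documents
# (same field, document type, and publication year), per SCOPUS's
# description. Back-calculating from the reported values:
actual_citations = 184   # citations reported for the Meho & Tibbo article
fwci = 2.92              # reported field-weighted citation impact

expected_citations = actual_citations / fwci
print(round(expected_citations))  # -> 63: roughly what a comparable
                                  # document would be expected to receive
```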

When clicking on the PlumX Metrics, we are directed to a webpage that breaks the data on this document’s potential impact down at a finer grain. It is divided into 4 categories: Citations (185), Usage (2736), Captures (653), and Social Media (1), each of which is broken down further. Citations is broken down by where the citations were indexed: SCOPUS, CrossRef, Academic Citation Index, and Policy Citations. Usage is broken down into the quantity of Abstract Views, Full Text Views, and Link-outs (all from EBSCO). Captures are broken down into Readers (from Mendeley data) and Export-Saves (from EBSCO data). Social Media only lists Twitter in this instance. These data points present much more granular data on not just the document’s potential impact through citations, but also on how readers interact with the document (abstract views versus full-text views), even going so far as to display the tweets the document was mentioned in.

When utilizing the same SCOPUS tools for the “Why Video Game Genres Fail: A Classificatory Analysis” document, it is illustrated that this article was cited 28 times since its publication in 2017, with a citation benchmark within the 95th percentile and a field-weighted citation impact score of 4.0, suggesting that this document has a somewhat higher field-weighted impact despite its relative novelty compared to the previous document (a publication date difference of 14 years). The PlumX Metrics for this article illustrate interesting divergences in how this document is cited and interacted with: Citations (27), Usage (5113), Captures (187), Social Media (6). While it is cited less than the Tibbo document, it was downloaded 4680 times (Usage), which could suggest that this document was read more than Tibbo’s; however, this can be misleading, as the Usage data for the Tibbo document lacked a download count. Both documents appear to be more impactful than most documents overall, but there is still an unclear image of just how much more impactful they are relative to each other, potentially due to the years of publication and how complete the data is.

Altmetric by Document:

Utilizing Altmetric was a little underwhelming when compared to SCOPUS’s illustrative tools, but it still presents data of interest for potential impact. It displays an “Attention Score”, which is described as providing “…an indicator of the amount of attention that it has received. The score is derived from an automatic algorithm, and represents a weighted count of the amount of attention that [Altmetric] picked up from research output”. It also displays “Mentioned by” counts for policy sources and Twitter, the quantity of citations, and “Readers on”, which breaks down descriptive data on (in this case) Mendeley readers: geographic location, demographics like readers by professional status, and readers by discipline. This provides insight into the nature of those who read the document, illustrating which disciplines or levels of education its readers belong to. The citations metric links to a webpage (Dimensions) that, much like SCOPUS, lists the documents in which the article in question is cited.
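Altmetric does not publish its full algorithm here, but its description of the score as a “weighted count” of mentions can be sketched as below; the source weights are placeholder assumptions, not Altmetric’s actual values.

```python
# A sketch of a "weighted count" attention score in the spirit of
# Altmetric's description. These weights are illustrative placeholders,
# not Altmetric's actual algorithm.
SOURCE_WEIGHTS = {"news": 8, "blog": 5, "policy": 3, "tweet": 1}

def attention_score(mentions):
    """Weighted count of mentions across attention sources."""
    return sum(SOURCE_WEIGHTS.get(source, 0) * count
               for source, count in mentions.items())

# E.g., one policy mention plus one tweet under these made-up weights:
print(attention_score({"policy": 1, "tweet": 1}))  # -> 4
```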

The Attention Score can be broken down further as a means to compare how this document performs in reference to the greater aggregate (regarding all research outputs, outputs from the journal it was published in, outputs of similar age, and outputs of similar age in the journal it was published in). The Tibbo article has an Attention Score of 4 compared to the Lee document’s score of 2, with the additional data pointing towards the Tibbo document possessing a higher overall impact; this illustrates that the types of data or measures used can change the reflection of just how impactful a document is (when compared to the SCOPUS document tools/data).

Discussion:

While there were variations in the types of data these tools returned in relation to potential impact, there was a lot of overlap, especially in the use of the h-index score and graphical representations of the quantity of times an author/document was cited. While there was variation in the h-index scores, this can be due to the variation in the quantity of the searched author’s documents each platform has in its system (the separate profiles for Lee could have also played a role). Although this variation in score is present, this data and the rest still paint a fairly consistent picture of how impactful the authors are. Given that both the authors and documents selected for analysis were similarly impactful, I may have been able to gain additional insight into how these tools operate if I had additionally searched for authors or documents that were cited fewer times, for the sake of contrast.

Citations of Documents Used:

Clarke, R.I., Lee, J.H., Clark, N. (2017). "Why Video Game Genres Fail: A Classificatory Analysis". Games and Culture, 12 (5), 445-465. DOI: 10.1177/1555412015591900

Meho, L.I., Tibbo, H.R. (2003). "Modeling the information-seeking behavior of social scientists: Ellis's study revisited". Journal of the American Society for Information Science and Technology, 54 (6), 570-587. DOI: 10.1002/asi.10244

Jin Ha Lee SCOPUS output:

Helen Tibbo SCOPUS output:

Jin Ha Lee WOS output:

Helen Tibbo WOS output:

Jin Ha Lee Google Scholar Profile:

“Modeling the information-seeking...” SCOPUS/PlumX output:

“Why Videogame Genres...” SCOPUS/PlumX output:

“Modeling the information-seeking...” Altmetric output:

“Why Videogame Genres...” Altmetric output: