Online Databases: Measuring Impact and Quality
By Carol Tenopir -- Library Journal, 09/01/2007
Academics determine their contribution to their discipline by seeing how often their work is cited. They often judge journal quality by that journal's impact factor. Such citation measures are now well established as a method to rank the importance of journal articles and steer readers to quality research.
The basic idea behind citation measures is that the more something (or someone) is cited by others, the more important or higher quality the work. Unlike counting article downloads, citation counting implies quality recognized by another author. Downloading accounts for the perceptions of readers, novices and experts alike; citation measures depend on recognition by an expert.
Thomson ISI's citation databases (Web of Science [WoS]) pioneered cited-reference searching in databases, and for decades WoS was the only major commercial system to offer citation measures. Since 2004, Elsevier's Scopus and Google Scholar have also provided various citation measures, including impact factor and number of cited references. Many reviews have compared these systems' coverage and search features, and Peter Jacso's Digital Reference Shelf regularly analyzes new developments.
First, last, or wrong
Every measure, of course, also has its downside. Authors may cite something because they disagree with it, not because they believe it is of high quality; an author who publishes many papers may accumulate the same number of citations as an author with one seminal, highly cited paper. Because citing patterns vary by subject discipline, journal impact factors should not be compared across disciplines nor used to measure the quality of individual authors.
Also, impact factors of entire journals do not accurately measure the value of individual articles or authors. A recent United Kingdom Serials Group (UKSG) study explored an alternative usage-based factor that would rank journals or articles by number of downloads.
New measures may help refine and solve some of the existing limitations of assessing quality. The “h-index,” or “Hirsch” index, has generated much buzz in the scholarly community. It was suggested in 2005 by physicist Jorge Hirsch, a prolific and highly cited author; it was added to WoS in 2006 as part of its author tools and to Scopus Citation Tracker in June 2007.
This index helps assess the impact of an individual author's total output. It takes into account both the quantity and quality of publications. It especially helps to identify distinguished scientists whose publications are consistently highly cited, as long as the author mostly publishes in a single discipline.
To calculate an h-index, search for all articles by an author, then rank them from most to least cited. To make a manual calculation, count down the list until the rank of a paper exceeds its number of citations; the h-index is the last rank at which the paper still has at least that many citations. (WoS and Scopus do this automatically.) An h-index of ten means an author has published ten papers with at least ten citations each. An h-index can only stay the same or rise over time, so highly influential researchers accumulate high values. Nobel Prize winner Linus Pauling's h-index is 91; in information science, Rutgers faculty Nicholas Belkin and Tefko Saracevic top the list with h-indexes of 20 and 19, respectively.
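The manual procedure above is easy to automate. The sketch below, with an illustrative (not real) list of citation counts, sorts a publication list by citations and walks down the ranking until a paper's citation count falls below its rank:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper's citations still meet or exceed its rank
        else:
            break  # all later papers are cited even less, so stop
    return h

# Hypothetical author with eight papers: the five most-cited papers each
# have at least five citations, but the sixth has only four, so h = 5.
print(h_index([25, 18, 12, 10, 10, 4, 3, 1]))  # prints 5
```

Note that, as the column observes, the result depends on having the author's complete publication list: omitting even moderately cited papers can lower the computed value.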
As with any measure, the value of the h-index has been questioned, and refinements have been suggested. For example, it doesn't distinguish between single-authored and coauthored works and requires a complete publications list over time for accurate computation.
This may seem like so much bean counting, but academics and librarians worldwide are under pressure to prove their value by external measures. And as publications proliferate, novice scholars need tools that help them identify the most important authors and articles.
Collection development librarians require quality measures for journals, and directors are often asked to help prove their institution's productivity. Be ready for more such measures that help evaluate the quality and impact of authors, articles, and journal titles.
Link List
Peter Jacso's Digital Reference Shelf
UK Serials Group usage factors study
Web of Science, Scopus, and Google Scholar
Carol Tenopir (email@example.com) is Professor at the School of Information Sciences, University of Tennessee, Knoxville