
Citation and Journal Measures

This page is designed to help Penn State users find the bibliometric measures used to gauge the impact of researchers, institutions, and journals.

Disclaimer

A cautionary note

Metrics such as citation counts and Journal Impact Factors should be used with great care.

As The Metric Tide puts it, "quantitative evaluation should support, but not supplant, qualitative, expert assessment" (Wilsdon et al., 2015).

In particular, please note that:

Anyone using citation counts and researcher-level metrics such as the h-index as a proxy for the quality of research should understand that such metrics are inherently biased, due in part to the factors below (a sketch of the h-index calculation follows this list):

  • the immediacy effect (the first article in an issue tends to receive more citations)
  • gender bias (articles by women are cited less often than comparable articles by men)
  • name ambiguity (authors sometimes publish under different names, or under variations of one name, which splits their citation counts)
  • lack of context (being cited does not mean being cited favorably; citation indexes do not record why something was cited)
  • generational differences in output (younger researchers generally publish more papers than researchers in previous generations did)
  • undervaluation of emerging scholars, since it often takes time for a new idea to become highly cited
  • disadvantages for scholars in small fields, which generate fewer citations in total
  • undercounting of articles in languages other than English
  • undercounting of citations in formats other than journal articles (e.g., books and book chapters)
  • "gaming" of the system by inflating counts through self-citation or concentrated citation of one journal or article
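
For reference, the h-index that underlies many researcher-level comparisons is the largest number h such that h of a researcher's papers have each received at least h citations. A minimal sketch in Python, using made-up citation counts, illustrates the calculation:

    def h_index(citations):
        """Return the largest h such that h papers have at least h citations each."""
        # Rank papers from most to least cited, then keep advancing while the
        # paper at (1-based) rank r still has at least r citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for one researcher's papers:
    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each

Note that the paper with 10 citations contributes no more to h than the paper with 4: the measure compresses differences at the top, and, as noted above, it takes time for new work to register at all.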

Anyone using Journal Impact Factors as a proxy for the quality or prestige of journals should understand the points below (the calculation itself is sketched after the list):

  • impact factors cannot be compared directly between disciplines, because research and citation practices differ among them (see Moed, 2005, or search for articles that discuss normalizing metrics)
  • they are calculated using Web of Science, which does not come close to covering all journals in a subject and is heavily slanted towards the science disciplines and the journal article format, so many potential citations are missed
  • impact metrics for a hard science such as chemistry will always be higher than for a social science such as business, and the liberal arts are very poorly covered by most metrics
  • journal editorial policies can be used to boost a journal's impact factor
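
For context, the two-year Journal Impact Factor for a year Y is the number of citations received in Y by items the journal published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. A small sketch with hypothetical figures (the journal and all numbers are invented):

    def two_year_impact_factor(citations_in_year, citable_items_prior_two_years):
        """Two-year JIF: citations received in year Y to items published in
        years Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
        return citations_in_year / citable_items_prior_two_years

    # Hypothetical journal: 600 citations received in 2024 to its 2022-2023
    # content, spread across 200 citable items published in 2022-2023.
    print(two_year_impact_factor(600, 200))  # 3.0

Because the numerator counts citations from any source while the denominator counts only "citable" items, editorial decisions about what qualifies as citable can move the figure, which is one way the boosting noted above occurs.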

 
References Cited:

Moed, H. F. (2005). Citation Analysis in Research Evaluation. Dordrecht: Springer. DOI: 10.1007/1-4020-3714-7

Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Higher Education Funding Council for England (HEFCE). DOI: 10.13140/RG.2.1.4929.1363

If you are using citation metrics for evaluation, consider these best practices.