Measuring & Reporting Research Impact

Introduction to Quantitative Research Metrics

Measuring and reporting research impact is notoriously difficult. Traditionally, two types of measure have been employed: qualitative measures (such as peer review) and quantitative measures, which use the citation as the basic unit of measurement. These quantitative metrics, also known as "bibliometrics", are the primary focus of this guide.

Using Metrics Responsibly 

Quantitative research metrics have the advantage of being objective, transparent, and easily calculated and reported (at least with the right tool). However, they also have the disadvantage of focusing on the journal article as the primary output, at the expense of disciplines where this is not the case. They can also be susceptible to outliers, can be "gamed", and are open to misinterpretation, especially if used in isolation. It is therefore critical that these metrics are used appropriately and fairly. See the 'Statement on the Responsible Use of Metrics at DCU'.

Bibliometrics 101

The Citation as the Basic Unit of Measurement

At the heart of bibliometrics is the concept that if a peer reads and uses your work in their own (and therefore cites it), this is an indicator of "impact". That is, you have produced a research output that was at least impactful enough to influence the work of a peer. If the citation is the basic currency, then accumulating more of them demonstrates more impactful work. Counting these citations, and reporting them in ever more sophisticated ways (for example, 'Citations per Publication' or 'Field Weighted Citation Impact'), is relatively easy, at least for computer systems, which accounts for their popularity as concise and objective metrics.
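As a simple illustration, the 'Citations per Publication' figure is just the mean number of citations across an author's publications. The short sketch below shows the calculation; the citation counts are invented for illustration and do not describe any real researcher.

```python
# A minimal sketch of the 'Citations per Publication' (CPP) metric.
# The citation counts below are hypothetical, for illustration only.
citations = [12, 3, 45, 0, 7, 19]  # citations received by each publication

cpp = sum(citations) / len(citations)
print(f"Publications: {len(citations)}")            # 6
print(f"Total citations: {sum(citations)}")         # 86
print(f"Citations per Publication: {cpp:.2f}")      # 86 / 6 = 14.33
```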

Limitations of Bibliometrics

Whilst bibliometrics are concise and objective, they are far from perfect for demonstrating something as nuanced as research impact. All of the most popular bibliometrics in use today share a number of significant limitations:

  • Disciplinary discrepancies: The most popular traditional bibliometrics are built on the original journal indexation systems, so they focus primarily on the journal article citation as the basic unit of measurement. Researchers in disciplines where the journal article is not the primary publishing venue may not be fairly or accurately represented.
  • Coverage limitations: Even where the primary output venue is the journal article, limitations remain. As noted above, the most popular systems are built on journal indexation systems, which index only a portion of journal content, and for a citation to be counted, both your original article and the citing article must be indexed in the bibliometric system. This is particularly problematic for niche research areas and for local publications ("local" here in the global sense, e.g. Irish journals).
  • Numbers in isolation: Relying exclusively on bibliometrics leads to misinterpretation of the numbers. An h-index value depends as much on career length, breaks in research during that career (for administrative or teaching duties), and the discipline as it does on citations. There is no value in comparing the h-index of a physicist with that of a historian, nor the h-index of a recent post-doc with that of a professor.
  • Statistical anomalies: Outliers can occur, particularly in smaller datasets. For example, an early-career researcher can (quite correctly) be named as an author on an article, in astronomy say, with 800 authors. This article becomes very popular and is widely and quickly cited. With a relatively small publication set (as is to be expected at that career stage) and now a huge number of citations, the researcher gains a citations-per-publication score par excellence. This is not to say that anything underhand has occurred, and of course the author should be very pleased, just that it is worth looking behind the numbers, as the sketch following this list illustrates.
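To make the last two points concrete, here is a minimal sketch, again using invented citation counts, that computes an h-index and shows how a single outlier publication inflates the mean citations per publication while barely moving the median:

```python
import statistics

# Hypothetical citation counts for an early-career researcher: five
# modestly cited papers plus one highly cited multi-author outlier.
citations = [4, 2, 7, 1, 3, 900]

# h-index: the largest h such that the author has h publications with
# at least h citations each. Note it can never exceed the number of
# publications, which is one reason career length matters so much.
ranked = sorted(citations, reverse=True)
h_index = sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)

print(f"h-index: {h_index}")                                                     # 3
print(f"Mean citations per publication: {sum(citations) / len(citations):.1f}")  # 152.8
print(f"Median citations per publication: {statistics.median(citations)}")       # 3.5
```

A mean of roughly 153 citations per publication says very little about this (hypothetical) researcher's typical paper, which attracts three or four citations; the single outlier does all the work. Looking behind the headline number, for example at the median or the h-index, gives a far more representative picture.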