Measuring and reporting research impact is notoriously difficult. Traditionally, two types of measure have been employed: qualitative measures, such as peer review, and quantitative measures, which use the citation as the basic unit of measurement. These quantitative metrics, also known as "bibliometrics", are the primary focus of this guide.
Using Metrics Responsibly
Quantitative research metrics have the advantage of being objective, transparent, and easily calculated and reported (at least with the right tool). However, they also have the disadvantage of focusing on the journal article as the primary research output, at the expense of disciplines where this is not the case. They can also be skewed by outliers, can be "gamed", and are open to misinterpretation, especially if used in isolation. It is therefore critical that these metrics are used appropriately and fairly. See the 'Statement on the Responsible Use of Metrics at DCU'.
The Citation as the Basic Unit of Measurement
At the heart of bibliometrics is the idea that if a peer reads your work and uses it in their own (and therefore cites it), this is an indicator of "impact": you have produced a research output that was at least impactful enough to influence the work of a peer. If the citation is the basic currency, then accumulating more citations demonstrates more impactful work. Counting these citations, and reporting them in ever more sophisticated ways (for example as 'Citations per Publication' or 'Field Weighted Citation Impact'), is relatively easy, at least for computer systems, which has led to their popularity as concise and objective metrics.
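To make these derived metrics concrete, here is a minimal sketch in Python. The citation counts and the field-expected averages are invented for illustration; real field-weighted indicators (such as Scopus's FWCI) use expected citation rates benchmarked by field, publication year, and document type, which this toy example only gestures at.

```python
# Hypothetical citation counts for six publications by one researcher.
citation_counts = [12, 3, 0, 45, 7, 1]

# Citations per Publication: total citations divided by number of outputs.
citations_per_publication = sum(citation_counts) / len(citation_counts)
print(f"Citations per Publication: {citations_per_publication:.2f}")

# A naive field-weighted ratio: each paper's citations divided by the
# average citations expected for its field, year, and document type.
# (These expected values are made up purely for illustration.)
expected_field_average = [10.0, 10.0, 5.0, 20.0, 5.0, 10.0]
fwci = sum(
    cites / expected
    for cites, expected in zip(citation_counts, expected_field_average)
) / len(citation_counts)
print(f"Field-weighted citation impact (illustrative): {fwci:.2f}")
```

Note how a single highly cited paper (45 citations here) dominates the simple average, which illustrates the susceptibility to outliers mentioned above; field weighting mitigates, but does not eliminate, that effect.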
Limitations of Bibliometrics
While bibliometrics are concise and objective, they are far from perfect for demonstrating something as nuanced as research impact. All of the most popular bibliometrics in use today suffer from a number of significant limitations, such as...