There are a number of metrics to evaluate the possible influence of authors, articles, journals and other publications. They include publication counts, the h-index, citation counts and journal measures such as the Journal Impact Factor and the SciMago Journal Rank.
These metrics can help answer questions such as how productive an author is, how often a work has been cited, and how a journal performs within its field.
A basic metric to measure author productivity is the publication count: the total number of publications produced by an author. The more publications a researcher, unit or institution has, the more active they are generally considered to be within the research community.
A citation means that a scholarly work has been cited in the text and reference list of a publication. Citation counts indicate the usage of, and engagement with, the cited work by other authors. The citation count has been used as a proxy for quality, with quality indicated by a high level of citations. However, this is not always the case, and citation counts alone cannot determine the influence or quality of an output.
An example is a highly cited paper in which the study was flawed and the ethical behaviour of the research team was questionable: the authors incorrectly claimed that the combined measles, mumps and rubella vaccine caused autism in children. The retracted paper, "Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children", has been cited 2559 times (Google Scholar, 15/6/17) and, although scientifically discredited, continues to be cited. See https://sciencebasedmedicine.org/lancet-retracts-wakefield-article/ for more information.
The message here is not to use citation analysis in isolation – it is just one part of the story.
Consider citation analysis when you want to find:
The ‘count’ of citations differs according to the citation tool used, because different databases have different content coverage, including the number of publications indexed and the years covered.
Example: Scopus has better coverage for Education than Web of Science because Scopus is more interdisciplinary and Web of Science specialises in the scientific disciplines.
In all disciplines, citations take time to accrue. However, some disciplines (e.g. chemistry and biomedical science) have faster peer-review and publication processes and consequently higher citation rates than others (e.g. education, creative industries). Publishing norms and citation patterns can differ between disciplines. Consequently, there are different ways to track and measure influence across subject areas.
The h-index is a variation on the concept of times cited and is an author level metric that attempts to measure both the productivity (number of papers) and the citation impact of a researcher’s publications. It provides a mechanism for the work of individual researchers to be compared with others in the same discipline.
Example: If a researcher has an h-index of 4, this means that the researcher has four papers that have each been cited four times or more.
Example: If a researcher has an h-index of 15, the researcher has fifteen papers that have each been cited fifteen times or more.
The h-index is a simple, cumulative indicator of research performance and can be calculated using citation tools such as Web of Science, Scopus and Google Scholar Profile.
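The h-index definition above can be expressed as a short calculation. The following is a minimal illustrative sketch (not tied to any particular citation tool), where the list of citation counts is assumed to come from a source such as a Web of Science, Scopus or Google Scholar Profile export:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each."""
    # Sort citation counts from highest to lowest
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        # A paper contributes to h only if its citation count
        # is at least its rank in the sorted list
        if count >= rank:
            h = rank
        else:
            break
    return h

# Matches the first example above: four papers cited four times or more
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the result depends entirely on the citation counts supplied, so the same author can have different h-index values in different databases.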
Remember two basic rules when using the h-index:
Hirsch (2005) provides a strong caveat for use of the h-index:
“Obviously a single number can never give more than a rough approximation to an individual’s multifaceted profile, and many other factors should be considered in combination in evaluating an individual" (Hirsch, 2005, p. 16571).
Assessing the quality of a journal may involve looking at:
Journal ranking and journal impact factors are quantitative measures which attempt to rank and estimate the importance and performance of a journal in a particular field.
Note that both Scopus and Web of Science provide access to journal quartiles based on their individual metrics.
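Journal quartiles place each journal into one of four bands (Q1 highest) according to its rank within its subject category. A minimal sketch of how such bands are commonly derived is shown below; the exact method each database uses may differ, and the rank and total here are assumed inputs for illustration:

```python
def journal_quartile(rank, total):
    """Assign a quartile band from a journal's rank within its
    subject category (rank 1 = best), Q1 being the top 25%."""
    fraction = rank / total  # fraction of journals at or above this rank
    if fraction <= 0.25:
        return "Q1"
    elif fraction <= 0.50:
        return "Q2"
    elif fraction <= 0.75:
        return "Q3"
    return "Q4"

# A journal ranked 12th of 100 in its category falls in the top 25%
print(journal_quartile(12, 100))  # Q1
```

Because Scopus and Web of Science rank journals using their own metrics and category lists, the same journal may fall into different quartiles in each tool.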