Advanced Information Research Skills

Main metrics

There are a number of metrics for evaluating the possible influence of authors, articles, journals and other publications. They include publication counts, citation counts, the h-index and journal measures such as the Journal Impact Factor and the SCImago Journal Rank.
Examples of questions that can be answered by these types of metrics are:

  • What are the best journals in the field of (engineering, nursing, sports science,…)?
  • Who is citing my articles?
  • How many times have I been cited?
  • How do I know this article is important?
  • In which journal should I publish?
  • What is my supervisor’s h-index?

Individual metrics

A basic metric for measuring author productivity is the publication count: the total number of publications produced by an author. A higher publication count suggests that a researcher, unit or institution is more active within the research community, although it says nothing about the quality or influence of that work.

A citation means that a scholarly work has been cited in the text and reference list of a publication. Citation counts indicate the usage of, and engagement with, the cited work by other authors. The citation count has been used as a proxy for quality, with high quality indicated by a high level of citations. This is not always the case, however, and citation counts alone, as an indicator of the influence or quality of an output, are unable to determine:

  • if the citations were viewed positively or negatively
  • the quality of the journal the articles are published in or cited by
  • the ranking of researchers from different disciplines.

A cautionary example is a highly cited paper based on a flawed study, by a research team whose ethical behaviour was questionable, which incorrectly claimed that the combined measles, mumps and rubella vaccine caused autism in children; it had been cited 2,559 times (Google Scholar, 15/6/17). The retracted paper, Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children, has been scientifically discredited but continues to be cited. See https://sciencebasedmedicine.org/lancet-retracts-wakefield-article/ for more information.

The message here is not to use citation analysis in isolation – it is just one part of the story.
Consider citation analysis when you want to find:

  • how often an output has been cited (times cited)
  • the total citations and average citations per article for an author (a short sketch after this list illustrates these counts)
  • the average citation count for articles published in a specific journal
  • additional resources on your topic (also called tracking citations). In addition to checking the reference lists of papers you find useful, you can also check who has cited the paper.
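As a rough illustration, the sketch below computes these counts from a small set of made-up per-article citation figures. The numbers and the author are entirely hypothetical; real counts come from tools such as Scopus, Web of Science or Google Scholar, and will differ between tools.

```python
# Toy illustration of basic citation counting.
# The data is invented for this example.

# Hypothetical citation counts for one author's five articles.
citations_per_article = [34, 12, 7, 3, 0]

times_cited = citations_per_article[0]           # times cited for a single output
total_citations = sum(citations_per_article)     # total citations for the author
average_per_article = total_citations / len(citations_per_article)

print(f"Total citations: {total_citations}")          # 56
print(f"Average per article: {average_per_article}")  # 11.2
```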

The ‘count’ of citations differs according to the citation tool used, because different databases have different content coverage, including the number of publications and the years indexed.
Example: Scopus has better coverage for Education than Web of Science, because Scopus is more interdisciplinary while Web of Science specialises in the scientific disciplines.

In all disciplines, citations take time to accrue. However, some disciplines (e.g. chemistry and biomedical science) have faster peer-review and publication processes and consequently higher citation rates than others (e.g. education, creative industries). Publishing norms and citation patterns can differ between disciplines. Consequently, there are different ways to track and measure influence across subject areas.

The h-index is a variation on the concept of times cited and is an author level metric that attempts to measure both the productivity (number of papers) and the citation impact of a researcher’s publications. It provides a mechanism for the work of individual researchers to be compared with others in the same discipline.

Example: If a researcher has an h-index of 4, this means that the researcher has four papers that have each been cited four times or more.
Example: If a researcher has an h-index of 15, the researcher has fifteen papers that have each been cited fifteen times or more.
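Computing an h-index from a list of per-paper citation counts is straightforward: sort the counts in descending order and find the largest rank h at which the paper in position h still has at least h citations. A minimal sketch, using invented citation counts:

```python
def h_index(citation_counts):
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher: four papers cited at least four times each.
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```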

Benefits
The h-index is a simple, cumulative indicator of research performance and can be calculated using citation tools such as Web of Science, Scopus and Google Scholar Profile. 

  • It is most appropriate for researchers who are established and have published extensively.
  • It measures “durable” performance, not only single peaks, and avoids skewing by one highly cited paper.
  • It is not limited to journal articles; document types can include conference papers and book chapters.

Limitations 

  • It is not a good indicator for early career researchers, as both their publication output and citation rates will be relatively low.
  • It is highly dependent on the length of a researcher’s career, meaning only researchers with similar years of service can be compared fairly.
  • It provides no indication of peaks and dips in publication performance.
  • It is a less appropriate measure of academic achievement for researchers in the humanities and social sciences.
  • It can be inflated by self-citations.

Remember two basic rules when using the h-index:

  1. Compare within disciplines.
  2. Benchmark against average or expected citations in a field of research.

Hirsch (2005) provides a strong caveat for use of the h-index:

“Obviously a single number can never give more than a rough approximation to an individual’s multifaceted profile, and many other factors should be considered in combination in evaluating an individual” (Hirsch, 2005, p. 16571).

Journal metrics

Assessing the quality of a journal may involve looking at:

  • Journal metrics – e.g. the SCImago quartile for the journal, the prestige metric (SCImago Journal & Country Rank (SJR)), the Journal Impact Factor (based on citation data)
  • Editorial board membership
  • Peer review process
  • Journal indexation
  • Journal scope
  • Publication lag
  • Reach.

For more information see Journal metrics on our Strategic Publishing page.

Journal ranking and journal impact factors are quantitative measures which attempt to rank and estimate the importance and performance of a journal in a particular field.

  • The SCImago Journal & Country Rank (SJR) is a metric representing the number of citations received by a journal over a three-year period, weighted according to the prestige of the journals from which the citations originated.
    • The SJR is an objective measure of overall quality of journals within a discipline. More prestigious journals have higher SJRs.
    • The Source Normalized Impact per Paper (SNIP) can be used to make comparisons between journals from different disciplines and is considered a popularity metric: every citation is counted as one regardless of where it comes from, unlike the SJR, which is calculated based on the origin of each citation. (A toy sketch contrasting the two counting approaches appears after this list.)
    • Journal quartile rankings can also be found on the SCImago Journal & Country Rank website.
  • The Scopus Compare sources tool compares up to 10 journals on various indicators, including SCImago Journal Rank (SJR), SNIP and CiteScore metrics.
  • The Journal Impact Factor (JIF) is a metric representing the average citation count of papers published in a journal over a two-year period (a worked sketch follows this list).
    • The JIF is an objective measure of overall quality of journals within a discipline. More prestigious journals have higher JIFs.
    • If the journal has an impact factor, it can be found at the journal publisher’s website, or access Journal Citation Reports for impact factors and quartile rankings per subject for journals indexed by Clarivate (formerly Thomson Reuters).
    • Journal Quartile rankings are derived for each journal in each of its subject categories according to which quartile of the Impact Factor (IF) distribution the journal occupies for that subject category:
      • Q1 comprises the quarter of the journals with the highest values (top 25%).
      • Q2 the second highest values (between top 50% and top 25%).
      • Q3 the third highest values (top 75% to top 50%).
      • Q4 the lowest values (bottom 25% of the IF distribution).
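To make the difference between the counting schemes concrete, the toy sketch below contrasts a plain citation count (every citation worth one, as in the popularity view) with a prestige-weighted count in the spirit of the SJR. The journals, counts and prestige weights are all invented for illustration; the real SJR is computed iteratively over the whole citation network, PageRank-style, and is considerably more involved.

```python
# Invented data: citations received by one journal, keyed by the
# citing journal, plus made-up prestige weights for those journals.
citations_from = {"Journal A": 10, "Journal B": 5, "Journal C": 20}
prestige = {"Journal A": 0.9, "Journal B": 0.5, "Journal C": 0.1}

# Popularity view (SNIP-style): every citation counts as one.
plain_count = sum(citations_from.values())

# Prestige view (SJR-style, heavily simplified): citations from
# prestigious journals count for more.
weighted_count = sum(n * prestige[j] for j, n in citations_from.items())

print(plain_count)     # 35
print(weighted_count)  # 10*0.9 + 5*0.5 + 20*0.1 = 13.5
```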
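The JIF calculation and the quartile assignment described above can likewise be illustrated with a small sketch. All figures and journal rankings below are hypothetical; real values come from Journal Citation Reports.

```python
# JIF-style calculation for a hypothetical journal in year Y:
# citations received in Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in those years.
citations_in_year = 300   # made-up: citations in Y to Y-1/Y-2 items
citable_items = 150       # made-up: items published in Y-1 and Y-2
jif = citations_in_year / citable_items
print(f"JIF: {jif}")      # 2.0

# Quartile assignment: rank all journals in a subject category by
# impact factor, then split the ranked list into quarters.
field_jifs = sorted([4.1, 3.2, 2.8, 2.0, 1.5, 1.1, 0.7, 0.3], reverse=True)
rank = field_jifs.index(jif) + 1            # our journal is ranked 4th of 8
quartile = -(-rank * 4 // len(field_jifs))  # ceiling of rank / (n/4)
print(f"Q{quartile}")                       # Q2
```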

Note that both Scopus and Web of Science provide access to journal quartiles based on their individual metrics.

  • The Australian Business Deans Council (ABDC) administers the Journal Quality List 2019, which contains journals in the business disciplines, each given a ranking from A* to C.
  • MIS Journal Rankings provides information about the rankings of journals in the area of MIS (management information systems), compiled by the Association for Information Systems.
  • ERA (Excellence in Research for Australia) evaluates the quality of research undertaken in Australian universities against national and international benchmarks. See the submissions journal list for 2023 linked below (not ranked).
  • The Journal Quality List compiled by Dr Anne-Wil Harzing can assist academics to target papers at journals of an appropriate standard. It covers the areas of economics, finance, accounting, management, marketing, tourism, psychology and sociology.
Limitations

  • Journal measures relate to the entire journal and are based on average citations. They cannot assess the quality, or account for the impact, of individual articles in a journal.
  • In research areas such as computer science and engineering, where the main form of scholarly communication is conference papers rather than journal articles, journal measures may be less relevant.
  • In some research areas, such as the humanities and social sciences, the number of journals listed in the tools for deriving journal measures may be low.
  • The coverage of a database providing journal measures may be unevenly distributed across subject areas, and some journals may have no measures at all.
  • Journal measures are available for only a small number of journals that publish in languages other than English.