University of York Library

Bibliometrics: a Practical Guide

Publication strategies

How to choose a journal

You might want to choose a journal with a higher journal impact factor (see below) to promote the visibility of your work, but other factors are just as important:

  • A good subject fit
  • Likelihood of being accepted
  • Reputation of the journal in your research community
  • Turnaround time
  • Open research practices etc.

Finally, check that your chosen journal is valid and credible. Basic checks to avoid predatory journals can be found on Think.Check.Submit.

 

Open Research

Have you ever been excited about discovering an interesting new paper, only to find that the information you need is behind a paywall? Don't let this happen to your own work.

Open Research practices can be embedded in all the different stages of a project. At publication stage, consider sharing access to your data and preprint, and favour journals with open peer review and/or open access publication. Under the University's Research Publications and Open Access Policy, authors now retain the rights to make their accepted manuscript openly available. This is achieved by granting the University a non-exclusive licence to make manuscripts publicly available. More information can be found in a separate subject guide.

 

 

Journal-level metrics

All journal-level metrics are based on average citation counts and do not reflect the quality of the peer-review process, scrutiny of published research methods or rejection rates.

 

 

Background: The first journal-level metric, the Journal Impact Factor (see below), was originally developed to support libraries' indexing and purchasing decisions for their journal collections [Garfield, 1963]. Nowadays, journal-level metrics can also help you target the right journal for your manuscript. A variety of them measure the visibility ('impact') of a journal based on its average citations. Journal-level metrics are easily skewed, e.g. if a single article is very highly cited (a 2010 paper in Nature explains how this can happen).

Remember: As journal-level metrics reflect the aggregate citations of all publications in a journal, they cannot be used to measure the research impact of a single paper, or of an author, published in that journal.

 

Publication in a highly visible ('high impact') journal supports no conclusions about the research impact of the study or its authors.

Publish or Perish?

Unfortunately, many research and hiring practices still foster a research culture of quantity over quality in research publications. Early-career researchers especially are often under a lot of pressure to publish.

However, some manuscripts gain in importance with additional data and time. Only you can decide whether your manuscript is ready for submission. Dividing great scientific discoveries into several less significant publications (“smallest publishable units”) can render your big idea unrecognisable and research excellence might suffer.

Additionally, publication practices vary hugely between research fields and career stages - don't base your publication strategy on the publication rates of your colleagues.

In a collective endeavour, the research community at York aims to transform our research environment, allowing people to thrive and research to be conducted to the highest standards.

Journal-level metrics

The Journal Impact Factor (JIF), published by Clarivate, uses citation data from the Web of Science database. All information on the JIF is collated in the Journal Citation Reports, which also include Eigenfactor rankings and a five-year JIF.

Example:
2022 impact factor of a journal:

200 = citations received in 2022 to publications from 2020 and 2021
73  = number of articles and reviews published in 2020 and 2021

2022 impact factor = 200/73 ≈ 2.74

Calculation: The count of citations received in the current year to publications from the previous two years is divided by the number of articles and reviews published over that two-year period.
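The arithmetic can be sketched as a simple ratio. This is an illustration of the calculation described above, not Clarivate's implementation; the function name and figures follow the guide's worked example:

```python
def journal_impact_factor(citations_to_prior_two_years, items_prior_two_years):
    """JIF for year Y: citations received in Y to publications from
    Y-1 and Y-2, divided by the number of citable items (articles
    and reviews) published in Y-1 and Y-2."""
    return citations_to_prior_two_years / items_prior_two_years

# The guide's worked example: 200 citations in 2022 to 2020-21 items,
# 73 articles and reviews published in 2020-21.
jif_2022 = journal_impact_factor(200, 73)
print(round(jif_2022, 2))  # 2.74
```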

Remember: Citation patterns vary by discipline. As the JIF is not field-weighted, a JIF that counts as high in one discipline might be low in another. Although in the majority of disciplines articles reach a citation peak after around two years, some fields have slower citation rates. The 5-year Journal Impact Factor may be a better choice in these cases.

Compared to the clinical, natural and social sciences, the type of content considered to be of scholarly importance, the norms for reviewing content, and citation behaviour may differ significantly in the arts and humanities. Web of Science has therefore created a specially curated journal database that incorporates these differences by using a set of 28 criteria. Although citation activity is still used as the primary impact indicator, four 'impact' criteria are combined with 24 'quality' criteria to create the Arts & Humanities Citation Index.

Journals in the Arts and Humanities Citation Index do not receive Journal Impact Factors, as their publication patterns are so different from other disciplines. However, since 2020 Web of Science has provided a field-normalised Journal Citation Indicator for all journals it indexes, including the Arts and Humanities collection (access it through the Master Journal List, filtering by 'Web of Science Coverage' on the left).

The CiteScore, published by Scopus, uses citation data from the Scopus database. You can access a full list of journals and their CiteScores through Scopus, which can be filtered by subject area.

Example:
2022 CiteScore of a journal:

400 = citations received in 2019–2022 to 'citable items' published in 2019–2022
146 = number of citable items published in 2019–2022

2022 CiteScore = 400/146 ≈ 2.74

Calculation: The count of citations received over the last four years to items published in the same four-year period is divided by the number of articles, reviews, conference papers, book chapters and data papers ('citable items') published over that period.
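As with the JIF, the calculation is a single ratio over a wider window. A minimal sketch using the guide's worked example (not Scopus's implementation):

```python
def citescore(citations_four_year_window, citable_items_four_year_window):
    """CiteScore for year Y: citations received in years Y-3..Y to
    citable items (articles, reviews, conference papers, book chapters
    and data papers) published in Y-3..Y, divided by the number of
    those items."""
    return citations_four_year_window / citable_items_four_year_window

# The guide's worked example: 400 citations in 2019-2022 to 146 citable
# items published in 2019-2022.
print(round(citescore(400, 146), 2))  # 2.74
```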

Remember: Citation patterns vary by discipline. Like the JIF, the CiteScore is not field-weighted, so a CiteScore that counts as high in one discipline might be low in another. However, its four-year window (compared with the JIF's two years) captures the citation peak of most disciplines.

The Source-Normalised Impact per Paper (SNIP) reflects the ratio between the 'Raw Impact per Paper' (a type of Citations per Publication calculation) actually received by the journal and the 'Citation Potential' (the expected Citations per Publication) of that journal's field. In contrast to the Journal Impact Factor (JIF) or CiteScore, the SNIP corrects for differences in citation practices between scientific fields and considers a three-year citation window. In many disciplines, citations need on average more than two years to peak, so the larger citation window leads to more accurate results. Together, the larger citation window and the source normalisation allow journals from different disciplines to be compared.

In practice, the SNIP is calculated without a field classification system (compare 'Field-Weighted Citation Impact'): citation counts are normalised for field differences based on the characteristics of the sources from which the citations originate. For example, if a journal is cited mainly by publications with long reference lists, the journal finds itself in a field with a high citation density (e.g. cell biology), which means it can be expected to receive a relatively large number of citations per publication. Conversely, if the publications that cite a journal tend to have short reference lists, the journal appears to be active in a low-citation-density field (e.g. mathematics), and a lower number of citations per publication can be expected.
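The normalisation can be sketched with hypothetical numbers. The journals and figures below are invented for illustration, not real SNIP inputs, but they show how the ratio makes journals from high- and low-citation-density fields comparable:

```python
def snip(raw_impact_per_paper, citation_potential):
    """SNIP = citations per paper the journal actually received, divided
    by the citations per paper expected in its field (the 'citation
    potential', estimated from the reference-list lengths of the
    sources that cite the journal)."""
    return raw_impact_per_paper / citation_potential

# Hypothetical journal in a high-citation-density field (long reference
# lists among citing sources, so many citations are expected):
cell_biology_journal = snip(raw_impact_per_paper=8.0, citation_potential=4.0)

# Hypothetical journal in a low-citation-density field (short reference
# lists, so few citations are expected):
mathematics_journal = snip(raw_impact_per_paper=2.0, citation_potential=1.0)

# After normalisation the two journals are directly comparable:
print(cell_biology_journal, mathematics_journal)  # 2.0 2.0
```

Despite receiving four times as many raw citations per paper, the cell-biology journal ends up with the same SNIP as the mathematics journal, because its field's citation potential is correspondingly higher.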

You can access the SNIP from the Scopus source list (expand list of journal-level metrics with arrow to right).

[Screenshot: the Scopus source list, with the subject-area filter and an arrow on the right to select SNIP or SJR.]

 

SCImago Journal Rank (SJR, based on Scopus data) measures the prestige of the citations a journal receives, without assuming that 'all citations are equal'. Its methodology is similar to Google's PageRank, weighting the value of a citation according to the field, quality and reputation of the journal the citation comes from. As the SJR takes differences in publication patterns between disciplines into account, it can be used to compare journals in different fields.
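The PageRank-style idea can be sketched on a toy citation network. The numbers below are invented, and the real SJR adds refinements (a size correction and a three-year citation window, among others); this only illustrates how prestige flows from citing journals to cited ones:

```python
# Toy network: citations[i][j] = citations from journal i to journal j.
citations = [
    [0, 3, 1],
    [2, 0, 4],
    [1, 1, 0],
]
n = len(citations)

# Each citing journal passes on prestige in proportion to its share of
# outgoing citations, so a citation is worth more when it comes from a
# journal that cites sparingly.
row_totals = [sum(row) for row in citations]
transfer = [[citations[i][j] / row_totals[i] for j in range(n)]
            for i in range(n)]

# PageRank-style iteration with a damping factor: prestige is partly
# shared equally, partly inherited from the journals that cite you.
damping = 0.85
prestige = [1 / n] * n
for _ in range(100):
    prestige = [
        (1 - damping) / n
        + damping * sum(transfer[i][j] * prestige[i] for i in range(n))
        for j in range(n)
    ]

# Scores sum to 1; a citation from a high-prestige journal contributes
# more than one from a low-prestige journal.
print([round(p, 3) for p in prestige])
```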

You can access the SJR from the Scopus source list (expand list of journal-level metrics with arrow to right).

[Screenshot: the Scopus source list, with the subject-area filter and an arrow on the right to select SNIP or SJR.]