Impact and impact factors
The way in which we think of medical journals depends an awful lot on where you start. Some academics are obsessed with things called “impact factors” that measure how often papers in a journal are cited (Box 1). Other people make sweeping assumptions about the quality of journals based on a range of things: peer review, whether they have heard of the journal, where it is published, or whatever. Bandolier has certainly heard some singular definitions of journal quality in its travels.
Box 1: Impact factor calculation
An impact factor for a journal attempts to provide a measure of how frequently papers published in a journal are cited in the scientific literature. It is derived by dividing the number of citations in one year to items published in the journal in the previous two years by the number of those items. The calculation is as follows:
- A = total literature citations in 2003 to substantive items published in the journal.
- B = the number of citations in A that refer to articles published in 2001 and 2002.
- C = the number of substantive articles published in the journal in 2001 and 2002.
The impact factor is then B divided by C, and gives the average number of times an article published in the journal in 2001 and 2002 was cited in 2003.
Thus if there were 1000 citations in 2003 for 100 articles published in a journal in 2001 and 2002, the impact factor would be 10. Most journals (and there are many, many journals) have impact factors that are below 2. Journals with impact factors above 4 tend to be regarded as having a high impact factor, and those above 10 are stellar.
- E Garfield. The impact factor. Current Contents 1994 20: 3-7.
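The calculation in Box 1 can be sketched in a few lines of code. This is an illustrative sketch of the arithmetic only; the function name and the numbers (taken from the worked example in the text) are not part of any real citation database.

```python
def impact_factor(citations_to_recent_articles: int, articles_published: int) -> float:
    """2003 impact factor: citations in 2003 to items published in
    2001-2002 (B), divided by the number of substantive items the
    journal published in 2001-2002 (C)."""
    return citations_to_recent_articles / articles_published

# Worked example from the text: 1000 citations in 2003 for 100 articles
# published in 2001 and 2002 gives an impact factor of 10.
print(impact_factor(1000, 100))  # 10.0

# And the low end mentioned later: one citation per 10 articles is 0.1.
print(impact_factor(1, 10))  # 0.1
```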
Impact factors cover a wide range. Some journals achieve impact factors of 25 or more, anything over 4 is good, most journals have impact factors of 1 or 2, while many have impact factors well under 1. An impact factor of 0.1, for instance, indicates one citation for every 10 articles published. How do impact factors relate to professionals' attitudes to journals and their quality? A study tells us they are related, but not that closely.
A questionnaire was mailed to 416 randomly selected physicians (practitioners and researchers) in the US. Physicians were asked to rate the overall quality of general medical journals on a scale of 1 to 10, with 10 being highest. Respondents were also asked whether they subscribed to each journal, and whether they read it regularly.
There were 269 returned questionnaires (a 65% response rate). Respondents had an average age of 46 years, were mostly men, and 26% were registered in a medical subspecialty. Their average responses, and the journal impact factors, are in Table 1.
Table 1: Quality rating of journals by physicians and impact factor
| Journal |
|:---|
| New England Journal of Medicine |
| Annals of Internal Medicine |
| Journal of the American Medical Association |
| Archives of Internal Medicine |
| American Journal of Medicine |
| Journal of General Internal Medicine |
| Southern Medical Journal |
None of the journals was rated much less than 5 out of 10, or more than 8 out of 10, with less than a two-fold range of quality between high and low. By contrast, there was a 15-fold range in subscription and reading levels, and a 163-fold range in impact factors. There was a correlation between physician rating and impact factor.
Perhaps these results are not unexpected. All of the journals in Table 1 are good, respectable journals, publishing interesting and high-quality papers. That physicians rated them as they did makes sense. In a US audience, the lower readership for a UK journal (Lancet) and a regional journal (Southern Medical Journal) did not adversely affect judgement of quality.
There are two lessons. Academic pointy-heads should follow the impact factors. The rest of us should seek out good quality and relevant papers to read, wherever they are published, and make more use of our medical librarians (or knowledge managers).
- S Saha et al. Impact factor: a valid measure of journal quality? Journal of the Medical Library Association 2003 91: 42-46.