Researchers in the field of scientometrics are quite often robust in their criticisms. Last week, the paper "A simple proposal for the publication of journal citation distributions" was published online. The contribution by Larivière et al. (2016) takes a familiar normative stance regarding the 'misuses' and 'unintended effects' of indicators, and of the Journal Impact Factor (JIF) in particular. Arguments against the JIF often cite its technical shortcomings, for instance the claim that it is open to manipulation and misuse by editors and uncritical parties. Larivière et al. (2016) argue that the JIF "is an inappropriate indicator for the evaluation of research or researchers" because such single numbers conceal "the full extent of the skew of distributions and variation in citations received by published papers that is characteristic of all scientific journals" and therefore "assume for themselves unwarranted precision and significance." The authors hope that the method they propose for generating the citation distributions that underlie JIFs will "help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment."
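To make the skew argument concrete, here is a minimal, hypothetical sketch (not taken from Larivière et al.; all numbers are invented for illustration) of how a JIF-style average can sit far above what a typical paper in the same journal actually receives:

```python
# Illustrative sketch only: synthetic per-article citation counts for a
# hypothetical journal, used to show how a single mean-based figure such as
# the JIF can conceal a heavily skewed citation distribution.
import random
import statistics

random.seed(42)

# 200 hypothetical citable items in a two-year window; a log-normal draw is
# used purely to mimic the heavy skew reported for real journals.
citations = [int(random.lognormvariate(1.0, 1.2)) for _ in range(200)]

jif_like_mean = sum(citations) / len(citations)    # what a JIF-style average reports
median_citations = statistics.median(citations)    # what a typical paper receives
zero_cited_share = sum(c == 0 for c in citations) / len(citations)

print(f"Mean citations per item (JIF-like): {jif_like_mean:.2f}")
print(f"Median citations per item:          {median_citations:.1f}")
print(f"Share of items never cited:         {zero_cited_share:.0%}")
# A handful of highly cited papers pull the mean well above the median, so the
# single number says little about the citation impact of any individual paper.
```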
Not merely for the sake of argument
Improving validity, reliability and transparency is obviously a useful endeavor, but I will argue that it is more helpful if we do not take received assumptions about the JIF as given. Instead, we should make the multiple roles and influences of the JIF in actual research practices a focus of intervention. My primary concern is this: by limiting solutions to 'improper' indicator uses to questions of validity or transparency, we assume that more transparency or better indicators will necessarily give rise to better evaluation practices. Though I applaud sincere, methodologically sophisticated calls for more transparency such as the one made by Larivière et al., I am afraid they do not suffice. The recourse we then take is towards an upstream solution, guided by an optimistic yet also slightly technocratic mode of 'implementation' (de Rijcke & Rushforth, 2015). If journals were indeed to start publishing the citation distributions behind their JIFs, what exactly would this change on the shop floor, in assessment situations, and in the daily work of doing research?
The JIF trickles back up
A general criticism that can be made of the well-known accounts against the JIF is their inattention to the 'folk theories' of the JIF as applied by scientists and evaluators in actual practices (Rushforth & de Rijcke, 2015; see also Aksnes & Rip, 2009). "What characterizes folk theories is that they provide orientation for future action… They are a form of expectations, based in some experience, but not necessarily systematically checked. Their robustness derives from their being generally accepted, and thus part of a repertoire current in a group or in our culture more generally." (Rip 2006, 349) In our research in biomedicine, for instance, we found that researchers use these 'folk theories' to navigate quite routine knowledge-making activities, including selecting useful information from the overwhelming amounts of literature they could potentially read; settling discussions over whom to collaborate with and when; and deciding how much additional time to spend in the laboratory producing data. Given the extent of this embeddedness, I must say I feel ambivalent about statements that the JIF 'misleads'. For one thing, not all of these embedded uses are grounded in naïve assumptions about the citation performance of individual papers in particular journals with a certain JIF. For another, who misleads whom? These different embedded uses of the JIF will 'trickle back up' into formal assessment procedures, because "[a]uditors are not aliens. They are versions of ourselves." (Strathern 1997, 319)
An example
Let's consider the hypothetical situation of a formal assessment procedure in which a research group in oncology is looking to hire a new professor. The hiring committee has at its disposal the publication lists of the candidates. These lists also specify the JIF of the journals in which the candidates have published. Now suppose that the committee takes a look at the publication list of one of the candidates. The committee members start to compare the journals on the list by way of their JIFs. They see that this researcher mainly publishes in the top-tier journals in oncology. If we take the warning of Larivière et al. to heart, we would advise the committee members not to conflate these numbers with the actual citation impact of the individual papers themselves. And rightly so. But does this mean that all uses of the JIF in this formal hiring procedure in this particular setting are off limits? Larivière et al. would have to answer this question with a 'yes', seeing that they disapprove of all uses of the JIF in the assessment of individual researchers. However, I think that in this case it is quite possible to come up with a reasonable motive for using JIFs to support the decision-making process about whom (not) to hire. In some biomedical fields, different 'tiers' of journals with certain JIF ranges can denote both a certain standing in the field and a particular type of scientific work (e.g. descriptive in the lower JIF ranges versus causal in the higher JIF ranges). So what a committee can hypothetically do in this field, on the basis of the JIFs, is assess whether a researcher mainly does descriptive work or primarily publishes on biological mechanisms. In other words, the committee can make a substantive assessment of the type of work the candidate is involved in by looking at the JIF values of the journals in which she publishes (among other characteristics of the journals). And they can use this information, for instance, to deduce whether or not the candidate's research lines would fit in with the rest of the research team they are hiring for. The reader of this blogpost will understand that this is merely a hypothetical example. But I hope the point is clear: the JIF can acquire a range of different meanings in actual research and assessment practices.
Conclusion
Larivière et al. put forth a methodologically driven plea to focus not on the JIF but on individual papers and their actual citation impact. Though this is commendable, I think the strategy obscures a much more fundamental issue about the effects of the JIF on the daily work of researchers and evaluators. JIF considerations have a tendency either to push other measures of scientific quality (e.g. originality, long-term scientific progress, societal relevance) into the background, or to allow them to become redefined through their relations to the JIF and other quantitative performance indicators. In my opinion this insight leads to a crucial shift in perspective. For truly successful interventions into indicator-based assessment practices to happen, I think we need to move beyond overly simplistic entry points into the debate about 'misuse' and 'unintended effects'. My hypothesis is that researchers (and evaluators) continue to use the JIF in assessment contexts, despite its technical shortcomings, for the complicated reason that the indicator is already so ingrained in the knowledge-producing activities of different fields. Our research findings suggest that in calling for researchers and evaluators to 'drop' the JIF, people are actually calling for quite fundamental transformations in how scientific knowledge is currently manufactured in certain fields. That transformation is the primary, and also quite daunting, task.
I would like to thank Alex Rushforth for collaborating with me on the project in biomedicine that I draw on extensively above. The text is partly based on our joint articles that came out of the project.
I would like to thank Paul Wouters and Ludo Waltman for valuable discussions that informed the preparations for this blogpost.
See also the blogpost by Ludo Waltman in response to the same article.
References
- Aksnes, D.W. & Rip, A. (2009). Researchers' perceptions of citations. Research Policy, 38(6), 895-905.
- de Rijcke, S. & Rushforth, A.D. (2015). To intervene, or not to intervene, is that the question? On the role of scientometrics in research evaluation. Journal of the Association for Information Science and Technology, 66(9), 1954-1958.
- Larivière, V., Kiermer, V., MacCallum, C.J., McNutt, M., Patterson, M., Pulverer, B., Swaminathan, S., Taylor, S. & Curry, S. (2016). A simple proposal for the publication of journal citation distributions. bioRxiv, 062109.
- Rip, A. (2006). Folk theories of nanotechnologists. Science as Culture, 15(4), 349-365.
- Rushforth, A.D. & de Rijcke, S. (2015). Accounting for impact? The Journal Impact Factor and the making of biomedical research in the Netherlands. Minerva, 53, 117-139.
- Strathern, M. (1997). 'Improving ratings': Audit in the British University system. European Review, 5(3), 305-321.