Antonis Faras
5 min read · Jul 29, 2024
The Doctor, 1891 by Luke Fildes

I. Reflection on Fraud by Numbers: Metrics and the New Academic Misconduct by M. Biagioli

Read the article here: https://lareviewofbooks.org/article/fraud-by-numbers-metrics-and-the-new-academic-misconduct/

In his article, Biagioli identifies two distinctive features of the modern university, gaming and excellence, which have driven a shift from the traditional, more qualitative method of peer review to a more managerial and target-oriented culture. Concepts such as excellence have become greatly dependent on metrics and quantitative targets and seem to draw their meaning from them. For critics of this shift, regimes of evaluation are left to define the values and the distribution of economic resources for universities; for its advocates, they represent the introduction of transparency and accountability.

Biagioli analyzes this situation to show how concepts like value, quality, and “publication” are being radically reshaped in the age of metrics. In this environment, new forms of academic misconduct have multiplied, such as fake peer reviews, citation rings (citing agreements between scientists of different disciplines), and coercive citations (quid pro quos). Universities are treated as brands in terms of publication and visibility, and fake collaborations are thriving. An even more radical approach involves hacking citation databases.

These practices are not limited to scientists; journals and universities engage in them as well, and they illustrate the vulnerability of metrics and the loopholes in regimes of evaluation. The author considers them postproduction manipulations and notes that the variety of metrics-based misconduct, and the attempts to confront it, are creating still more manipulations. While the regimes work as disciplinary techniques, they also create a causal link:

as they involve complexity, the opportunities for manipulations also grow.

Although we can understand the ethical and financial implications of these new misconducts, in terms of policy they are not recognised as manipulations (not even as plagiarism) because they do not falsify evidence or appropriate ideas. They are treated more like an elaborate technique of social engineering based on data and metrics. In this environment, we have seen the rise of a new literary technology, “scientometrics”, which has become the backbone of an emerging economy of academic value and credit.

High-impact publications, achieved through manipulation, mean that the researcher can request a better salary and perks. The university that “games” will benefit from the publications, attracting funding and elite students. And so citations have become tokens (or, more precisely, future promises) of value. Scientometrics have enabled instruments of transaction (such as the Journal Impact Factor (JIF) and Elsevier’s CiteScore, which the author analyses later on) that separate actual citations from the object of the transaction (impact).

The JIF rates articles that have not yet received the number of citations they are expected to, based on the journal’s historical averages. In this way, new articles can be measured as having an impact the moment they are published. This framing is a jump-start for authors and universities that would otherwise have to wait years for that kind of evaluation, but it is not extended to new journals, thus channeling article submissions to established journals and creating a self-fulfilling “prophecy” of impact for them. With an economic analogy, the JIF could be considered a Central Bank issuing its currency (impact scores) and regulating its market with it.
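For readers unfamiliar with how the number itself is produced: the JIF for a given year is simply the citations received that year to items the journal published in the previous two years, divided by the count of citable items it published in those years. A minimal sketch, with invented figures for a hypothetical journal:

```python
def journal_impact_factor(citations_to_prev_two_years: int,
                          citable_items_prev_two_years: int) -> float:
    """JIF for year Y = citations in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations received in 2024 to the
# 200 articles it published across 2022 and 2023.
print(journal_impact_factor(480, 200))  # → 2.4
```

Note how every article in the journal inherits this single journal-level score, regardless of how often it is actually cited; this is exactly the “advance on future citations” that Biagioli describes.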

The author states that all this has resulted in the monetization and relativization of impact, with money becoming the equivalent-form of impact. These instruments have not managed to define impact; rather, they have created products that black-box the unanswerable question:

What is (or How to Define) Impact?

Moving on, Biagioli demonstrates another element of the economics of scientometrics: its inflationary rate. Without a clear limit or regulation, actors will strive to produce ever more proxies of impact, and thus more misconduct. Manipulation happens at the postproduction stage: it concerns not the paper itself but its acceptance and assessment. Manipulation is thus facilitated by forming relations that minimize the time needed for attributing impact.

Ideas such as altmetrics (tracking other empirical traces like tweets, mentions on Reddit, blogs, and so on) have emerged, but mainly as a response to the need for fast assessment of impact, not as an answer to the question of impact or of scientific validity. On the same grounds, the author clearly states:

JIF has managed to reduce the drawbacks of impact-based metrics from within, but only by gaming the foundations of the whole system, thus effectively hacking time.

We now have a metrics economy in which metadata is worth more than the content, the productive effort that should actually be evaluated!

Although the author focuses mainly on the new forms of academic misconduct, he also notes that the traditional cheats still exist and even receive digital help (with image-manipulation software such as Photoshop), moving from the laboratory to the computer. Due to the obsession with productivity and the available digital tools, there has been an increase in recursive fraud with low-quality or repetitive materials.

The author thus shows that postproduction misconduct and innovations in traditional fraud, although seemingly different, are products of:

  1. the metrics economy and the need to “publish or perish”.
  2. the devaluation of content, which is regarded as mere wrapping paper. Readers and reading seem to be irrelevant!

In this landscape, the only readers that matter seem to be the algorithms!

II. Questions/Comments for the Author

  1. Scientometrics are presented as a technology for classifying science, but they seem to be more of an industry of measurement.
  2. In the article, Biagioli shows that this is connected to the refusal to define concepts such as impact and excellence. It is also evident that key concepts are borrowed from economics and reflect the dominant economic models. Could we also consider scientometrics business-oriented and biased?
  3. Could we claim that scientometrics are a science? To make the question more explicit: is there a general method or scientific concept behind them?
  4. These questions relate to the fact that scientometrics are clearly a product of the scientific community, but it is not clear whether they are merely that or could also be considered a structured field of research. We have to acknowledge that there is a literary technology for producing scientific content. Without defining what scientific impact is, literary technology is the closest determinant of impact.
  5. We could argue that Commons-based peer-production (CBPP) constitutes an important driver for innovation and cultural development today, both online and offline.

Is this kind of production better suited to the search for a definition and assessment of scientific impact, free of the influence of the market?

Thank you for your time reading this,

Antonis.


Written by Antonis Faras

Technology Manager and Researcher, Member of sociality.coop, Ph.D. Candidate at NKUA. Interested @Digital Technology, Maintenance, Economic Alternatives
