Enka Blanchard
This short note was written after I noticed reviewing issues in a conference[1] — which will not be explicitly named here[2]. After a first fruitless exchange with the PC chairs and a second exchange with the steering committee members, I realised that more fundamental issues were at play and deserved to be made explicit. The goal is not to publicly attack anyone but to clarify these issues and hopefully change opinions and protocols (for this venue and maybe others).
But first, we must go back to basics, to the purpose of the system — before looking at how it can get perverted, and at how the argumentation behind such changes is akin to many arguments we observe in discussions of AI and algorithmic justice.
Why peer-review?[3]
What is the goal of peer-reviewing, the justification behind a process that is both imperfect and expensive (in time and effort if nothing else)? The most commonly stated objective is to ensure that the articles that end up published are “correct”, or rather, that they follow a reasonable methodology and give solid evidence for their assertions — with the criteria varying widely between mathematics, the exact and social sciences, and the humanities. This reviewing step is often what separates science from non-science — both in our opinions as scientists and as a social practice.
Beyond deciding whether the research is correct (or at least methodologically sound[4]), a second objective exists (although it is more rarely made explicit), related to the first: gauging an article’s importance and novelty — guessing from the reviews whether readers will read (and cite) the paper, and whether they will consider it important enough to keep increasing the journal’s prestige (as well as its lobbying power to get subscriptions from academic institutions).
In many prestigious venues, the reasoning goes as follows: once the reviews come in, the editors or chairs start by removing the papers with obvious flaws. Then, they choose the most popular ones among the rest — which are all presumably correct. This is far from perfect, for two different reasons. First, the most popular papers tend to be the ones claiming the most surprising results (the novelty factor), which have a relatively high chance of not being replicable, stemming from statistical “luck”, mistakes or fraud — with prestigious journals in particular suffering from this problem. Second, whether an article is accepted or not is in many ways a question of luck, as shown in the famous NeurIPS study[5].
Although this is already far from ideal, more serious concerns appear when we start looking at less prestigious venues, especially the ones on the lower end of the scale. This is no attack against them: any new venue tends to start there, and I strongly believe in the value of small conferences and non-selectivity (when it comes to importance rather than correctness, as we’ll see shortly). Smaller venues face higher risk: if the conference has too few attendees, it might not be eligible for funding and could be cancelled or leave its organisers in the red[6]. Having more presenters (even if the research is of lower quality) guarantees more attendance and a longer programme, which can seem like a good idea. However, it creates an incentive to disregard serious issues in some papers in order to pad the list.
We should remember that this is already a best-case scenario which leaves aside a lot of serious issues. The profit motive is one, especially when the conference has high registration costs[7] (without even going into the question of predatory venues). One can also mention biased reviewers (who can guess whose work it is and either accept as a favour or reject to promote their own school of thought), and even conferences choosing to artificially reject a lot of excellent papers[8] because it is a necessary condition to obtain a good rating/ranking from external observers (such as CORE), the rejection rate often being used as a proxy for the venue’s quality. But let’s get back to peer-reviewing.
Separating correctness and novelty
My training comes from a field quite specific in its practices (mathematics), in which correctness can be hard to evaluate — or rather, in which being sure of it takes hard work and a lot of time. Evaluating the methodology of an empirical paper and writing a thorough review often takes me a few hours at most (and can sometimes be done in 20 minutes). Mathematical papers with proofs can take upwards of a week of full-time work to check (and that’s for ones that are much, much simpler than works like Perelman’s or Mochizuki’s — whose papers’ “evaluation” is still underway and extremely contentious after 12 years). This is, thankfully, not the general case (or peer-review simply would not work at all, rather than partially).
This very high cost of evaluation is one of the reasons why I now support open reviewing, as well as decoupling two different aspects of our reviews. The first is correctness, as above, which is the most time-consuming part. The other is novelty and importance, which can often be gauged by skimming the paper: if it draws the reader in and makes them want to reuse the ideas or build on the paper, that’s a good sign. Some venues already select only for correctness (such as PLoS One, which however suffers from a nasty profit motive, charging between $1k and $2.3k for online publication — although this is still quite a bit below PLoS Medicine).
There is a fundamental difference between correctness and novelty/importance. The first is, presumably, close to objective (as long as Feyerabend’s disciples are not around). Either flaws exist or they don’t; this depends on what is accepted as common methodological practice, which evolves slowly. The second is much more fluid and subjective, depending on what interests the community. Any given venue should hopefully agree in its assessment of correctness, whereas novelty and importance can legitimately vary to a large extent.
Aggregating scores
I was once on a PC where the set of papers that was accepted corresponded exactly to the set of papers whose aggregate rating was above a given threshold. This was surprising to quite a few PC members and was mostly considered to be a random outcome, as multiple papers were debated and scores played at most a small role. However, it also reflects the fact that the reviewers had already had time to change their scores and converge on most papers, at least the contentious ones.
Aggregating the ratings (with accept typically being equivalent to “+2” and reject to “-2”) can give a first-order approximation, which can potentially be made a bit more “accurate” by weighting each score by the reviewer’s confidence. However, it is just that: an approximation. The strength of peer-review lies in the review itself, not in the grade given. If we understand correctness and novelty as two different dimensions, the flattening to a single metric becomes a problem, which we’ll see below by comparing a naive “algorithm” that uses the aggregate rating and a more refined one that uses the qualitative aspects.
Let’s consider a paper with two reviews: one of them judges the paper’s methods to be flawed and gives a reject (the importance being irrelevant). The second reviewer misses the flaw and considers that the claim that the paper gives a cure for cancer is of utmost importance, giving an “accept”. If we average out the scores, it becomes a borderline paper and has a chance of being published. In a prestigious venue, even a single negative review can doom the paper, so the problem rarely comes up. But in venues that have to accept papers that are less consensually excellent, the novelty factor can trump correctness — whereas the latter should remain the primary consideration in peer-review.
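As a toy illustration of this flattening (the scores, confidence values and field names below are hypothetical, not taken from any real system), here is how a confidence-weighted average and a correctness-first rule diverge on such a paper:

```python
# Hypothetical reviews for the example above: reviewer A finds a fatal
# methodological flaw, reviewer B misses it and loves the claimed result.
reviews = [
    {"score": -2, "confidence": 3, "fatal_flaw": True},
    {"score": +2, "confidence": 3, "fatal_flaw": False},
]

# Naive rule: confidence-weighted average, accept if non-negative.
weighted_avg = (sum(r["score"] * r["confidence"] for r in reviews)
                / sum(r["confidence"] for r in reviews))
print("average:", weighted_avg, "->", "accept" if weighted_avg >= 0 else "reject")

# Correctness-first rule: a confirmed fatal flaw dominates the decision.
print("qualitative:", "reject" if any(r["fatal_flaw"] for r in reviews) else "accept")
```

With equal confidences the weighted average lands exactly on the borderline (0.0) and the paper squeaks through the non-negative rule, while the correctness-first rule rejects it outright.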
Adding more reviewers can actually make matters worse. More reviewers per paper mean more reviews, more time spent overall, and less time spent per paper. It also means a higher tendency to ask reviewers to judge science that lies beyond their field of expertise. And any single reviewer who does their job with limited rigour adds to the risk of accepting papers with fatal mistakes. This is especially true as we generally consider that a “reject” needs to be motivated by a detailed review, whereas “this paper is excellent, we should accept it” can sometimes pass for a full review.
Running the numbers
To give a more practical example, let’s consider a world where papers are judged only by the aggregate metric (with grades from -2 to +2), the paper being accepted if its aggregate score is non-negative. Let’s also assume that each flaw has a 1/2 chance of being noticed by each reviewer (which is more generous than my recent observations in multiple PCs), in which case that reviewer gives an automatic reject, and that otherwise the rating is random (randomness being a strong component of real reviews, at least for importance, see footnote 5).
With two reviewers, a correct paper has a probability of 15/25 = 60% of being accepted. On the other hand, a flawed paper gets in with probability 25%. With three reviewers, this becomes 57.6% and 16.2%. This is not great but not catastrophic. If the probability of noticing the flaw drops to 1/4, however, a flawed paper’s probability of getting accepted jumps to 41.3% (two reviewers) and 34.4% (three reviewers). We can also consider that each reviewer has some probability — say 1/4 — of being a lazy reviewer who always says accept (and never catches the flaw). With two reviewers, a correct paper then gets accepted with probability 77.5% and a flawed one with probability 57.8%.
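These figures can be checked by exact enumeration. The following is a minimal sketch of the toy model just described (the function names are mine; the only added assumption is that a lazy reviewer never catches the flaw):

```python
from itertools import product
from math import prod

GRADES = [-2, -1, 0, 1, 2]

def grade_dist(flawed, p_catch, p_lazy):
    """Per-reviewer grade distribution in the toy model: with probability
    p_lazy the reviewer always grades +2 (and never catches a flaw);
    otherwise, on a flawed paper, they catch the flaw with probability
    p_catch and grade -2; in all remaining cases the grade is uniform."""
    dist = dict.fromkeys(GRADES, 0.0)
    dist[2] += p_lazy
    rest = 1.0 - p_lazy
    if flawed:
        dist[-2] += rest * p_catch
        rest *= 1.0 - p_catch
    for g in GRADES:
        dist[g] += rest / len(GRADES)
    return dist

def p_accept_by_average(n, flawed, p_catch=0.5, p_lazy=0.0):
    """Probability of acceptance when the only rule is a non-negative sum
    of grades, computed by exact enumeration over all grade combinations."""
    dist = grade_dist(flawed, p_catch, p_lazy)
    return sum(prod(dist[g] for g in combo)
               for combo in product(GRADES, repeat=n) if sum(combo) >= 0)

print(p_accept_by_average(2, flawed=False))               # ~0.60
print(p_accept_by_average(2, flawed=True))                # ~0.25
print(p_accept_by_average(3, flawed=False))               # ~0.576
print(p_accept_by_average(3, flawed=True))                # ~0.162
print(p_accept_by_average(2, flawed=True, p_catch=0.25))  # ~0.413
print(p_accept_by_average(3, flawed=True, p_catch=0.25))  # ~0.344
print(p_accept_by_average(2, flawed=False, p_lazy=0.25))  # ~0.775
print(p_accept_by_average(2, flawed=True, p_lazy=0.25))   # ~0.578
```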
Now let’s consider not simply aggregating the grades but looking at the reviews in detail. Any reviewer who sees the real flaws pointed out by others would presumably align their views (or at least, a vast majority should), whereas judgements of importance should be left mostly untouched (or rather, updated symmetrically). Hence, if a single reviewer catches a mistake, the paper gets rejected; otherwise it depends on random judgements as above. This would not change the probability of acceptance for correct papers, but the probability for flawed ones would drop: 15% with two reviewers and a 1/2 chance of finding the flaw (33.8% if the chance of finding the flaw is 1/4), and respectively 7.2% and 24.3% with three reviewers. Even with the lazy reviewer, as long as they still update their reviews, the probabilities remain manageable with more reviewers: 53.3% for two, 22.6% for three, and they decrease quickly afterwards.
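The same kind of exact computation covers the qualitative rule. Here is a self-contained sketch under the same toy assumptions (the lazy-reviewer variant is left out, since its numbers depend on exactly how and when that reviewer updates):

```python
from itertools import product

GRADES = [-2, -1, 0, 1, 2]

def p_accept_with_discussion(n, flawed, p_catch=0.5):
    """Qualitative rule: a flaw caught by any reviewer means rejection,
    whatever the other grades say; only papers where no flaw was caught
    fall back to random grades and the non-negative-sum rule."""
    # Probability that uniformly random grades produce a non-negative sum.
    p_random_accept = (sum(1 for combo in product(GRADES, repeat=n)
                           if sum(combo) >= 0) / len(GRADES) ** n)
    p_no_catch = (1.0 - p_catch) ** n if flawed else 1.0
    return p_no_catch * p_random_accept

print(p_accept_with_discussion(2, flawed=True))                # 0.15
print(p_accept_with_discussion(2, flawed=True, p_catch=0.25))  # ~0.338
print(p_accept_with_discussion(3, flawed=True))                # 0.072
print(p_accept_with_discussion(3, flawed=True, p_catch=0.25))  # ~0.243
```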
This reasoning gives us a simple algorithmic process for limiting the publication of flawed papers. Look at the reviews qualitatively and, if any flaw is found, ask the other reviewers to confirm and update (or to show that the flaw was due to a misinterpretation, which can happen). Once this is done, consider that the average grade probably reflects the importance and, if truly needed, use an averaging method — although a better way would be to have a PC discussion, which also allows questions of which themes should be encouraged or how to give more opportunities to some types of work or to younger researchers. And when in doubt, if a reviewer finds fatal errors that the others cannot explain away, reject the paper.
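As pseudo-code, the process might look like the following sketch (the review structure and field names are hypothetical, and the confirm-or-explain-away step is assumed to have already happened during the reviewers’ discussion):

```python
def decide(reviews, threshold=0.0):
    """Correctness-first decision sketch: each review carries a score, a
    confidence, and the list of flaws that survived the discussion among
    reviewers (i.e. that were confirmed rather than explained away)."""
    if any(r["confirmed_flaws"] for r in reviews):
        return "reject"  # correctness trumps importance
    # Only for papers without confirmed flaws does the confidence-weighted
    # average act as a rough proxy for importance -- and even then a PC
    # discussion is preferable to a raw threshold.
    avg = (sum(r["score"] * r["confidence"] for r in reviews)
           / sum(r["confidence"] for r in reviews))
    return "accept" if avg >= threshold else "reject"

# One confirmed fatal flaw is enough, whatever the other scores say.
print(decide([
    {"score": -2, "confidence": 4, "confirmed_flaws": ["self-evaluated user study"]},
    {"score": +2, "confidence": 2, "confirmed_flaws": []},
]))  # reject
```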
The case at hand
The algorithm above is what I’ve seen performed and how I tend to work. Even in conflictual cases (especially in social science journals), where concerns over methodology can be harder to arbitrate, it works quite well in my experience. This brings us to the present, however, and to a conference for which I’ve been reviewing for a few years. All the problems I’ll point to in what follows were presumably already present, but I did not or could not notice them at the time, which also explains how I chose to act.

I was given three papers to review. All of them were flawed in my opinion, albeit to different extents. The worst offender featured — among multiple other issues — a user study meant to evaluate how usable and easy to understand a design was, except that the participants in the study were the system designers themselves (and they did not give it a great grade; proof of their honesty maybe, but also a worrying indicator). I wrote a detailed review pointing out the paper’s flaws — which could not have been fixed by the conference deadline — and noting that it contained no real science, and I was not the only one to do so. I did not advocate for a strong reject (except in the PC-only comments) as that option was reserved for plagiarism or out-of-scope submissions. However, two other reviewers decided to accept the paper, with minimal reviews. My comments went unread and unheeded, so I left the matter in the hands of the PC chairs, thinking that they would follow the reasonable algorithm. I also rejected the other two papers, although not as harshly. I believe this was mostly a question of (bad) luck rather than overly strict standards on my part, as I’ve previously written positive reviews for this conference. Thus, it came as a great surprise when I saw that all three papers had been accepted. I followed protocol and messaged the chairs to get an explanation, which brings us to the final part of this piece.
Objectivity and algorithmic justice
After sending a detailed complaint to the multiple PC chairs, I received a single email from the main chair, featuring the following (excerpted to protect anonymity):
We trust […] the power of collaborative decision making (we focused on the average review score for each submission, weighted by reviewer’s confidence). […]
The acceptance [for the conference] was taken by focusing on submissions with nonnegative average review scores (weighted by reviewer’s confidence).
I was once again quite surprised by this reply: I had assumed the paper had fallen through the cracks, not that its acceptance was due to strict adherence to policy. The decision method described above is badly flawed (especially in the presence of bad actors); even if I had put a “strong reject”, it would not have changed the outcome.
I decided to escalate and contacted all members of the steering committee with a summary of the argumentation above, expecting not that they would reject the paper (it was too late for that) but that they might update the visibly flawed procedures. I then received a new email, which allowed me to see the full extent of the methodological flaw.
I agree that many reviews are minimal and don’t give enough arguments/comments to justify the proposed decision and help improve the paper. However, relying on qualitative evaluation is unacceptable since this may lead to subjective decisions. Please note that the score is weighted with the expertise.
The discussion between reviewers of a paper is always encouraged but a meeting with all reviewers is not possible for practical reasons. We had always a delay with the reviewing process and we never had enough reviewers.
To summarise, the person responsible agreed that the reviews were of bad quality, but considered that averaging what amounts to random noise would be better than risking “subjective decisions”. This is compounded by the fact that the decision-making procedures were not collegial and the whole process was not transparent at all. Using algorithmic protocols to decide hard cases is an option (for example, when two reviewers disagree on whether a paper is correct and each is persuaded they are right), but that is not the present use case. Instead, the algorithm is used to outsource the decision-making and limit accountability.
This is a pattern that people have increasingly denounced over the last decade: using an algorithm not as a way to make better decisions, but to take the decision out of “subjective” human hands and put it into the machine, giving it a veneer of objectivity. It is still just as subjective as before, however; it is merely a step removed. Automatic decision-making from flawed data combines the worst of both worlds: it is garbage in, garbage out, but without any clarity on why a decision is taken, and with a much harder time critiquing it. Moreover, for “collective decision-making” to work, some very specific constraints are required (Condorcet’s jury theorem, for example, requires independence among jury members, which is nearly impossible to achieve).
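To make that last point concrete, here is a small Monte Carlo sketch (the jury model and its parameters are mine, chosen purely for illustration): with independent voters who are each right 60% of the time, majority accuracy climbs with the number of voters, as Condorcet’s theorem predicts; once the votes are correlated — here by having most voters copy an arbitrary “leader” — adding voters barely helps.

```python
import random

def majority_accuracy(n_voters, p_correct, p_follow, trials=100_000):
    """Estimate how often a simple majority is right when each voter is
    independently correct with probability p_correct, except that with
    probability p_follow they just copy an arbitrary 'leader' instead."""
    rng = random.Random(0)
    hits = 0
    for _ in range(trials):
        leader_is_right = rng.random() < p_correct
        votes = [leader_is_right if rng.random() < p_follow
                 else rng.random() < p_correct
                 for _ in range(n_voters)]
        hits += 2 * sum(votes) > n_voters  # strict majority of correct votes
    return hits / trials

for n in (3, 11, 51):
    print(n,
          round(majority_accuracy(n, 0.6, 0.0), 2),  # independent voters
          round(majority_accuracy(n, 0.6, 0.8), 2))  # highly correlated voters
```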
And yet it is a recurring pattern, seen especially in discussions of AI or of the algorithms used to deny people social benefits, without any clear decision-maker who can be held responsible for the massive error rates[9]. Making the world less accountable and more obfuscated is a problem in itself, and it contaminates more and more fields. In research, allowing such practices to continue reduces the incentives for reviewers to do their job correctly. It also means that authors can afford to simply resubmit their papers many times without making changes, in the hope of getting lucky once[10]. All of this makes the system increasingly random — and worse.
We should ask why we want to publish as much as we do and how those perverse incentives work. Why is rejecting an article a problem? Because it creates problems for the authors? Isn’t that putting the metric above the objective that the metric was supposed to represent (the creation of good science)? Besides, considering the actual state of things, rejecting an article will not even prevent it from being published, just delay it somewhat[11], and the authors might not even take the opportunity to make any modification. Many colleagues are starting to question our publication practices, which is good (see for example https://nofreeviewnoreview.org/, or the switch to diamond open access to bypass the profit motive in scientific publishing). However, we should also look into decreasing the workload of reviewers by rejecting the injunction to publish only in the best venues[12], which pushes us to spam those venues in the hope of getting lucky once or twice before settling for a lower-ranked venue (thus multiplying the reviewing work by a factor of 2 to 5 for no real gain as a community).
Comments are welcome below but will be moderated before posting; anything mentioning the people responsible by name (or the name of the conference) will be deleted. The goal is not to publicly attack anyone but to change opinions and hopefully protocols (for this venue and others).
Footnotes
1. People outside of computer science might be interested to learn that computer science conferences are often as prestigious as journals (if not more so) and count as full publications for the usual metrics. Unlike in fields where one sends an abstract and might get a quick review, the process requires authors to send a full paper, as for a journal, already formatted to the conference’s expectations. The reviewing is then generally double-blind, with exchanges online among reviewers (through platforms such as EasyChair) and sometimes a meeting of all reviewers to decide which papers get included in the final conference programme. Often, this includes efforts to get disagreeing reviewers to discuss and sometimes modify their reviews, with the goal of converging towards a consensual decision, although this is not systematic.
2. I chose to warn the PC members by email and sent them the link to this note (after giving notice to the chairs that I would quit the PC). This choice was itself controversial among some colleagues, who were against the idea of stirring up trouble, but it corresponds to a deontological obligation in my opinion. Conferences and journals depend on our volunteer work and use our names and affiliations to appear serious and prestigious. As such, we should be held responsible for the use of our support, which requires transparency — such as making matters like these public (at least for the ones concerned).
Moreover, this is not a flaw that is directly visible: it took some bad luck with my assigned reviews for me to look into the decision-making process (whose opacity is in itself problematic, but I initially attributed it to the fact that the conference was quite young and had not yet refined its processes, which could have involved some manual fiddling).
Edit on 26-06-2024: more than a week after contacting the PC (which happened before the conference, now over), I have yet to receive a reply or any feedback from the 20 people I emailed.
3. It saddens me that it is even necessary today to insist on why peer-review is relevant. For another anecdote of bad practices: a while ago, I received a scam invitation to a conference that seemed predatory. I saw that a colleague (end of career, research director and head of a lab) was a committee member, so I decided to contact her and ask whether the venue was any good. She said she was not sure but had accepted their invitation a while back, so I proposed looking into it. After a bit of sleuthing, I found that it was a complete scam: authors could become “invited keynote speakers” by paying enough, and the only goal was to pad CVs, with no real science behind it. I mentioned as much to the colleague, expecting her to quit the PC. Instead, she replied that she saw no real issue with that, and asked whether “peer-review is really that important for science”.
4. A common criticism of peer-review is that it lets through a lot of papers with fraudulent or wrong data, which eventually get exposed on sites like https://retractionwatch.com/. This is a misconception: reviewers generally cannot detect fraud, only methodological issues. Fraud detection depends much more heavily on replication, which is a different issue altogether — although sometimes the data or analysis feature evident problems (such as when scientists proudly present multiple results with p<0.01 from a sample of 6 individuals observed once each).
5. Many of the issues in bibliometrics and peer-reviewing are covered in my article with Zacharie Boubli (in French): Blanchard, Enka, and Zacharie Boubli. “Recherche et dogmatisme: de l’improductivité du productivisme.” Questions de communication 42 (2022): 255-277.
For the central references (in English) pertaining to this very issue, a short list can be found at the bottom of this article.
6. I am currently trying to organise the second edition of a conference with a colleague, and this is one of a few central issues: what happens if the number of papers submitted is too small or too big by a wide margin? Both cases are problematic in very different ways, especially if one wants to keep registration costs as low as possible (where each additional attendee may represent a net cost).
7. I have seen some with standard registration costs between 1k€ and 2k€ (such as CHI or Usenix Security). I have also seen others of similar quality with costs below 100€, which sounds reasonable to me, or around 500€ including lodging and food for 5 days. This discrepancy is a source of concern in my opinion.
8. A notable conference guilty of this (according to my sources, who participated) is ICALP, already considered one of the top — if not the top — international conferences in its field.
9. This article is edifying, but it is only one among too many to count: https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/
10. To be fair, I am guilty of this for one of my papers. It came about when, while trying to find references, we realised there was a gaping hole in the literature and decided to run an empirical study to prove what everyone thought had already been shown (and we had no real surprise there). It took years to publish, with at least 7 rejections, and in all but one case the paper was rejected because at least one reviewer said “We already know these results, I am sure there are papers on this”, without ever giving a reference. Demands that the complaining reviewer provide this reference were always left unanswered (probably because the assumed references never existed in the first place).
11. A similar argument was made by Jacob and Lefgren in 2011 about the selectivity of research funding (see the reference below).
12. As a matter of personal policy, I do not submit to venues with an acceptance rate under 25%, and I try to avoid those that are proud of having a low acceptance rate, instead favoring venues with 40-60% rates.
Main references
Pier, E. L., Brauer, M., Filut, A., Kaatz, A., Raclaw, J., Nathan, M. J., and Carnes, M. (2018). “Low agreement among reviewers evaluating the same NIH grant applications.” Proceedings of the National Academy of Sciences, 115(12), 2952-2957. https://doi.org/10.1073/pnas.1714379115
Jacob, B. A., and Lefgren, L. (2011). “The impact of research grant funding on scientific productivity.” Journal of Public Economics, 95(9-10), 1168-1177.
Dougherty, M. R., and Horne, Z. (2022). “Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences.” Royal Society Open Science, 9(8), 220334. https://doi.org/10.1098/rsos.220334
Cortes, C., and Lawrence, N. D. (2021). “Inconsistency in conference peer review: revisiting the 2014 NeurIPS experiment.” arXiv. https://doi.org/10.48550/arXiv.2109.09774
Bornmann, L., Mutz, R., and Daniel, H.-D. (2010). “A reliability-generalization study of journal peer reviews: a multilevel meta-analysis of inter-rater reliability and its determinants.” PLoS ONE, 5(12). https://doi.org/10.1371/journal.pone.0014331