
Peer reviewing issues : novelty, correctness and objectivity amidst obfuscated practices

Enka Blanchard

This short note was written after I noticed reviewing issues in a conference1 — which will not be explicitly named here2. After a first fruitless exchange with the PC chairs and a second exchange with the steering committee members, I realised that more fundamental issues were at hand and deserved to be made explicit. The goal is not to publicly attack anyone but to clarify these issues and hopefully change opinions and protocols (for this venue and maybe others).

But first, we must go back to basics, to the source of the system — before looking at how it can get perverted, and how the argument behind this perversion is akin to many that we observe in discussions of AI and algorithmic justice.

Why peer-review?3

What is the goal of peer review, the justification behind a process that is both imperfect and expensive (in time and effort if nothing else)? The most commonly stated objective is to ensure that the articles that end up published are “correct”, or rather, that they follow a reasonable methodology and give solid evidence for their assertions — with the criteria varying widely between mathematics, the exact and social sciences, and the humanities. This reviewing step is often what separates science from non-science — both in our opinions as scientists and as a social practice.

Beyond deciding whether the research is correct (or at least methodologically sound4), a second objective exists, more rarely made explicit and related to the first: gauging an article’s importance and novelty, that is, guessing from the reviews whether readers will read (and cite) the paper, and whether they will consider it important enough to keep increasing the journal’s prestige (as well as its lobbying power to get subscriptions from academic institutions).

In many prestigious instances, the reasoning goes as follows: once the reviews come in, the editors or chairs start by removing the papers with obvious flaws. Then, they choose the most popular ones among the rest — which are all presumably correct. This is far from perfect, for two different reasons. First, the most popular papers tend to be the ones claiming the most surprising results (the novelty factor), which have a relatively high chance of not being replicable, stemming from statistical “luck”, mistakes or fraud, with prestigious journals in particular suffering from this problem. Second, whether an article is accepted is in many ways a question of luck, as shown in the famous NeurIPS study5.

Although this is already far from ideal, more serious concerns appear when we start looking at less prestigious venues, especially the ones on the lower end of the scale. This is no attack against them: any new venue tends to start there, and I strongly believe in the value of small conferences and non-selectivity (when it comes to importance rather than correctness, as we’ll see shortly). Smaller venues face higher risk: if the conference has too few attendees, it might not be eligible for funding and could be canceled or leave organisers in the red6. Having more presenters (even if the research is of lower quality) guarantees more attendance and a longer programme, which can seem like a good idea. However, it creates an incentive to disregard serious issues in some papers in order to pad the list.

We should remember that this is already a best-case scenario which leaves aside a lot of serious issues. The profit motive, for one, especially when the conference has high registration costs7 (without even going into the question of predatory venues). One can also mention biased reviewers (who can guess whose work it is and either accept it as a favour or reject it to promote their own school of thought), and even conferences choosing to artificially reject many excellent papers8 because a high rejection rate is a necessary condition to obtain a good rating/ranking from external observers (such as CORE), the rejection rate being often used as a proxy for the venue’s quality. But let’s get back to peer-reviewing.

Separating correctness and novelty

My training comes from a field quite specific in its practices (mathematics), in which correctness can be hard to evaluate. Or rather, one in which being sure of it takes hard work and a lot of time. Evaluating the methodology of an empirical paper and writing a thorough review often takes me a few hours at most (and sometimes can be done in 20 minutes). Mathematical papers with proofs can take upwards of a week of full-time work to check (and that’s for ones that are much, much simpler than works like Perelman’s or Mochizuki’s — whose papers’ “evaluation” is still underway and extremely contentious after 12 years). This is, thankfully, not the general case (or peer-review simply would not work at all, rather than partially).

This very high cost of evaluation is one of the reasons why I now support open reviewing, as well as separating two different aspects of our reviews. The first is correctness, as above, which is the most time-consuming part. The other is novelty and importance, which can often be gauged by skimming the paper: if it draws the reader in and makes them want to reuse the ideas or build on the paper, that’s a good sign. Some venues already select only for correctness (such as PLoS One, which however suffers from a nasty profit motive, charging between $1k and $2.3k for online publication, although this is still quite below PLoS Medicine).

There is a fundamental difference between correctness and novelty/importance. The first is, presumably, close to objective (as long as Feyerabend’s disciples are not around). Either flaws exist or they don’t, which depends on what is accepted as common methodological practice, and that evolves slowly. The second is much more fluid and subjective, depending on what interests the community. Any given venue should hopefully agree in its assessment of correctness, while assessments of novelty and importance can legitimately vary to a large extent.

Aggregating scores

I was once on a PC where the set of accepted papers corresponded exactly to the set of papers whose aggregate rating was above a given threshold. This surprised quite a few PC members and was mostly considered a random outcome, as multiple papers were debated and scores played at most a small role. However, it also reflects that the reviewers had already had time to change their grades and converge on most papers, at least the contentious ones.

Aggregating the ratings (with accept typically being equivalent to “+2” and reject “-2”) can give a first-order approximation, which can potentially be made a bit more “accurate” by weighting each grade by the reviewer’s confidence. However, it is just that: an approximation. The strength of peer review lies in the review, not the grade given. If we understand correctness and novelty as two different dimensions, the flattening to a single metric becomes a problem, which we’ll see below by comparing a naive “algorithm” that uses the aggregate rating and a more refined one that uses the qualitative aspects.
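
To make the flattening concrete, here is a minimal sketch of such confidence-weighted averaging (the numbers are purely illustrative, and `weighted_score` is a hypothetical helper, not any platform’s actual formula):

```python
def weighted_score(grades, confidences):
    """Confidence-weighted average of review grades.
    Grades range over {-2, ..., +2}; confidences are positive weights."""
    return sum(g * c for g, c in zip(grades, confidences)) / sum(confidences)

# The borderline case discussed below: a "reject" (-2) from a reviewer who
# found a fatal flaw is averaged away by an enthusiastic "accept" (+2).
print(weighted_score([-2, +2], [1.0, 1.0]))  # 0.0 -> non-negative, accepted
```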

Let’s consider a paper with two reviews: one of them judges the paper’s methods to be flawed and gives a reject (the importance being irrelevant). The second reviewer misses the flaw and considers that the claim that the paper gives a cure for cancer is of utmost importance, giving an “accept”. If we average out the scores, it becomes a borderline paper, with a chance of being published. In a prestigious venue, even a single negative review can doom a paper, so the problem rarely comes up. But in venues that have to accept less consensually excellent papers, the novelty factor can trump correctness — whereas the latter should remain the primary consideration in peer review.

Adding more reviewers can actually make matters worse. More reviewers per paper means more reviews, more time spent overall, and less time spent per paper. It also means a higher tendency to ask reviewers to judge science that lies beyond their field of expertise. And any single reviewer who does their job with limited rigour increases the risk of accepting papers with fatal mistakes. This is especially true as we generally consider that a “reject” needs to be motivated with a detailed review, whereas “this paper is excellent, we should accept it” can sometimes pass as a full review.

Running the numbers

To give a more practical example, let’s consider a world where papers are judged only by the aggregate metric (with grades from -2 to +2), the paper being accepted if it has a non-negative score. Let’s also assume that each flaw has a 1/2 chance of being noticed by each reviewer on average (which is more generous than my recent observations in multiple PCs), in which case that reviewer gives a reject (-2), and that otherwise the rating is uniformly random (randomness being a strong component of real ratings, at least for importance, see footnote 5).

With two reviewers, a correct paper has a probability 15/25 = 60% of being accepted. A flawed paper, on the other hand, gets in with probability 25%. With three reviewers, these become 57.6% and 16.2%. This is not great but not catastrophic. If the probability of noticing the flaw drops to 1/4, however, a flawed paper’s probability of getting accepted jumps to 41.3% (with two reviewers) and 34.4% (with three). We can also consider that each reviewer has some probability (say 1/4) of being a lazy reviewer who always says accept. With two reviewers, a correct paper then gets accepted with probability 77.5% and a flawed one with probability 57.8%.

Now let’s consider not simply aggregating the grades but looking at the reviews in detail. Any reviewer who sees the real flaws pointed out by others would presumably align their views (or at least a vast majority should), whereas judgements of importance should be left mostly untouched (or at least updated symmetrically). Hence, if a single reviewer catches a mistake, the paper gets rejected; otherwise it depends on random judgements as above. This would not change the probability of acceptance for correct papers, but the probability for flawed ones would drop: 15% with two reviewers and a 1/2 chance of finding the flaw (33.8% if the chance of finding the flaw is 1/4), and respectively 7.2% and 24.3% with three reviewers. Even with lazy reviewers, as long as they still update their reviews, the probabilities remain manageable with more reviewers: 53.3% for two, 22.6% for three, decreasing quickly afterwards.
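
These percentages are easy to check numerically. Below is a short Monte Carlo sketch under exactly the assumptions stated above: uniform random grades, a -2 from any reviewer who notices the flaw, acceptance when the sum of grades is non-negative, and, for the refined rule, rejection as soon as any reviewer catches the flaw.

```python
import random

def acceptance_rate(n_reviewers, p_notice, flawed, refined, trials=200_000):
    """Estimate the probability that a paper is accepted under the two
    decision rules compared above (naive aggregation vs. refined rule)."""
    accepted = 0
    for _ in range(trials):
        grades, caught = [], False
        for _ in range(n_reviewers):
            if flawed and random.random() < p_notice:
                caught = True
                grades.append(-2)  # this reviewer noticed the flaw: reject
            else:
                grades.append(random.randint(-2, 2))  # random grade otherwise
        if refined and caught:
            continue  # refined rule: one caught flaw suffices to reject
        if sum(grades) >= 0:  # naive rule: non-negative aggregate score
            accepted += 1
    return accepted / trials

print(acceptance_rate(2, 0.5, flawed=True, refined=False))  # ≈ 0.25
print(acceptance_rate(2, 0.5, flawed=True, refined=True))   # ≈ 0.15
```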

Reasonably, this gives us a simple algorithmic process for limiting the publication of flawed papers. Look at the reviews qualitatively and, if any flaw is found, ask the other reviewers to confirm and update (or to show that the flaw was due to a misinterpretation, which can happen). Once this is done, consider that the average grade probably reflects the importance and, if truly needed, use an averaging method. A better way, though, would be to have a PC discussion, which also allows questions of which themes should be encouraged or how to give more opportunities to certain types of work or to younger researchers. And when in doubt, if a reviewer finds fatal errors that the others cannot explain away, reject the paper.
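
For clarity, the same process written out as a small Python sketch (all names and data structures are illustrative, not an actual PC tool):

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    grade: int                                 # -2 (reject) .. +2 (accept)
    flaws: list = field(default_factory=list)  # flaws this reviewer claims

def decide(reviews, confirm):
    """Sketch of the process described above. `confirm(flaw)` stands for
    the round in which the other reviewers either confirm a claimed flaw
    or explain it away as a misinterpretation."""
    for review in reviews:
        for flaw in review.flaws:
            if confirm(flaw):    # a confirmed fatal flaw trumps importance
                return "reject"
    # No confirmed flaw: the average grade now mostly reflects importance,
    # best treated as input to a PC discussion rather than as a verdict.
    average = sum(r.grade for r in reviews) / len(reviews)
    return "accept" if average >= 0 else "reject"

# Example: one reviewer finds a real flaw, another is enthusiastic.
reviews = [Review(-2, flaws=["participants were the designers"]), Review(+2)]
print(decide(reviews, confirm=lambda flaw: True))  # "reject"
```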

The case at hand

The algorithm above is what I’ve seen performed and how I tend to work. Even in conflictual cases (especially in social science journals), where concerns over methodology can be harder to arbitrate, it works quite well in my experience. This brings us to the present, however, and to a conference for which I’ve been reviewing for a few years. All the problems I’ll point to in what follows were presumably already present, but I did not or could not notice them at the time, which also explains how I chose to act.

I was given three papers to review. All of them were flawed in my opinion, albeit to different extents. The worst offender featured — among multiple other issues — a user study meant to evaluate how usable and easy to understand a design was. Except that the participants in the study were the system designers themselves (and they did not give it a great grade; proof of their honesty maybe, but also a worrying indicator). I wrote a detailed review pointing out the paper’s flaws and the fact that it contained no real science (which would not have been fixable by the conference deadline), and I was not the only one to do so. I did not advocate for a strong reject (except in the PC-only comments) as that was restricted to plagiarism or out-of-scope submissions. However, two other reviewers decided to accept the paper, with minimal reviews. My comments were left unread and unheeded, so I left the matter in the hands of the PC chairs, thinking that they would follow the reasonable algorithm.

The other two papers I also rejected, although not as harshly. I believe this was mostly a question of (bad) luck and not of overly strict standards on my part, as I have previously written positive reviews for this conference. It thus came as a great surprise when I saw that all three papers had been accepted. I followed protocol and messaged the chairs to get an explanation, which brings us to the final part of this piece.

Objectivity and algorithmic justice

After sending a detailed complaint to the multiple PC chairs, I received a single email from the main chair, featuring the following (excerpted to protect anonymity):

We trust […] the power of collaborative decision making (we focused on the average review score for each submission, weighted by reviewer’s confidence). […]
The acceptance [for the conference] was taken by focusing on submissions with nonnegative average review scores (weighted by reviewer’s confidence).

I was once again quite surprised: I had assumed the paper might have fallen through the cracks, not that its acceptance stemmed from strict adherence to policy. The decision method detailed above is badly flawed (especially in the presence of bad actors): even had I put a “strong reject”, it would not have changed the outcome.

I decided to escalate and contacted all members of the steering committee with a summary of the argumentation above, expecting not that they would reject the paper (it was too late) but that they could update the visibly flawed procedures. I then received a new email which allowed me to see the extent of the methodological flaw.

I agree that many reviews are minimal and don’t give enough arguments/comments to justify the proposed decision and help improve the paper. However, relying on qualitative evaluation is unacceptable since this may lead to subjective decisions. Please note that the score is weighted with the expertise.
The discussion between reviewers of a paper is always encouraged but a meeting with all reviewers is not possible for practical reasons. We had always a delay with the reviewing process and we never had enough reviewers.

To summarise, the person responsible agreed that the reviews were of bad quality, but considered that averaging what amounts to random noise would be better than risking “subjective decisions”. This is compounded by the fact that the decision-making procedures were not collegial and the whole process was not transparent at all. Using algorithmic protocols to decide hard cases is an option (what happens if two reviewers disagree on whether a paper is correct and both are persuaded they are right?), but this is not the present use case. Instead, the algorithm is used to outsource decision-making and limit accountability.

This is a pattern that people have increasingly denounced over the last decade: using an algorithm not as a way to make better decisions, but to take the decision out of “subjective” human hands and put it into the machine, giving it a veneer of objectivity. It is still just as subjective as before, however; it is just a step removed. Automatic decision-making from flawed data combines the worst of both worlds: it is garbage in, garbage out, but without any clarity on why the decision is taken and with a much harder time critiquing it. Moreover, for “collective decision-making” to work, some very specific constraints are required (Condorcet’s theorem, for example, requires independence among jury members, which is nearly impossible to achieve).
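
As a quick illustration of why that independence constraint matters, here is a small simulation (a sketch with made-up parameters): a majority of independent reviewers who are each right 60% of the time beats any single reviewer, but as soon as they herd (here, by copying the first reviewer), the advantage evaporates.

```python
import random

def majority_accuracy(n_voters, p_correct, p_copy, trials=100_000):
    """Monte Carlo sketch of Condorcet's jury theorem. Each voter is right
    with probability p_correct, but with probability p_copy simply copies
    the first voter, which breaks the independence assumption."""
    wins = 0
    for _ in range(trials):
        first = random.random() < p_correct  # first voter's vote
        votes = [first] + [
            first if random.random() < p_copy else (random.random() < p_correct)
            for _ in range(n_voters - 1)
        ]
        wins += sum(votes) > n_voters / 2    # majority got it right
    return wins / trials

print(majority_accuracy(11, 0.6, p_copy=0.0))  # independent: ≈ 0.75
print(majority_accuracy(11, 0.6, p_copy=0.9))  # herding: back to ≈ 0.6
```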

And yet it is a recurring pattern, especially in discussions of AI or of the algorithms used to deny people social benefits, without a clear decision-maker who can be held responsible for the massive error rates9. Making the world less accountable and more obfuscated is an issue, and it contaminates more and more fields. In research, allowing such practices to continue reduces the incentive for reviewers to do their job correctly. It also means that authors can simply resubmit their papers many times without making changes, in the hope of getting lucky once10. All of which makes the system increasingly random and worse.

We should ask why we want to publish as much as we do and how these perverse incentives work. Why is rejecting an article a problem? Because it creates problems for the authors? Isn’t that putting the metric above the objective the metric was supposed to represent (the creation of good science)? Moreover, given the actual state of things, rejecting an article will not even prevent it from being published, just delay it somewhat11, and the authors might not even take the opportunity to make any modification. Many colleagues are starting to question our publication practices, which is good (see for example https://nofreeviewnoreview.org/, or the switch to diamond open access to bypass the profit motive in scientific publishing). However, we should also look into decreasing the workload of reviewers by rejecting the injunction to publish only in the best venues12, which pushes us to spam them in the hope of getting lucky once or twice before settling for a lower-ranked venue (thus multiplying the reviewing work done by a factor of 2 to 5, for no real gain as a community).

Comments are welcome below but will be moderated before posting, and anything mentioning the people responsible by name (or the name of the conference) will be deleted. The goal is not to publicly attack anyone but to change opinions and hopefully protocols (for this venue and others).

Footnotes

  1. People outside of computer science might be interested to learn that computer science conferences are often as prestigious as journals (if not more so) and count as full publications for the usual metrics. Unlike fields where one sends an abstract and might get a quick review, the process requires the authors to send a full paper, as for a journal, already formatted to the conference’s expectations. The reviewing is then generally double-blind, with exchanges online among reviewers (through platforms such as easychair) and sometimes a meeting of all reviewers to decide which papers get included in the final conference programme. Often, this includes efforts to have disagreeing reviewers discuss and sometimes modify their reviews somewhat, with the goal of converging towards a consensual decision, although this is not systematic. ↩︎
  2. I chose to warn the PC members by email and sent them the link to this note (after giving the chairs notice that I would quit the PC). This choice was itself controversial among some colleagues, who were against the idea of stirring trouble, but it corresponds in my opinion to a deontological obligation. Conferences and journals depend on our volunteer work and use our names and affiliations to appear serious and prestigious. As such, we should be held responsible for the use of our support, which requires transparency, such as making matters like these public (at least to those concerned).
    Moreover, this flaw is not directly visible: it took some bad luck with the reviews for me to look into the decision-making process (whose opacity is in itself problematic, though I initially attributed it to the fact that the conference was quite young and had not yet refined its processes, which could involve some manual fiddling).
    Edit on 26-06-2024: more than a week after contacting the PC (which happened before the conference, now over), I have yet to receive a reply or any feedback from the 20 people I emailed. ↩︎
  3. It saddens me that insisting on why peer review is relevant still matters today. For another anecdote of bad practices: a while ago, I received a scam invitation to a conference that seemed predatory. I saw that a colleague (late-career, research director and head of a lab) was a committee member, so I decided to contact her and ask if the venue was any good. She said she was not sure but had accepted their invitation a while back, so I proposed looking into it. After a bit of sleuthing, I found that it was a complete scam: authors could become “invited keynote speakers” by paying enough, and the only goal was to pad CVs, with no real science behind it. I mentioned as much to my colleague, expecting her to quit the PC. Instead she replied that she saw no real issue with that, and asked whether “peer-review is really that important for science”. ↩︎
  4. A common attack on peer review is that it lets through a lot of papers with fraud or wrong data, which eventually get flagged on sites like https://retractionwatch.com/. This is a misconception: reviewers generally cannot detect fraud, only methodological issues. Fraud detection depends much more heavily on replication, which is a different issue altogether. Sometimes, though, the data or analysis features evident problems (such as when scientists proudly present multiple results with p<0.01 from a sample of 6 individuals observed once each). ↩︎
  5. Many of the issues in bibliometrics and peer-reviewing can be found in my article with Zacharie Boubli (in French): Blanchard, Enka, and Zacharie Boubli. “Recherche et dogmatisme: de l’improductivité du productivisme.” Questions de communication 42 (2022): 255-277.
    For the central references (in English) pertaining to this very issue, a short list can be found at the bottom of the article. ↩︎
  6. I am currently trying to organise the second edition of a conference with a colleague, and this is one of a few central issues: what happens if the number of papers submitted is too small or too big by a wide margin? Both cases are problematic in very different ways, especially if one wants to keep registration costs as low as possible (in which case each additional attendee comes at a net loss to the organisers). ↩︎
  7. I have seen some with standard registration costs between 1k€ and 2k€ (such as CHI or Usenix Security). I have also seen others of similar quality with costs below 100€, which sounds reasonable to me, or around 500€ including lodging and food for 5 days. This discrepancy is a source of concern in my opinion. ↩︎
  8. A notable conference guilty of this (according to my sources who participated) was ICALP, already considered one of the top if not the top international conference in its field. ↩︎
  9. This article is edifying, but it is only one among too many to count:
    https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/ ↩︎
  10. To be fair, I am guilty of this for one of my papers. It came about when we noticed a gaping hole in the literature while trying to find references, and decided to run an empirical study to prove what everyone thought had already been shown (we got no real surprise there). It took years to publish, with at least 7 rejections, and in all but one case the paper was rejected because at least one reviewer said “We already know these results, I am sure there are papers on this”, without ever giving a reference. Demands that the complaining reviewer produce such a reference were always left unanswered (probably because the assumed references never existed in the first place). ↩︎
  11. A similar argument was made by Jacob and Lefgren in 2011 regarding the selectivity of research funding (see the reference below). ↩︎
  12. As a matter of personal policy, I do not submit to venues with an acceptance rate under 25%, and I try to avoid those that are proud of having a low acceptance rate, favouring venues with 40-60% rates. ↩︎
Main references

Pier E. L., Brauer M., Filut A., Kaatz A., Raclaw J., Nathan M. J. and Carnes M., 2018, “Low agreement among reviewers evaluating the same NIH grant applications”, Proceedings of the National Academy of Sciences, 115(12), p. 2952-2957. https://doi.org/10.1073/pnas.1714379115

Jacob B. A. and Lefgren L., 2011, “The impact of research grant funding on scientific productivity”, Journal of Public Economics, 95(9-10), p. 1168-1177.

Dougherty M. R. and Horne Z., 2022, “Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences”, Royal Society Open Science, 9(8), 220334. https://royalsocietypublishing.org/doi/10.1098/rsos.220334

Cortes C. and Lawrence N. D., 2021, “Inconsistency in conference peer review: revisiting the 2014 NeurIPS experiment”, arXiv. https://doi.org/10.48550/arXiv.2109.09774

Bornmann L., Mutz R. and Daniel H.-D., 2010, “A reliability-generalization study of journal peer reviews: a multilevel meta-analysis of inter-rater reliability and its determinants”, PLoS ONE, 5(12). https://doi.org/10.1371/journal.pone.0014331

Soin de l’archive, archive du soin (Care of the archive, archive of care)

Care, taken as such, is the unequivocal expression of a vital force. The Centre National de Ressources Textuelles et Lexicales defines the French soin as the fact of “seeing to the good condition of something, maintaining something” and, in the plural, as designating the “acts of solicitude and thoughtfulness towards someone, [through which] one attends to a person’s health and physical, material and moral well-being”, or again as an “orderly, meticulous application or manner of carrying out a task”[1].

For some twenty years, the concept of “care”, of North American origin and only imperfectly translated by the French soin, has enjoyed a certain fortune in France. This followed notably the publication in 2005 of the collective volume Le souci des autres[2], the translation in 2009 of Carol Gilligan’s Une voix différente[3] and of Tronto et al.’s Un monde vulnérable : pour une politique du care[4], texts which have since prompted numerous contributions.

The idea of soin or care covers at once (1) a specific subjective disposition (listening, attention, meticulousness, giving, repetition, humility, welcoming, presence), (2) an axiology that values dependence, vulnerability and interrelation, and (3) a regulative ideal that reduces well-being neither to cure nor to the “restoration of a state clinically held to be normal”[5].

Finally, far from being a mere matter of interpersonal ethics, care is caught up in productive relations of labour and exploitation, an exploitation that rests partly on denying care work as such, that is, as work.

We decided to revisit the topographic workshop through the prism of care, in four stages. What is the relation between care and the archive? Between care and artistic creation? What does taking care of bodies and of the dying mean? What is the place of care in encounters, love and festivity?

1. The archive

1.1. The archive between spontaneity and reflexivity

The domain of the archive lies between two modes by which the past persists in the present: a spontaneous, immanent, pre-reflexive mode, of the order of the modus operandi, the symptom or the resurgence, and an instituted mode in which the traces of the past are consciously indexed and recorded in a reflexive mode. History made body, history made thing.

To these two modes correspond two regimes of inscription: in passing from pure lived experience to its archival form, one passes from “the epidermis of one’s own body”[6] (secrets, traumas or ghosts, but also accents, laughter, clothing, silences, gazes or postures) to representational knowledge, which rests on graphic reason (images and videos, but above all maps, lists, tables, indexes and plans) and hence on a synoptic type of cognition, with the scholastic relation to the world that accompanies it and that consists in “raising problems for the pleasure of solving them, and not because they arise”[7].

This knowledge has its own materiality: that of the paper on which the hand thinks as it rolls the ball of an ink pen, or that of the computer screen on which, as Derrida notes, “the letters remain as if suspended, still floating on the surface of a liquid element”[8]. This materiality of the recording medium partly determines remembrance, and thus the mode in which the past persists in the present: one does not do the same thing with a video, an analogue photograph or century-old letters.

1.2. Three risks: losing the witnesses, the trace, the vitality

The passage from lived experience to its archived form can give rise to a multiplicity of uses (aesthetic, political, memorial, speculative, patrimonial, etc.), sometimes quite different from the immediate experiential context whose traces the archive collects. This gap, as Enzo Traverso stressed with regard to the memory of the Shoah[9], carries a risk “which is not [forgetting] but making bad use of its memory, embalming it, locking it up in museums and neutralising its critical potential, or worse, putting it to apologetic use for the current order of the world”[10].

The risk is threefold. First, the inevitable risk of losing the “carnal archive” constituted by the witnesses (as we are currently experiencing with the disappearance of the survivors of the Khurbn, and as we will experience in a few years with that of the queer activists of the 1970s). Second, the risk of the disappearance or inaccessibility of the traces (administrative archives destroyed by the Nazis, archives of the Algerian War held back in prefectures[11], access to the Bad Arolsen holdings heavily restricted before the successive digitisation campaigns of the 2000s[12], etc.).

Finally, there is the risk that the archive, even when preserved, accessible and indexed, remains a dead letter: that it cannot be reactivated because nothing remains of the lived world from which it arose. Like the last speaker of a vanished language, like the Rosetta Stone before Champollion’s work, it then becomes pure noise, pure matter without any effect of meaning. More broadly, no longer “speaking the language” of an archive means having lost sight of the necessity, the vital, epidermic intensity, the world of desire within which the life deposited in it once emerged.

Beyond the apologetic instrumentalisation Traverso speaks of, the third and subtler risk is thus that of patrimonialisation, of “embalming”. It is the product of successive slippages within a scholastic universe of values that institutes the archive as a thing and a reflexive object, an end in itself: the primacy of the map over the territory, of the synoptic over the intuitive, of the rational over the reasonable, of the written over the oral, of the product over the process, which exposes the archive to the dangers of capture by the State and of the routine institutionalisation of a counterproductive duty of memory[13].

Certain social uses of the archive make it possible to save the past from oblivion by reactivating it within homologous universes of desire and meaning. Others freeze and deactivate it. Some uses heal and take care of the archive, turning noise into voice. Others push it away and reduce the voice to a mechanical echo.

1.3. Taking care of the archive

Taking care of archives therefore means keeping them alive. First, by taking care of the witnesses, by “repairing” what can be repaired: through compensation and official recognition (of the victims of the Shoah[14] or of the people convicted for homosexuality between 1942 and 1982[15]), the restitution of looted property[16], the restoration of destroyed cultural goods (through public policies in favour of Yiddish and regional languages[17] or the funding of work on queer cryptolects such as the Greek Kaliarda[18]), adequate care for ageing witnesses and survivors (for example LGBT people[19]), possibly through self-managed solutions[20], guaranteeing the possibility of transmitting and testifying in schools, and so on. There are also the everyday acts of care: a grandchild going to feed their octogenarian lesbian grandmother and collecting her life story, for example.

Caring for archives also means indexing the traces and making them publicly accessible, in museums[21], archival collections or libraries, with the help of public policies[22] and funding instruments. A workshop such as ours, or the small collaborative mapping tool we developed for the occasion[23], is part of this. Other projects of far greater scope, such as the European Holocaust Research Infrastructure[24], funded by the EU to the tune of 26 million euros over a fifteen-year period, play the same role. Maintaining a FAIR archive (findable, accessible, interoperable, reusable) is intrinsically tied to the digital, since any archive, or at least its image floating on a grid of pixels, there becomes infinitely reproducible, and thereby a non-rival and non-exclusive good. This does not settle the question of its “aura”, to borrow Benjamin’s word, which remains irremediably private, and is sometimes endowed with the cryptophoric effect of certain cursed or sacred objects[25].

Finally, taking care of the archive means fighting against its patrimonialisation. This act is more directly political and more controversial. It amounts to nothing less than guaranteeing that ownership of the means of production, circulation and use of archives is privatised neither by the State nor by the interests of large corporations, and that their use does not contradict their ethical meaning, this last point being highly complex. Who can claim to be the legitimate heir and moral guarantor of the archive and of the witnesses at the hour of their disappearance[26]? Can the Yad Vashem memorial be headed by the Israeli far right[27]? Can a bank display the trans flag on its storefronts?

Moreover, how is a commons of LGBT or Jewish archives to be governed? Who may, in the sense of being authorised to and having the skills to, develop the databases, the algorithms, the interfaces? Who deposits, sorts, indexes, digitises? Who grants access, who defines the possible uses and modes of circulation? This burning question has already been taken up by LGBT actors (see the archive https://bigtata.org/ or the Knowledge of AIDS project[28]) and by Jewish ones (see for example Todd Presner’s book, due out in the autumn: Ethics of the Algorithm: Digital Humanities and Holocaust Memory[29]).


[1]  https://www.cnrtl.fr/definition/soin, accessed 12 June 2024

[2]  Paperman, P., Laugier, S., & Collectif. (2011). Le souci des autres : Éthique et politique du care (enlarged edition). Éditions de l’École des Hautes Études en Sciences Sociales.

[3]  Gilligan, C., & Kwiatek, A. (2019). Une voix différente : La morale a-t-elle un sexe ? Flammarion.

[4]  Tronto, J. C., Mozère, L., & Maury, H. (2009). Un monde vulnérable: Pour une politique du care. La Découverte.

[5]  Canguilhem, G. (2013). Le normal et le pathologique (12e édition). PUF.

[6]  Derrida, Mal d’archive, p. 39

[7] Bourdieu, Méditations pascaliennes.

[8]  Derrida, Mal d’archive, p. 46

[9]  Traverso, E. (2005). Le passé: Modes d’emploi. Histoire, mémoire, politique. La Fabrique Éditions; Cairn.info. https://www.cairn.info/le-passe-modes-d-emploi–9782913372474.htm

[10]  This paragraph is an adapted quotation from the following blog post: Simon Apartis (2024, 15 January). Savoirs Partagés 4 / 8 | Mémoire(s), archives, patrimoine. Open Knowledge. Accessed 12 June 2024, at https://ok.hypotheses.org/3061

[11] https://www.radiofrance.fr/franceculture/acces-aux-archives-secret-defense-de-plus-de-cinquante-ans-toute-la-machine-est-enrayee-1713287

[12] https://arolsen-archives.org/fr/qui-sommes-nous/nos-taches/documenter-et-archiver/

[13] https://journals.openedition.org/histoirepolitique/5448

[14] The question of whether Germany can redeem its moral guilt (Schuld) by paying off a debt (Schuld) being, of course, hotly debated: https://www.juedische-allgemeine.de/politik/428-000-juden-erhielten-bislang-entschaedigung-vom-bund/

[15] https://www.assemblee-nationale.fr/dyn/16/textes/l16t0252_texte-adopte-seance#

[16] https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000047874541

[17] https://fr.wikipedia.org/wiki/Charte_europ%C3%A9enne_des_langues_r%C3%A9gionales_ou_minoritaires

[18] https://en.wikipedia.org/wiki/Kaliarda

[19] https://www.liberation.fr/societe/sante/a-lehpad-rendre-la-vie-plus-gay-20211123_JIWE73VGKVBGZMCRR4JWB3S2GU/

[20] https://www.rts.ch/info/suisse/13857685-apres-le-podcast-voyage-au-gouinistan-destination-le-vieillistan.html

[21] The Ministry of Culture thus had, for example, €4.2 billion in budget appropriations in 2023: https://www.culture.gouv.fr/presse/dossiers-de-presse/Budget-2023-du-ministere-de-la-Culture-Projet-de-loi-de-finances

[22] Starting, for example, with the legal deposit requirement instituted in 1537 by Francis I: https://www.bnf.fr/fr/quest-ce-que-le-depot-legal

[23] https://nbviewer.org/github/abzsimon/aidsberlinmap/blob/main/visu.ipynb

[24] https://www.ehri-project.eu/project-overview

[25] https://www.slatkine.com/fr/editions-slatkine/73808-book-05210948-9782832109489.html

[26] https://www.fondationshoah.org/recherche/lere-des-non-temoins-aurelie-barjonet

[27] https://www.courrierinternational.com/article/polemique-en-israel-un-extremiste-de-droite-la-tete-du-memorial-yad-vashem

[28] https://www.knowledgeofaids.net/

[29] https://press.princeton.edu/books/hardcover/9780691258966/ethics-of-the-algorithm

Soin de l’archive, archive du soin by Simon Apartis is licensed under CC BY-NC-SA 4.0

CIS @ EASST – 4S

What do we do about open science platforms?

In July 2024, researchers from CIS – Simon Apartis and Ramya Chandrasekhar – joined their usual band of collaborators to organise a closed panel at the EASST-4S conference in Amsterdam.

This conference marked the 2024 quadrennial joint meeting of the European Association for the Study of Science and Technology (EASST) and the Society for Social Studies of Science (4S). The theme of the conference was ‘Making and doing transformations’ in an era of grand societal challenges.

As part of this conference, Simon and Ramya helped co-organise and presented their research in a closed panel titled ‘Open Science Platforms: Empowering the digital transformation of science?’ Open science platforms are transforming research workflows, audiences and their practices, and eventually science as a whole, as well as its relation to society. Looking at these open science platforms, the panel sought to open up a broader discussion on the practices of making and doing openness in science.

The panel was convened by Marcel Wrzesinski (Humboldt University Berlin, Humboldt Institute for Internet and Society). The panelists presented findings from case studies relating to institutional repositories, a major open access platform, citizen science, gray policy literature, and clinical trial data sharing platforms – to illustrate how the meaning of “open science” is constantly renegotiated by stakeholders within the field. The panelists discussed theoretical reflections, empirical material, and social interventions from different perspectives as a means to problematize open science platforms, recognise their situatedness within the knowledge economy and their variegated governance practices, and propose strategies for openness and inclusivity.

This panel follows from an earlier gathering at the annual CIS conference of 2023, held in Paris. Simon Apartis and Simon Dumas Primbault convened a panel on ‘Assessing impact, sharing control: analyzing multi-stakeholder relationships in open science infrastructures’, which marked the start of Savoirs Partagés, a research project that curated eight public discussions on the open use of science and knowledge sharing in the digital age, in a variety of social contexts, involving actors from civil society, doctors, scientists, entrepreneurs, activists, autodidacts, editors, etc., and multiplying exchange and production formats.

Authors

Simon Apartis | Research Engineer (PathOS)
Ramya Chandrasekhar | Legal Researcher (ODECO)

[EN] SAvoirs PArtagés 5 / 8

Digital commons and open hardware



Partners

INSHS


Introduction


Interventions

1. Mattei Gheorghui


2. Evelyne Lhoste


3. Anaïs Bloch



Media

A podcast will be made on site during the day.


Coordination

Inno3 | Celya Gruson Daniel | A graduate of the École Normale Supérieure, with a master’s degree in cognitive neuroscience and a doctorate in social sciences (information and communication sciences, and science and technology studies), she has an interdisciplinary background that allows her to grasp with precision the various conditions of production and use of data and scientific publications in digital knowledge regimes.

She frequently teaches in various research institutes and higher education institutions, providing training in digital methodologies (data collection, analysis, and management). Today, she brings her diverse skills to consulting and research projects for Inno3, following an open action-research approach.

OpenEdition Lab | Simon Dumas Primbault | Simon is a CNRS junior professor at OpenEdition and Aix-Marseille Université, an associate researcher at the Bibliothèque nationale de France, and an associate researcher at the Laboratory for the History of Science and Technology (EPFL).

His research lies at the intersection of science and technology studies, ethnography, and media studies, and endeavours to shed light on the diversity of users and practices on Open Science platforms. More specifically, his current project is on the articulation between navigation practices and the discoverability of data across audiences.

Centre Internet et Société | Simon Apartis | Simon Apartis is a researcher within the PathOS project, co-coordinator of the SAvoirs PArtagés program and project manager of the “Petite Encyclopédie de la Science Ouverte”. He is the administrator of, and a regular contributor to, this scientific blog.

He studied philosophy at the Freie Universität Berlin. Before that, he worked as an administrator at CIS for one year and as a cook for two years. He is passionate about the intersections of queerness and Jewishness and the issue of the transmission of the Shoah and the Algerian War. He has also translated Siegfried Kracauer’s early work, Sur l’amitié.