
CIS-CNRS @ Workshop on data governance for Open Source AI

Ramya Chandrasekhar and Renata Avila were invited by the Open Source Initiative to a workshop on Open Data and Open Source AI.

The workshop was held in Paris on October 10–11, 2024. It was organised by the Open Source Initiative and Open Future, and hosted at Linagora’s Villa Good Tech.

The workshop brought together participants from research, civil society and the technology sector – including from Creative Commons, the Open Knowledge Foundation, Common Crawl, the Mozilla Foundation, GovLab and GIZ Fair Forward.

The objectives were to discuss data governance challenges in open source AI, and solutions to address data extractivism while preserving openness.

More information about the workshop can be found in a blogpost by the Open Source Initiative.

The workshop resulted in a white paper, authored by Alek Tarkowski of Open Future and co-designed with all workshop participants.

The white paper identifies the challenges in governing data that fuels open source AI. The white paper also offers a blueprint for a data ecosystem rooted in fairness, inclusivity and sustainability. Access the white paper here.

Peer reviewing issues: novelty, correctness and objectivity amidst obfuscated practices

Enka Blanchard

This short note was written as a consequence of noticing reviewing issues in a conference1 — which will not be explicitly mentioned here2. After a first fruitless exchange with the PC chairs and a second exchange with the steering committee members, I noticed that more fundamental issues were at hand and deserved to be made explicit. The goal is not to publicly attack anyone but to clarify these issues and hopefully change opinions and protocols (for this venue and maybe others).

But first, we must go back to basics, to the source of the system — before looking at how it can get perverted, and how the argumentation behind this change is akin to many that we observe in discussions of AI and algorithmic justice.

Why peer-review?3

What is the goal of peer-reviewing, the justification behind a process which is both imperfect and still expensive (in time and effort if nothing else)? The most commonly stated objective is to ensure that the articles that end up published are “correct”, or rather, that they follow a reasonable methodology and give solid evidence for their assertions — with the criteria varying highly between mathematics, exact and social sciences, and humanities. This reviewing step is often what separates science from non-science — both in our opinions as scientists and as a social practice.

Beyond deciding whether the research is correct (or at least methodologically sound4), a second objective exists (although it is more rarely made explicit), related to the first: gauging an article’s importance and novelty, i.e. guessing from the reviews whether readers will read (and cite) the paper, and whether they will consider it important enough to keep increasing the journal’s prestige (as well as its lobbying power to get subscriptions from academic institutions).

In many prestigious instances, the reasoning goes as follows: once the reviews come in, the editors or chairs start by removing the papers with obvious flaws. Then, they choose the most popular ones among the rest — which are all presumably correct. This is far from perfect, for two different reasons. First, the most popular papers tend to be the ones claiming the most surprising results (the novelty factor), which have a relatively high chance of not being replicable, stemming from statistical “luck”, mistakes or fraud, with prestigious journals in particular suffering from this problem. Second, whether an article is accepted or not is in many ways a question of luck, as shown in the famous NeurIPS study5.

Although this is already far from ideal, more serious concerns appear when we start looking at less prestigious venues, especially the ones on the lower end of the scale. This is no attack against them: any new venue tends to start there, and I strongly believe in the value of small conferences and non-selectivity (when it comes to importance rather than correctness, as we’ll see shortly). Smaller venues face higher risk: if the conference has too few attendees, it might not be eligible for funding and could be canceled or leave organisers in the red6. Having more presenters (even if the research is of lower quality) guarantees more attendance and a longer programme, which can seem a good idea. However, it creates an incentive to disregard serious issues in some papers in order to pad the list.

We should remember that this is already a best-case scenario which leaves aside a lot of serious issues. The profit motive, for one, especially when the conference has high registration costs7 (without even going into the question of predatory venues). One can also mention the issue of biased reviewers (who can guess whose work it is and either accept as a favour or reject to promote their own school of thought), and even conferences choosing to artificially reject a lot of excellent papers8 because it is a necessary condition to obtain a good rating/ranking from external observers (such as CORE), the rejection rate often being used as a proxy for the venue’s quality. But let’s get back to peer-reviewing.

Separating correctness and novelty

My training comes from a field quite specific in its practices (mathematics), in which correctness can be hard to evaluate. Or rather where being sure of it takes hard work and a lot of time. Evaluating the methodology of an empirical paper and making a thorough review often takes me a few hours at most (and sometimes can be done in 20 minutes). Mathematical papers with proofs can take upwards of a week of full-time work to check (and that’s for ones that are much much simpler than works like Perelman’s or Mochizuki’s — whose papers’ “evaluation” is still underway and extremely contentious after 12 years). This is, thankfully, not the general case (or peer-review simply would not work at all, rather than partially).

This very high cost of evaluation is one of the reasons why I now support open reviewing, as well as decoupling two different aspects in our reviews. The first is correctness, as above, which is the most time-consuming part. The other is novelty and importance, which can often be gauged by skimming the paper: if it draws the reader in and makes them want to reuse the ideas or build on the paper, that’s a good sign. Some venues already select only for correctness (such as PLOS ONE, which however suffers from a nasty profit motive, charging between $1,000 and $2,300 for online publication, although this is still well below PLoS Medicine).

There is a fundamental difference between correctness and novelty/importance. The first is, presumably, close to objective (as long as Feyerabend’s disciples are not around). Either flaws exist or they don’t, which depends on what is accepted as common methodological practice but evolves slowly. The second is much more fluid and subjective, depending on what interests the community. Any given venue should hopefully agree in its assessment of correctness, whereas novelty and importance can legitimately vary to a large extent.

Aggregating scores

I have once been on a PC where the set of papers that was accepted corresponded exactly to the set of papers whose aggregate rating was above a given threshold. This surprised quite a few PC members and was mostly considered to be a random outcome, as multiple papers were debated and scores played at most a small role. However, it also reflects the fact that the reviewers had already had time to change their scores and converge on most papers, at least the contentious ones.

Aggregating the ratings (with an accept typically being equivalent to “+2” and a reject to “-2”) can give a first-order approximation, which can potentially be made a bit more “accurate” by weighting each grade by the reviewer’s confidence. However, it is just that: an approximation. The strength of peer review lies in the review, not the grade given. If we understand correctness and novelty as two different dimensions, flattening them into a single metric becomes a problem, as we’ll see below by comparing a naive “algorithm” that uses the aggregate rating and a more refined one that uses the qualitative aspects.

Let’s consider a paper with two reviews : one of them judges the paper’s methods to be flawed and gives a reject (the importance being irrelevant). The second reviewer misses the flaw and considers that the claim that the paper gives a cure for cancer is of utmost importance, giving an “accept”. If we average out the scores, it becomes a borderline paper, and has a chance at being published. In a prestigious venue, even a single negative review can doom the paper so the problem rarely comes up. But in ones that have to accept less consensually excellent papers, the novelty factor can trump the correctness — whereas the latter should remain the primary consideration in peer-review.

Adding more reviewers can actually make matters worse. More reviewers per paper imply more reviews, more time spent, and less time spent per paper. It also means a higher tendency to ask reviewers to judge science that lies beyond their field of expertise. And any single reviewer that does their job with limited rigour introduces more risk of accepting papers with fatal mistakes. This is especially true as we generally consider that a “reject” needs to be motivated with a detailed review, whereas “this paper is excellent, we should accept it” can sometimes be considered a full review.

Running the numbers

To give a more practical example, let us consider a world where papers are judged only by the aggregate metric (with grades from -2 to +2), with the paper being accepted if it has a non-negative score. Let us also assume that each flaw has, on average, a 1/2 chance of being noticed by a given reviewer (which is more generous than my recent observations in multiple PCs), in which case that reviewer gives an automatic reject (-2), and that otherwise the ratings are random (which is a strong component, at least for importance, see footnote 5).

With two reviewers, a correct paper has a probability 15/25 = 60% of being accepted. On the other hand, a flawed paper gets in with probability 25%. With three reviewers, this becomes 57.6% and 16.2%. This is not great but not catastrophic. If the probability of noticing the flaw drops to 1/4, however, the flawed paper’s probability of getting accepted jumps to 41.3% and 34.4% respectively. We can also consider that each reviewer might be lazy (always giving an accept and never catching the flaw) with probability 1/4, for example. With two reviewers, a correct paper will then get accepted with probability 77.5% and a flawed one with probability 57.8%.
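
These figures can be checked by brute force. The sketch below enumerates the model as described above (uniformly random grades in {-2, …, +2}, a reviewer who notices the flaw giving a -2, acceptance when the sum is non-negative); it is an illustrative reconstruction rather than anything a conference actually runs, but it reproduces the percentages just quoted. The lazy-reviewer variant can be added by letting a reviewer give +2 regardless of the paper.

```python
from itertools import product

GRADES = range(-2, 3)

def p_accept_aggregate(n_reviewers, p_notice=0.0):
    """Acceptance probability under the aggregate rule (sum of grades >= 0).

    p_notice is the per-reviewer chance of catching the flaw (0 for a correct paper);
    a reviewer who catches it gives -2, the others grade uniformly at random."""
    total = 0.0
    for noticed in product([True, False], repeat=n_reviewers):
        p_subset = 1.0
        for caught in noticed:
            p_subset *= p_notice if caught else (1 - p_notice)
        free = noticed.count(False)
        accepted = sum(1 for grades in product(GRADES, repeat=free)
                       if sum(grades) - 2 * noticed.count(True) >= 0)
        total += p_subset * accepted / len(GRADES) ** free
    return total

print(p_accept_aggregate(2))        # 0.600   correct paper, two reviewers
print(p_accept_aggregate(3))        # 0.576   correct paper, three reviewers
print(p_accept_aggregate(2, 0.5))   # 0.250   flawed paper, flaw caught half the time
print(p_accept_aggregate(3, 0.5))   # 0.162
print(p_accept_aggregate(2, 0.25))  # 0.4125  flawed paper, flaw caught a quarter of the time
print(p_accept_aggregate(3, 0.25))  # ~0.344
```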

Now let’s consider not simply aggregating the notes but looking at the reviews in details. Any reviewer who sees the real flaws pointed by others would presumably align their views (or at least, a vast majority should), whereas judgements of importance should be left mostly untouched (or rather, in a symmetrical fashion). Hence, if a single reviewer catches a mistake, the paper gets rejected, otherwise it depends on random judgements as above. This would not change the probability of getting accepted for correct papers, but the probability for flawed ones would drop : 15% with two reviewers and 1/2 chance of finding the flaw (33.8% if the chance of finding the flaw is 1/4). Respectively 7.5% and 25.3% with three reviewers. Even with the lazy reviewer, as long as they still update their reviews, the probabilities remain manageable with more reviewers : 53.3% for two, 22.6% for three and it decreases quickly afterwards.

Reasonably, this gives us a simple algorithmic process for limiting the publication of flawed papers. Look at the reviews qualitatively and, if any flaw is found, ask the other reviewers to confirm and update (or show that the flaw was due to a misinterpretation, which can happen). Once this is done, consider that the average grade probably reflects the importance and, if truly needed, use an averaging method. A better way, though, would be to have a PC discussion, which also allows questions of which themes should be encouraged or how to give more opportunities to some types of work or to younger researchers. And when in doubt, if a reviewer finds fatal errors that the others can’t explain away, reject the paper.

The case at hand

The algorithm above is what I’ve seen performed and how I tend to work. Even in conflictual cases (especially in social science journals), where concerns over methodology can be harder to arbitrate, it works quite well in my experience. This brings us to the present, however, and to a conference for which I’ve been reviewing for a few years. All the problems I’ll point to in what follows were presumably already present, but I did not or could not notice them at the time, which also explains how I chose to act.

I was given three papers to review. All of them were flawed in my opinion, albeit to different extents. The worst offender featured — among multiple other issues — a user study meant to evaluate how usable and easy to understand a design was, except that the participants in the study were the system designers themselves (and they did not give it a great grade, proof of their honesty maybe, but also a worrying indicator). I wrote a detailed review pointing out the paper’s flaws and the fact that it contained no real science (which could not have been fixed by the conference deadline), and I was not the only one to do so. I did not advocate for a strong reject (except in the PC-only comments), as that option was restricted to plagiarism or out-of-scope submissions. However, two other reviewers decided to accept the paper, with minimal reviews. My comments were left unread and unheeded, so I left the matter in the hands of the PC chairs, thinking that they would follow the reasonable algorithm. The other two papers I also rejected, although not as harshly. I believe this was mostly a question of (bad) luck rather than overly strict standards on my part, as I have previously written positive reviews for this conference.

It thus came as a great surprise when I saw that all three papers had been accepted. I followed protocol and messaged the chairs to get an explanation, which brings us to the final part of this piece.

Objectivity and algorithmic justice

After sending a detailed complaint to the multiple PC chairs, I received a single email from the main chair, featuring the following (as excerpts, to protect anonymity):

We trust […] the power of collaborative decision making (we focused on the average review score for each submission, weighted by reviewer’s confidence). […]
The acceptance [for the conference] was taken by focusing on submissions with nonnegative average review scores (weighted by reviewer’s confidence).

I was once again quite surprised by this answer: I had thought the paper had fallen through the cracks, not that its acceptance was due to strict adherence to policy. The decision method detailed above is badly flawed (especially in the presence of bad actors), as even had I put a “strong reject”, it would not have changed the outcome.

I decided to escalate and contacted all members of the steering committee with a summary of the argumentation above, expecting not that they would reject the paper (it was too late) but that they could update the visibly flawed procedures. I then received a new email which allowed me to see the extent of the methodological flaw.

I agree that many reviews are minimal and don’t give enough arguments/comments to justify the proposed decision and help improve the paper. However, relying on qualitative evaluation is unacceptable since this may lead to subjective decisions. Please note that the score is weighted with the expertise.
The discussion between reviewers of a paper is always encouraged but a meeting with all reviewers is not possible for practical reasons. We had always a delay with the reviewing process and we never had enough reviewers.

To summarise, the person responsible agreed that the reviews were of bad quality, but considered that averaging what can be considered equivalent to random noise would be better than risking making “subjective decisions”. This is compounded by the fact that the decision-making procedures were not collegial and the whole process was not transparent at all. Using algorithmic protocols to decide hard cases is an option (what happens if two reviewers disagree on whether a paper is correct and both are persuaded they are right?), but this is not the present use case. Instead, it is using an algorithm to outsource the decision-making in order to limit accountability.

This is a pattern that people have increasingly denounced over the last decade: using an algorithm not as a way to make better decisions, but to take the decision out of human “subjective” hands and put it into the machine to give it a veneer of objectivity. It is still just as subjective as before, however; it is just one step removed. Automatic decision-making from flawed data combines the worst of both worlds: it is garbage in, garbage out, but without any clarity on why the decision is taken and with a much harder time critiquing it. Moreover, for “collective decision-making” to work, some very specific constraints are required (Condorcet’s theorem, for example, requires independence among jury members, which is nearly impossible to achieve).

And yet it is a repeating pattern, especially in discussions on AI or about the algorithms used to deny people social benefits, without a clear decision-maker who can be held responsible for the massive error rates9. Making the world less accountable and more obfuscated is an issue, and it contaminates more and more fields. In research, allowing such practices to continue reduces the incentives for reviewers to do their job correctly. It also means that authors can afford to simply resubmit their papers many times without making changes, in the hope of getting lucky once10. All of which makes the system increasingly random and worse.

We should ask why we want to publish as much as we do and how those perverse incentives work. Why is rejecting an article a problem? Because it creates problems for the authors? Isn’t that putting the metric above the objective that the metric was supposed to represent (the creation of good science)? Moreover, considering the actual state of things, rejecting an article will not even prevent it from being published, just delay it somewhat11, and the authors might not even take the opportunity to make any modification. Many colleagues are starting to question our publication practices, which is good (see for example https://nofreeviewnoreview.org/, or the switch to diamond open access to bypass the profit motive in scientific publishing). However, we should also look into decreasing the workload of reviewers by rejecting the injunction to publish only in the best venues12, which pushes us to spam them in the hope of getting lucky once or twice before settling for a lower-ranked venue (thus multiplying the reviewing work done by a factor of 2 to 5 for no real gain for the community).

Comments are welcome below but will be moderated before posting and anything mentioning the people responsible by name (or the name of the conference) will be deleted. The goal is not to publicly attack anyone but to change opinions and hopefully protocols (for this venue and others).

Footnotes

  1. People outside of computer science might be interested to learn that computer science conferences are often as prestigious as journals (if not more) and count as full publications for the usual metrics. Unlike fields where one sends an abstract and might get a quick review, the process requires the authors to send a full paper, as for a journal, already formatted to the conference’s expectations. The reviewing is then generally double-blind, with exchanges online among reviewers (through platforms such as EasyChair) and sometimes a meeting of all reviewers to decide which papers get included in the final conference programme. Often, this includes efforts to get disagreeing reviewers to discuss and sometimes modify their reviews somewhat, with the goal of converging towards a consensual decision, although this is not systematic. ↩︎
  2. I have made the choice of warning the PC members by email and sent them the link to this note (after I gave notice to the chairs that I would quit the PC). This choice by itself was controversial among some colleagues who were against the idea of stirring trouble, but corresponds to a deontological obligation in my opinion. Conferences and journals depend on our volunteer work and use our names and affiliations to appear serious and prestigious. As such, we should be held responsible for the use of our support, which requires transparency, such as by making such matters public (only for the ones concerned).
    Moreover, this is not a flaw that is directly visible: it took bad luck with my own reviews for me to look into the decision-making process (whose opacity is in itself problematic, though I initially attributed it to the conference being quite young and not yet having refined its processes, which could involve some manual fiddling).
    Edit on 26-06-2024: more than a week after contacting the PC (which happened before the conference, which is now over), I have yet to receive a reply or feedback from any of the 20 people I emailed. ↩︎
  3. It saddens me that one still needs to insist on why peer review matters today. For another anecdote of bad practices: a while ago, I received a scam invitation to a conference that seemed predatory. I saw that a colleague (end of career, research director and head of a lab) was a committee member, so I decided to contact her and ask if the venue was good. She said she was not sure but had accepted their invitation a while back, so I proposed looking into it. After a bit of sleuthing, I found that it was a complete scam: authors could become “invited keynote speakers” by paying enough, and the only goal was to pad CVs, with no real science behind it. I mentioned as much to the colleague, expecting her to quit the PC. Instead she replied that she saw no real issue with that, and asked whether “peer-review is really that important for science”. ↩︎
  4. A common attack on peer review is that it lets through a lot of papers with fraud or wrong data, which eventually get attacked on sites like https://retractionwatch.com/. This is a misconception: reviewers generally cannot detect fraud, only methodological issues. Fraud detection depends much more heavily on replication, which is a different issue altogether. Although sometimes, data or analysis feature evident problems (such as when scientists proudly present multiple results with p<0.01 from a sample of 6 individuals observed once each). ↩︎
  5. Many of the issues in bibliometrics and peer-reviewing can be found in my article with Zacharie Boubli (in French): Blanchard, Enka, and Zacharie Boubli. “Recherche et dogmatisme: de l’improductivité du productivisme.” Questions de communication 42 (2022): 255-277.
    For the central references (in English) pertaining to this very issue, a short list can be found at the bottom of the article. ↩︎
  6. I am currently trying to organise the second edition of a conference with a colleague, and this is one of a few central issues: what happens if the number of papers submitted is too small or too big by a wide margin? Both cases are problematic in very different ways, especially if one wants to keep registration costs as low as possible (in which case each additional attendee comes at a net cost to the organisers). ↩︎
  7. I have seen some with standard registration costs between €1,000 and €2,000 (such as CHI or Usenix Security). I have also seen others of similar quality with costs below €100, which sounds reasonable to me, or ones around €500 that include lodging and food for 5 days. This discrepancy is a source of concern in my opinion. ↩︎
  8. A notable conference guilty of this (according to my sources who participated) was ICALP, already considered one of the top if not the top international conference in its field. ↩︎
  9. This article is edifying, but it is only one among too many to count:
    https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/ ↩︎
  10. To be fair, I am guilty of this for one of my papers. It came about when we realised, while trying to find references, that there was a gaping hole in the literature, and decided to do an empirical study to prove what everyone thought had already been shown (and we had no real surprise there). It took years to publish, with at least 7 rejections, and in all but one case the paper was rejected because at least one reviewer said “We already know these results, I am sure there are papers on this”, without ever giving a reference. Demands that the complaining reviewer show this reference were always left unanswered (probably because the assumed references never existed in the first place). ↩︎
  11. A similar argument has been made by Jacob and Lefgren in 2011 on the selectivity of research funding (see the reference below). ↩︎
  12. As a matter of personal policy, I do not submit to venues with an acceptance rate under 25%, and try to avoid those who are proud of having a low acceptance rate, while favoring venues with 40-60% rates. ↩︎
Main references

Pier, E. L., Brauer, M., Filut, A., Kaatz, A., Raclaw, J., Nathan, M. J., & Carnes, M. (2018). “Low agreement among reviewers evaluating the same NIH grant applications.” Proceedings of the National Academy of Sciences, 115(12), 2952-2957. https://doi.org/10.1073/pnas.1714379115

Jacob, B. A., & Lefgren, L. (2011). “The impact of research grant funding on scientific productivity.” Journal of Public Economics, 95(9-10), 1168-1177.

Dougherty, M. R., & Horne, Z. (2022). “Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences.” Royal Society Open Science, 9(8), 220334. https://royalsocietypublishing.org/doi/10.1098/rsos.220334

Cortes, C., & Lawrence, N. D. (2021). “Inconsistency in conference peer review: Revisiting the 2014 NeurIPS experiment.” arXiv. https://doi.org/10.48550/arXiv.2109.09774

Bornmann, L., Mutz, R., & Daniel, H.-D. (2010). “A reliability-generalization study of journal peer reviews: A multilevel meta-analysis of inter-rater reliability and its determinants.” PLoS ONE, 5(12). https://doi.org/10.1371/journal.pone.0014331

[EN] SAvoirs PArtagés 5 / 8

Digital commons and open hardware



Partners

INSHS


Introduction


Interventions

1. Mattei Gheorghui

Summary

2. Evelyne Lhoste

Summary

3. Anaïs Bloch

Summary


Media

A podcast will be made on site during the day.


Coordination

Inno3 | Celya Gruson Daniel | A graduate of the École Normale Supérieure, with a master’s degree in cognitive neuroscience and a doctorate in social sciences (information and communication sciences and science and technology studies), she has an interdisciplinary background that allows her to finely grasp the various conditions of production and use of data and scientific publications in digital knowledge regimes.

She frequently teaches in various research institutes and higher education institutions, providing training in digital methodologies (data collection, analysis, and management). Today, she leverages her diverse skills in consulting and research projects for Inno3, following an open research-action approach.

OpenEdition Lab | Simon Dumas Primbault | Simon is a CNRS junior professor at OpenEdition and Aix-Marseille Université, associate researcher at the Bibliothèque nationale de France, and associate researcher at the Laboratory for the history of science and technology (EPFL).

His research lies at the intersection of science and technology studies, ethnography, and media studies, and endeavours to shed light on the diversity of users and practices on Open Science platforms. More specifically, his current project is on the articulation between navigation practices and the discoverability of data across audiences.

Centre Internet et Société | Simon Apartis | Simon Apartis is a researcher within the PathOS project, co-coordinator of the SAvoirs PArtagés program and project manager of the “Petite Encyclopédie de la Science Ouverte”. He is the administrator and regular contributor to this scientific blog.

He studied philosophy at Freie Universität Berlin. Before that, he worked as an administrator at CIS for one year and as a cook for two years. He is passionate about the intersections of queerness and jewishness and the issue of transmission of the Shoah and the Algerian War. He has also translated Siegfried Kracauer’s early work, Sur l’amitié.

[EN] SAvoirs PArtagés 4 / 8

Heritagization of memories and digital commons of the archive

February 14, 2024
Seminar from 10:00 to 12:00, followed by a guided tour of the BNF from 12:00 to 13:00
Bibliothèque François-Mitterrand, site Tolbiac, Quai François Mauriac Paris 13e
Aquarium in Hall Est, small room in Haut-de-jardin
& by videoconference
Registration link



Partners

INSHS


Introduction

In a well-known passage from The Lost [1], Daniel Mendelsohn relates a no less famous episode from Virgil’s Aeneid, itself based on Odysseus’ visit to Alcinous in Homer’s Odyssey. Following the fall of Troy, Aeneas wanders across the Mediterranean and arrives in Carthage. There, he finds frescoes on the walls of a temple depicting the Trojan War. Aeneas bursts into tears:
“For the Carthaginians, the war is just a decorative motif, something to adorn the walls of their new temple; for Aeneas, of course, it means much more, and as he stands looking at this picture, which is a picture of his life, he bursts into tears. […] What Aeneas says, as he looks at the worst moment of his life decorating the wall of a shrine in a city of people who do not know him and had no part in the war that destroyed his family and his city, is this: sunt lacrimae rerum, “There are tears in things.”

What makes the past into an archive? How can these archives, and the knowledge around, about and drawn from the past, be shared? What modes of collective organization and what technical tools make it possible to reclaim it, in the present of memorial struggles? On the one hand, there are the traditional institutional venues that disseminate the archive on a large scale – at the risk, sometimes, of its heritagization. There are also those who seek to further democratize its distribution and production. On the other hand, more recently, digital commons have developed, which attempt to connect a greater heterogeneity of actors and produce a more horizontal governance. Not to mention the “spontaneous” mode of circulation of the past on the social networks of large digital capitalist companies, such as Instagram or Facebook.

Transmitting memory involves constituting the trace as archive, moving from the spontaneity of an immanent life form to its reflexive registering and indexing. Depending on the amount of time that passes, the memory at stake, the political issues attached to it, the sociological properties of those interested in its production and uses, but also on the state of the socio-technical devices that organize the archive and its circulation, the passage from lived experience to its archived form can give rise to a multiplicity of uses (aesthetic, political, memorial, speculative, patrimonial, etc.), sometimes quite different from the immediate experiential context whose traces the archive collects. This gap presents a number of risks and promises. As Enzo Traverso said of the Shoah: “the risk is not [to forget it] but to make a misuse of its memory, to embalm it, to lock it up in museums and neutralize its critical potential, or worse, to use it as a means to an apology of the current world order” [2].

Memorial issues are still vivid and plural in France. Just think of the memory of the Algerian War, which still haunts millions of descendants of Algerian harkis, conscripts, Muslims and Jews, and pieds-noirs [3], [4], and which recently gave rise to the Stora report [5] (followed by a few diplomatic stirrings), despite archives that are still all too often classified and inaccessible. Or of the memory of the Shoah, which was at first silenced by the resistancialist myth and was then reframed into a moral duty, even perhaps, as some say, a “civil religion” [6], whose principle of moral edification (remembering the past so it does not happen again) has been heavily criticized for its ineffectiveness [7], as much as for its characteristic tendency to seem more concerned with how Jews died than with how they lived [8]. Let us think, finally, of LGBT memory, in the throes of change, increasingly valued and captured by cultural institutions linked to a state [9], [10] which, until 1982 in France, nevertheless criminalized homosexuality (something it now wishes to redress [11]).

In these three cases, the constitution and circulation of archives and memories raise numerous questions: far from being mutually exclusive, these memories can continue to inform contemporary identities in a cumulative way, not least because one can be queer, Jewish and Arab and therefore experience the complex coexistence of various memorial legacies within oneself:

“What relationships should we have with the dead? How do we last through time? Do AIDS, gender or sexuality have a place in the museum? How can the memory of a social and political struggle or of a minority culture be constituted and transmitted? How can we keep this memory alive, active and meaningful? What does it mean to collect a trace? What do they tell us, what do they make us do? Can the state ingest without hiccups the archives and objects of groups and individuals who fought against it or exist on its margins, and can these groups and individuals give them to it without contortion?”[12].


[1] Mendelsohn, Daniel. The Lost: A Search for Six of Six Million.
[2] Op. cit., p. 80.
[3] Morin, Paul Max. 2022. « Leur guerre d’Algérie : enjeux de mémoire dans la socialisation politique des jeunes Français ». Thèse de doctorat, Paris, Institut d’études politiques. https://www.theses.fr/2022IEPP0002
[4] Branche, Raphaëlle. 2020. Papa, qu’as-tu fait en Algérie ? Enquête sur un silence familial. Paris : La Découverte.
[5] https://www.vie-publique.fr/rapport/278186-rapport-stora-memoire-sur-la-colonisation-et-la-guerre-dalgeri
[6] Traverso, Enzo. 2005. Le passé, mode d’emploi. Paris : La Fabrique.
[7] Gensburger, Sarah, and Sandrine Lefranc. 2017. À quoi servent les politiques de mémoire ? Paris : Presses de Sciences Po.
[8] “I made a point of asking the audience during my speeches if they could give me the names of three concentration camps, and then three Yiddish authors, the language spoken by over 80% of the death camp population. Why, I asked, are we so interested in how people died, when we care so little about how they lived?” in Dara Horn, People Love Dead Jews, XXI, introduction.
[9] https://expo-homosexuels-lesbiennes.memorialdelashoah.org/
[10] https://www.centrepompidou.fr/fr/magazine/article/over-the-rainbow-de-lautre-cote-des-luttes
[11] https://www.radiofrance.fr/franceculture/podcasts/le-temps-du-debat/quelle-reparation-pour-les-politiques-de-criminalisation-de-l-homosexualite-4644501
[12] Chantraine, Renaud. « La mémoire en morceaux. Une ethnographie de la patrimonialisation des minorités LGBTQI et de la lutte contre le sida ». https://www.ehess.fr/fr/soutenance/m%C3%A9moire-en-morceaux-ethnographie-patrimonialisation-minorit%C3%A9s-lgbtqi-et-lutte-contre-sida


Interventions (10:00 – 12:00)

1. Alexia Levy-Chekroun (30 min)

Alexia Levy-Chekroun is currently preparing her PhD at the Ecole des Hautes Etudes en Sciences Sociales, under the supervision of Sebastien Tank-Storper (CéSor – CNRS). She holds a Master 2 in Public Affairs from the Université Paris 1 Panthéon-Sorbonne and a Master 2 in Political Studies from the EHESS. She also completed an academic exchange at New York University.

Summary

I will present my ongoing research on the controversies and conflicts of interpretation relating to the “decolonization” of Judaism, as observed through a corpus of data shared on Instagram by actors who self-identify as Jewish. I will explore the mobilizations and processes of memorial fixation and storytelling, as well as the differentiated political uses of past Jewish experiences and forms of life in North Africa and the Middle East.

2. Renaud Chantraine (30 min)

After studying art history and museology at the École du Louvre, Renaud Chantraine defended a thesis in anthropology in November 2021 on the heritagization of LGBTQI minorities and the fight against HIV/AIDS. His investigations are based on three years of experience at the Musée des Civilisations de l’Europe et de la Méditerranée, extensive fieldwork in the Netherlands and Germany, and his involvement in several activist projects – at the Collectif Archives LGBTQI in Paris and Mémoire des sexualités in Marseille. His current research, carried out as part of a post-doctoral contract (Sidaction Fellow) at SESSTIM in Marseille, proposes a mapping and analysis of the various arrangements used for making, preserving and transmitting archives and memories of the fight against HIV/AIDS.

Summary

For several years, I’ve been exchanging and working with Jean Marcel Michel, an activist involved in the 1980s and 1990s in Marseille in several associative projects at the crossroads of the homosexual movement and the fight against HIV/AIDS.

In 2013, before leaving the city and after sorting and organizing the archives linked to his commitments, he contacted the Bouches-du-Rhône Departmental Archives, which agreed to receive this collection of documents as a donation. This collection of private archives is exceptional: on the one hand, because there is no equivalent dealing with these political and social issues in the institution’s holdings; on the other hand, because it complements, by serving as a counterpoint, the fonds from administrations and public organizations that the public archives already preserve.

Working on questions of heritage, memories and archives of the fight against HIV/AIDS in Marseille in particular, I contacted the Archives Départementales in August 2019, on the advice of Jean Marcel Michel, in order to gain access to his archive holdings. This was the start of a series of disappointments I shared with him. Initially, we were told that the collection had not yet been catalogued and therefore could not be consulted, before being informed, once cataloguing had been carried out, that due to the presence of documents “of a confidential nature” a 50-year embargo was imposed before the collection could be accessed.

In my intervention, I will seek to identify and analyze the different logics – of sharing or blocking – put forward by the patrimonial institution and by the donor. The conflictual nature of this case, where archival reasoning, legal issues, feelings of anger, dispossession and censorship come into tension, also questions the conditions under which it is possible to do research today on the memory of the “AIDS years”.   

3. Discussion: Irene Bastard (1h)

Project Manager for publics and uses at the French National Library (BnF) | As a sociologist, I carry out studies on the BnF’s publics. This work is rooted in sociological research activities, using appropriate theoretical frameworks and methods, and is integrated into the institution’s activities. The aim is to explore the profiles of conference attendees, to understand the practices of researchers and the paths taken by Internet users on Gallica and other websites, and to answer more specific requests from the BnF about its services and future developments.

Guided tour of the BNF (12:00 – 13:00)


Media

A podcast will be made on site during the day.


Coordination

Centre Internet et Société | Simon Apartis | Simon Apartis is a researcher within the PathOS project, co-coordinator of the SAvoirs PArtagés program and project manager of the “Petite Encyclopédie de la Science Ouverte”. He is the administrator and regular contributor to this scientific blog.

He studied philosophy at Freie Universität Berlin. Before that, he worked as an administrator at CIS for one year and as a cook for two years. He is passionate about the intersections of queerness and jewishness and the issue of transmission of the Shoah and the Algerian War. He has also translated Siegfried Kracauer’s early work, Sur l’amitié.

Inno3 | Celya Gruson Daniel | A graduate of the École Normale Supérieure, with a master’s degree in cognitive neuroscience and a doctorate in social sciences (information and communication sciences and science and technology studies), she has an interdisciplinary background that allows her to finely grasp the various conditions of production and use of data and scientific publications in digital knowledge regimes.

She frequently teaches in various research institutes and higher education institutions, providing training in digital methodologies (data collection, analysis, and management). Today, she leverages her diverse skills in consulting and research projects for Inno3, following an open research-action approach.

OpenEdition Lab | Simon Dumas Primbault | Simon is a CNRS junior professor at OpenEdition and Aix-Marseille Université, associate researcher at the Bibliothèque nationale de France, and associate researcher at the Laboratory for the history of science and technology (EPFL).

His research lies at the intersection of science and technology studies, ethnography, and media studies, and endeavours to shed light on the diversity of users and practices on Open Science platforms. More specifically, his current project is on the articulation between navigation practices and the discoverability of data across audiences.

[EN] SAvoirs PArtagés 1 / 8

Assessing impact, sharing control: analyzing multi-stakeholder relationships in open science infrastructures

The opening SAPA event will take place during the CIS days, starting with an academic discussion about open science impact assessment and the ecosystemic approach to open science and open data infrastructures.

Practicalities

Date: Thursday, Oct. 5th, from 3 pm to 4.30 pm
Place: 59-61 rue Pouchet and online
Panel held in English, followed by a final discussion from 4.30 to 5.15 pm and a cocktail until 6 pm


Participants

Labs

Partners of the SAPA program

INSHS


Related CIS projects


With the special participation of


People

Guests

CIS | Ramya Chandrasekhar | Lawyer and researcher. She is working on the Open Data Ecosystem (ODECO) project – a 4-year Horizon 2020 Marie Skłodowska-Curie Innovative Training Network initiative (H2020-MSCA-ITN-2020).

Humboldt Institut für Internet und Gesellschaft | Freia Kuper | Researcher in the research project Organizational Creativity and Resilience: Exploring the Future of Educational Technology (ORC) within the research programme “Knowledge and Society”.

Humboldt Institut für Internet und Gesellschaft | Marcel Wrzesinski | Open Access officer and head of the BMBF project “Scholar-led Plus“. His research focuses on the governance and infrastructures of scholarly communication and distribution.

Alan Turing Institute | Bastian Greshake Tzovaras | Bastian is a Senior Researcher within the Tools, Practices, Systems programme of the Alan Turing Institute in London, where he and his team are working on how to enable the co-creation of citizen science projects.

Coordinators

Centre Internet et Société | Simon Apartis | Simon Apartis is a researcher within the PathOS project, co-coordinator of the SAvoirs PArtagés program and project manager of the “Petite Encyclopédie de la Science Ouverte”. He is the administrator and regular contributor to this scientific blog.

He studied philosophy at Freie Universität Berlin. Before that, he worked as an administrator at CIS for one year and as a cook for two years. He is passionate about the intersections of queerness and jewishness and the issue of transmission of the Shoah and the Algerian War. He has also translated Siegfried Kracauer’s early work, Sur l’amitié.

OpenEdition Lab | Simon Dumas Primbault | Simon is a CNRS junior professor at OpenEdition and Aix-Marseille Université, associate researcher at the Bibliothèque nationale de France, and associate research at the Laboratory for the history of science and technology (EPFL).

His research lies at the intersection of science and technology studies, ethnography, and media studies, and endeavours to shed light on the diversity of users and practices on Open Science platforms. More specifically, his current project is on the articulation between navigation practices and the discoverability of data across audiences.

Summary

Open science platforms make up a new type of meeting point between scientific supply and demand. They increase:

  1. The variety of available scientific resources (journal articles, educational toolkits, raw and processed data, software, research notebooks),
  2. Their findability, accessibility, interoperability and reusability and
  3. The variety of possible usages and users of those resources (especially societal and economic).

These changes both cause and are caused by a recomposition of the stakeholders contributing to, using, operating, financing and regulating those infrastructures, in a way unheard of in previous knowledge regimes: governments, research funding organizations (RFOs), industry, publishers, research producing organizations (RPOs), research communities, civil society and the general public.

Proper impact assessment of open science policies is thus required to:

  1. Better understand the academic, economic and societal effects those platforms and other open data commons have
  2. Figure out whether the actual impact is indeed in line with the promoted and desired objectives
  3. Base future public policies on this evidence.

But impact assessment only indirectly addresses the question of infrastructure control, in particular the disparities between large-scale and grassroots actors, which show especially in so-called “citizen science”, and the possible solutions to mitigate power imbalances, especially by using law as an incentive.

We will discuss those questions by presenting the latest findings and research questions of two EU-funded open science projects hosted at CIS: the ODECO project, presented by Ramya Chandrasekhar, and the PathOS project, presented by Simon Apartis. Freia Kuper and Marcel Wrzesinski from the Humboldt Institute for Internet and Society will shed light on the discussion from their expertise on impact assessment and open science publishing. Simon Dumas-Primbault, holder of the open science chair at OpenEdition, will moderate the panel.

Full report

Media

[EN] PathOS Datasprint #1

About

The first PathOS datasprint took place on June 7th at the Centre Internet et Société, from 10 am to 7 pm, gathering teams from CIS-CNRS, OpenEdition, Ouestware, INIST, MESR and OPSCI. The event brought together a total of 10 people.

We began with a general PathOS presentation, followed by individual presentations from each of the teams. These presentations focused on the projects they are conducting, which could connect to and be helpful for the PathOS case study. These projects either involve the study of uses of open science platforms or the analysis of connection logs to them.

In the afternoon, we dedicated our time to studying the design, requirements, and technical challenges of a log analysis tool that Ouestware will develop. We divided into two groups: one called “dream“, where we brainstormed all the desired features of the tool, and another called “reality“, where we delved into all the technical challenges associated with such a tool.

We concluded the datasprint by reconciling these two perspectives to establish a plan for a feasible tool. Three work phases, each targeting specific objectives, were planned.

Report

10:00 – 10:30: Round table and participants’ introductions

10:30 – 10:50: General presentation of the PathOS project and its expectations towards OpenEdition’s logs

The PathOS project is a European project focused on econometrics and the evaluation of public policies. The goal is to examine and, if possible, measure the societal, economic and academic impacts of Open Science. CIS-CNRS’ main task is in Work Package 3 (WP3), which gathers several case studies run by PathOS’ partners in different countries and on specific cases. The idea of the case studies is to explore and test the theoretical impact model and indicators developed in the rest of the project, especially in WP1 (see D1.1) & WP2 (see handbook). CIS-CNRS’ case study focuses mainly on societal impact indicators.

PathOS seeks a causal approach, which CIS-CNRS will try to stick to while taking a broader, exploratory point of view on the usage of OS products – more appropriate to a platform-level analysis. This involves exploring usage data (user logs) from the OpenEdition platform and, in the next datasprints, from HAL and Recherche.Data.Gouv, as explained here. We will start from two previously developed projects, “Usages Alpha” and “Appropriation du savoir ouvert,” which analyzed temporal usage patterns (detection of viewing peaks) and user categorization. The results of this exploration could be compared with logs from other pirate or paid platforms, allowing for a differential understanding of the functioning of OS platforms.

10:50 – 11:40: Introducing past and current work on the uses of open science on the OpenEdition platform

Equipex Commons | Elodie Faath | OpenEdition

Coordinated by Elodie Faath and closely linked to the OpenEdition Lab department run by Simon Dumas Primbault, this project benefits from long-term Equipex funding (2021–2029) and aims at better coordinating Huma-Num, a French infrastructure for digital humanities, Métopes, a platform that coordinates the publishing activities of higher education organizations in France, and OpenEdition, in order to systematically link research data to the scientific publications which produce and make use of this data. The Commons project prioritizes interoperability between OpenEdition and Huma-Num but aims more generally at interlinking all OpenEdition resources with any data hosted on repositories outside of Huma-Num.

The Commons project also aims at modelling the use cases of Open Science platforms, automating the mutual recognition between them, and creating a user experience of Open Science platforms that is as seamless as possible. It will thus work on the FAIRification of data, the opening of publications, and the upgrading of the open science literacy of users and producers. For this last part, it also collaborates with the CCSD and the HAL team in order to find ways to better identify the authors of publications and data.

The observatory of uses | Simon Dumas Primbault | OpenEdition

The observatory is developed within the Commons project and will be launched at the end of 2023. It aims at analyzing the uses of Huma-Num, Métopes and OpenEdition through quantitative and qualitative methods. The observatory will be supervised by Simon Dumas Primbault and will hire two PhD students. Simon’s broader research project also encompasses a more open and exploratory dimension divided into three parts:

A historical part on selected samples of OS platforms
A semiotic part analyzing the interface of OS platforms and their affordances
A socio-ethnographic part based on qualitative, semi-structured interviews of users and a quantitative analysis of user journeys on the platforms

Umberto, the unexpected reader | Pierre-Carl Langlais | OPSCI

Presentation

Log analysis is a rather old field of inquiry, which however has not developed as much as expected because of difficulties in accessing the data. Matomo – a usage data collection software which includes bot detection and data pre-processing – has helped change the situation.

To identify “unexpected readers,” merely observing outliers in relation to an average usage profile is inadequate. This is because OpenEdition publications exhibit diverse usage patterns, rendering a simple “average” ineffective. Utilizing a basic average for comparison results in the detection of an excessive number of outliers. Consequently, we needed to operationalize a rather intricate concept of the average usage profile.

Variations from the typical reading patterns generally align with peaks in the visitation of a specific resource. By default, our analysis excluded the first few days following publication, as they consistently exhibit heightened activity. We identified significant deviations and narrowed our focus to specific instances of them in order to try to understand their main factors.
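
As a rough illustration of the general idea (and explicitly not the actual Umberto implementation, whose notion of a typical usage profile is more elaborate), one can compute a per-resource baseline and flag the days that deviate strongly from it:

```python
import statistics

def detect_peaks(daily_views, skip_first_days=7, threshold=3.0):
    """Return the indices of days whose view counts deviate strongly from the resource's own baseline.

    The first days after publication are skipped, since they are always unusually active."""
    baseline = daily_views[skip_first_days:]
    if len(baseline) < 2:
        return []
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid dividing by zero on a flat series
    return [day for day, views in enumerate(daily_views)
            if day >= skip_first_days and (views - mean) / stdev > threshold]

# Hypothetical daily view counts for one resource: a burst at publication, a quiet
# baseline, then one conspicuous spike on day 12.
views = [120, 80, 40, 20, 15, 12, 10, 9, 8, 11, 10, 9, 95, 12, 10, 8, 9, 10]
print(detect_peaks(views))  # [12]
```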

How this project could be extended

Do readers engage differently during a reading peak than during standard use of a resource (extended reading time, intense use of the platform and hyperlinks…)?
What would happen if the first day of publication of the resource were included in the analysis?
What would happen if unexpected uses were analyzed on groups of articles (gathered by topic, peak synchronicity or clustering of user experiences)?
What about including the analysis of the referrers (websites that point to the resource)?

Elinor, the institutional reader | Pierre-Carl Langlais | OPSCI

Presentation

The project aimed at using the IP addresses of OpenEdition readers in order to categorize readers by institution and institutional sector. The institutions (n=1287) were identified via WhoIs, more or less by hand. The project detected a certain tendency towards homophily (e.g. ENS readers read a lot of publications issued by the ENS). We also looked specifically at usage by banks.
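
As an illustration of the categorization step, here is a minimal sketch that matches reader IPs against a hand-curated table of institutional ranges; the ranges and labels below are invented placeholders, whereas the actual Elinor mapping was built from WhoIs lookups:

```python
import ipaddress

# Hypothetical table: institution name -> registered IP range (both made up for illustration).
INSTITUTION_RANGES = {
    "Example university": ipaddress.ip_network("192.0.2.0/24"),
    "Example bank": ipaddress.ip_network("198.51.100.0/24"),
}

def categorize(ip_string):
    """Return the institution whose registered range contains this IP, or None if unknown."""
    ip = ipaddress.ip_address(ip_string)
    for name, network in INSTITUTION_RANGES.items():
        if ip in network:
            return name
    return None

print(categorize("192.0.2.42"))   # "Example university"
print(categorize("203.0.113.7"))  # None: unknown or residential traffic
```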

How this project could be extended

A lot of users might have installed ad blockers: how can we measure their proportion?
How can we extrapolate globally available values from partial measurements?
What could it bring to analyze the device (desktop, mobile, tablet) use?
How about running interviews?

Resources
Pierre-Carl Langlais | Research Notebook | Usages Alpha
Joël Gombin | GitHub | Unexpected reader detection
Umberto | Online tool for OpenEdition peak uses analysis
Matomo Documentation
Pierre-Carl Langlais | Draft | The unexpected reader analysis
Pierre-Carl Langlais | Draft | Matomo technicalities
Romain Deveaud | Rapport | Modalités d’accès au savoir ouvert
Joël Gombin | Pierre-Carl Langlais | Rapport | Usages Alpha

11:40 – 12:00: INIST-EZpaarse: what role does it play in the Couperin consortium? & presentation of the COUNTER format | Léo Félix & Thomas Porquet | INIST

Presentation

ezCOUNTER

ezCOUNTER enables French academic institutions to see (in ezMESURE) which publications (by journal and publisher) are accessed by the authorized users of their library’s online resources. It retrieves and aggregates the COUNTER usage reports provided by commercial publishers (e.g. Elsevier, Springer). It currently includes data from 100+ institutions out of a total of roughly 250 French institutions.

ezPAARSE

ezPAARSE performs the same type of analysis but is based on log data from the library proxy servers of around sixty institutions in France (out of ~100 institutions equipped with a proxy). 25 institutions have installed ezPAARSE and automatically contribute their data; 70 institutions have installed it and contribute their data manually. Approximately 100 institutions have the technical infrastructure to install ezPAARSE, out of the 250 higher education and research institutions in France.

ezMESURE

ezMESURE allows academic institutions to consult the usage statistics of their users based on COUNTER data, ezPAARSE data, or both.

Resources
Couperin Consortium
GitHub & Documentation | Ezpaarse
https://bibliomap.inist.fr/
Ezpaarse website

12:00 – 12:30 Presentation of the current data collection projects of the Open Science Barometer and plans for new data collections for expanded usage on a national scale

Presentation

The Baromètre de la Science Ouverte serves to equip the National Plan for Open Science team with quantitative analyses and indicators, keeping them as objective as possible. It has led to the development of several tools :

Affiliation Matcher: analyzes the affiliations of authors from all scientific publications worldwide (identified via Crossref, HAL, etc.) to identify publications associated with French researchers, providing the most comprehensive database of French publications to date (a naive sketch of this kind of matching follows this list).
Detection (through full-text analysis of articles) of reliance on datasets or software developed by the authors, and analysis to determine whether these data/software have been made “open.”
Breakdown and calculation of per-institution statistics, based on lists of publications provided by various French scientific institutions.
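The sketch below only illustrates the general idea of affiliation matching; the real Affiliation Matcher is considerably more sophisticated. The list of French markers is a made-up, incomplete assumption.

```python
import re

# Incomplete, illustrative list of markers suggesting a French affiliation.
FRENCH_MARKERS = [r"\bCNRS\b", r"\bINSERM\b", r"\bSorbonne\b", r"\bUniversit[ée] de \w+", r"\bFrance\b"]
PATTERN = re.compile("|".join(FRENCH_MARKERS), re.IGNORECASE)

def is_french_affiliation(affiliation: str) -> bool:
    """Flag an affiliation string as French when it mentions a known marker."""
    return bool(PATTERN.search(affiliation))

print(is_french_affiliation("Centre Internet et Société, CNRS, Paris, France"))  # True
```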

How this project could be extended

One could attempt to calculate the fraction of French publications using datasets or software developed by third parties (other than the authors of the publication) and shared as “open” (or not).

Resources
Baromètre National de la Science Ouverte
GitHub | GROBID tool
Documentation | GROBID documentation
Ministry of Higher Education | Open Data Portal
Ministry of Higher Education | GitHub | Open Science Barometer

12:30 – 13:30 Lunch break


13:30 – 15:30 Parallel workshops

Workshop #1 : hard reality

What kind of data do we want to collect? Front or back end collection?
How do we detect bots? How do we manage ad blockers?
If we use logs that have been pre-processed by ezPAARSE, there won’t be issues with ad blockers, and the only bots left to sort out will be those that have not already been flagged via their User-Agent (using DataDome, the same technology deployed on CAIRN).
Using the Matomo plug-in directly, however, might make it easier to deploy the solution on several platforms (for genericity).
If the logs pass through a university proxy, it could also be possible to follow the same user across several platforms.
Wouldn’t it be simpler to identify the origin of the IPs at a large scale and refine the analysis from there?
How do we identify IP ranges? We could use a tool called “The IP registry” (here and here); a minimal classification sketch follows this list.
What data volume do we have? Raw data is : 37 GB of index (4 GB compressed). Journals : 27 GB ; Books : 10 GB ; Search : 300 MB. But it could be much lighter if we only extract the data we really need for the analysis.
Do we have enough server power? How much RAM do we need?
What period of time do we choose? Probably 2022.
It is important to keep in mind the need for interoperability of our tools across the three platforms (OpenEdition, HAL, RechercheDataGouv).
The Matomo tracker could be configured to collect richer data (but it would then only enrich data collected after the new configuration).
How relevant would it be to complement this with an online survey?
What GDPR-related aspects should we take into account? If we compile a list of IP ranges of public organizations, will there be confidentiality issues, and how aggregated should the data be?
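A minimal sketch of the IP-range classification mentioned above, assuming a hand-maintained mapping of institutions to CIDR blocks (the ranges shown are RFC 5737 documentation placeholders, not real assignments):

```python
import ipaddress

# Placeholder mapping: institution name -> known network range (to be replaced by real data).
INSTITUTION_RANGES = {
    "example-university": ipaddress.ip_network("192.0.2.0/24"),
    "example-library": ipaddress.ip_network("198.51.100.0/24"),
}

def classify_ip(ip: str) -> str:
    """Return the institution whose range contains the IP, or 'unclassified'."""
    addr = ipaddress.ip_address(ip)
    for name, net in INSTITUTION_RANGES.items():
        if addr in net:
            return name
    return "unclassified"

print(classify_ip("192.0.2.42"))  # "example-university"
```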

Workshop #2 : dream

The Umberto project helped analyse and forecast peaks of access to open science (OS) platforms.
We could compare HTML access vs. paid ePUB/PDF access (focusing the analysis on freemium models).
Quantify and qualify non-academic uses of OS (especially economic uses).
How about user journeys and recommendations? How effective is the UX at keeping users on the platforms?
The main beneficiaries of these analyses would be OS researchers and OS policy makers, as well as publishers/editors.

Now how do we explore data logs containing information on the use of individual publications? In the logs, it is possible to look at : the temporal profile, the referrers (differentiating between referrers internal to OpenEdition and external referrers, i.e. people coming from elsewhere on the web), geographical origin, institutional origin, device, access type (open, subscription-based, freemium), bots vs. humans, and the other documents used in the course of the same session.

In the continuation of the Umberto project, the detection of atypical uses could be implemented in a live or quasi-instantaneous monitoring tool able to report atypical time series, referrers, institutional origins (e.g. a resource with a high ratio of non-academic vs. academic readers), or bot / non-bot ratios. The usage data could be aggregated along different criteria : journal, journal issue, journal section, book collection, book, document type (article, report, book, chapter, appendix…), language, standardized or author-provided tags, access type (open vs. closed, HTML vs. PDF vs. ePUB), referrer (we could look at all publications accessed from a given referrer), geographical origin, institutional origin.
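As an example of one such aggregation (a sketch only, with hypothetical column names), the non-academic vs. academic ratio per resource could be computed as follows:

```python
import pandas as pd

def non_academic_ratio(logs: pd.DataFrame) -> pd.Series:
    """logs: one row per visit, with columns resource_id and institutional_sector (assumed schema).
    Returns, for each resource, the share of visits coming from non-academic origins."""
    is_non_academic = logs["institutional_sector"].ne("academic")
    return is_non_academic.groupby(logs["resource_id"]).mean().sort_values(ascending=False)
```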

We could also continue exploring the internal connection networks within the OpenEdition platform, as done by Romain Deveaud with his work on Markov chains : how generic hyperlinks / DOIs point to OpenEdition contents, and how internal OpenEdition referrers point at each other inside OpenEdition. This could help monitor the impact of OpenEdition’s editorial decisions and produce recommendations for visitors, either hand-curated or fully automatic. It would support a better editorial strategy and give a good overall view of the societal impact of OS publications.
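A rough sketch of the kind of Markov-chain view mentioned above (not Romain Deveaud’s implementation): estimate page-to-page transition probabilities from ordered per-session page sequences. The page identifiers in the usage example are invented.

```python
from collections import Counter, defaultdict

def transition_probabilities(sessions: list[list[str]]) -> dict[str, dict[str, float]]:
    """Estimate P(next page | current page) from ordered per-session page sequences."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for pages in sessions:
        for current, nxt in zip(pages, pages[1:]):
            counts[current][nxt] += 1
    return {
        page: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for page, nexts in counts.items()
    }

sessions = [["journal-a/123", "journal-a/456"], ["journal-a/123", "book-b/7"]]
print(transition_probabilities(sessions))  # {"journal-a/123": {"journal-a/456": 0.5, "book-b/7": 0.5}}
```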

Merging dream and reality – a first action plan proposal

First phase : crossing IPs, referrers and resources

Main infos
  • Start : 2023
  • Platforms : OpenEdition, HAL
  • Dataset range : 1 month of log files
  • Questions : Users, resources, referrers. Connecting referrers to users and resources. What tools should we use, what barriers do we face, how should we classify each type of information, and what research hypotheses do we want to test?
Description

First, we have to classify and identify the IP addresses with tools such as ipinfo.io, with two challenges : private IPs are not identifiable, and a professional IP does not mean professional use. Second, we have to classify the referrers, with two challenges : the logs do not always store them, and even when they do, the information is sometimes not disclosed by the browser. Third, we have to analyze the resources that are being accessed, with two challenges : the authors have to be disambiguated through IdRef/ABES, and a typology of disciplines/concepts has to be either created ex nihilo by scanning the abstracts or reused from another source.
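A minimal sketch of the referrer-classification step (the second step above): split referrers into traffic internal to OpenEdition, external websites, and missing values. The list of OpenEdition domain suffixes is an assumption and would need to be completed.

```python
from typing import Optional
from urllib.parse import urlparse

# Assumed, incomplete list of domains considered "internal" to OpenEdition.
OPENEDITION_SUFFIXES = ("openedition.org", "hypotheses.org")

def classify_referrer(referrer: Optional[str]) -> str:
    """Return 'internal', 'external' or 'missing' for a referrer URL."""
    if not referrer:
        return "missing"
    host = urlparse(referrer).netloc.lower()
    if any(host == s or host.endswith("." + s) for s in OPENEDITION_SUFFIXES):
        return "internal"
    return "external"

print(classify_referrer("https://www.google.com/search"))     # "external"
print(classify_referrer("https://journals.openedition.org"))  # "internal"
```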

Second phase : user journeys and user experience

This phase draws direct inspiration from the above-mentioned works by Pierre-Carl Langlais and Joël Gombin. Those two studies, which I have roughly summarized here [FR], have, aside from their analysis of the link between user IP, referrer and metadata, also developed methods to analyse both longitudinal resource use and user journeys on the OpenEdition website. They focus on visiting sessions, looking at how a given user moves from page to page, how much time they spend on each page, and whether this journey correlates with the referrer that had them land on an OpenEdition resource.

Joël Gombin, Pierre-Carl Langlais. Usages alpha. Étude préliminaire à orientation méthodologique. Final report to ISTEX/ANR funded project Usages Alpha (ANR-10-IDEX-0004-02).
Romain Deveaud, Modalités d’accès au savoir ouvert sur les plateformes d’OpenEdition. Final report to ISTEX/ANR funded project Usages Alpha (ANR-10-IDEX-0004-02). 
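The journey analyses described above rest on some form of sessionisation. The sketch below uses a common heuristic (a new session starts after 30 minutes of inactivity), which is not necessarily the rule used in the cited studies:

```python
from datetime import datetime, timedelta

def split_sessions(timestamps: list[datetime], gap: timedelta = timedelta(minutes=30)) -> list[list[datetime]]:
    """Group one visitor's hit timestamps into sessions separated by `gap` of inactivity."""
    sessions: list[list[datetime]] = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][-1] <= gap:
            sessions[-1].append(ts)  # same session: hit within the inactivity window
        else:
            sessions.append([ts])    # new session
    return sessions
```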

Unfortunately, given the strongly econometrics-influenced approach of PathOS, which relies on a strictly counterfactual conception of causality and leaves aside exploratory approaches with a broader, processual conception of causality, we will probably not have the opportunity to pursue these endeavours yet.


People

Mélanie Dulong de Rosnay
Tommaso Venturini
Simon Apartis
Simon Dumas Primbault
Elodie Faath
Marie Pellen
Léo Félix
Thomas Porquet
Benoit Simard
Paul Girard
Eric Jeangirard
Pierre-Carl Langlais