Training data is key to the development of foundation AI models – particularly generative AI models and Large Language Models (LLMs). And a significant portion of this training data comes from the open web.
But despite being lauded as a digital commons, the open web is not open for all. It is difficult to ‘see’ data flows when data and content from the open web are re-used to create training datasets, and as these training datasets then move through the various stages of AI development. Legal and policy initiatives for data governance in the AI context often understand data flows as “traceable, stable and contained”, when in reality data re-use is an “inherently entangled phenomenon”.
Over the course of 2024, Ramya Chandrasekhar from CIS (as part of the ODECO project) collaborated with Inno3 and the Open Knowledge Foundation to investigate the legal entanglements of re-use that arise when data and content from the open web are used to train foundation AI models. Based on conversations with AI researchers and practitioners, an online workshop, and legal analysis of a repository of 41 legal disputes relating to copyright and data protection, the research report highlights tensions between legal imaginations of data flows and the computational processes involved in training foundation models.
Three takeaways from the research report:

A three-dimensional framework for data openness of training datasets.
While techno-legal openness is necessary, this report argues that the political economy of data re-use also necessitates legal strategies that, on the one hand, impose certain limits on data extractivism by well-resourced actors such as Big Tech and, on the other, enable community data sovereignty.

A repository of 41 ongoing legal controversies relating to copyright and data protection in the context of training foundation AI models.
The report contains this repository, as well as a detailed analysis of how these legal controversies either impede or advance the three-dimensional data openness of training datasets.

A critical analysis of existing open and permissive licenses, as well as certain alternative licensing frameworks for training datasets.
While these licensing frameworks impose more obligations on re-users and call for more collective thinking on interoperability, they are nonetheless necessary, together with other legal and institutional changes, for the creation of healthy digital and data commons and for realising the original promise of the open web as open for all.
Read the full report on HAL.
Or download the full report here: