
The Open Web As AI’s New Playground: Legal Frictions

Reflections from a virtual workshop conducted by Ramya Chandrasekhar

As foundation AI models (such as large language models and generative AI) become more commonplace, it is important to study the data sources used to build and train these data-intensive technologies. The quality of training data is a significant determinant of the quality and accuracy of these models’ outputs. One important source of training data is the open web – which includes data in the digital public domain, openly licensed data and content, as well as data that is publicly accessible but may be subject to certain legal rights (arising from contracts, copyright and data protection).

Despite its name, data and content on the ‘open’ web are not open for full and free re-use by all. Data and content from the open web are re-used by big technology companies to train proprietary AI models, with little to no value flowing back from these actors into the open web ecosystem or towards the maintenance of open data resources. Cultural resources made available via the open web are also being appropriated by these actors, exacerbating digital colonialism. At the same time, content creators are increasingly adopting restrictive interpretations of intellectual property rights to curb techniques like web scraping. This can help limit extraction from the open web by large technology companies, but it also adversely impacts other stakeholders, such as researchers who rely on web-scraped data. In other words, the ‘openness’ of the open web faces new risks and challenges.

The Centre for Internet and Society of the CNRS (CIS-CNRS), with the support of Inno3 (an ODECO Partner Organisation) and the Open Knowledge Foundation, organised a three-hour virtual workshop on November 23, 2024. The objective was to bring together practitioners, researchers and civil society organisations working on AI, open data and open source to discuss two questions: (i) what legal frictions are involved in the re-use of data from the open web to train foundation AI models, and (ii) what data governance strategies (including legal, technical and social measures) can help address these frictions and enable responsible re-use of open web data? This workshop builds on research undertaken by Ramya Chandrasekhar (ESR 4) during her ODECO professional secondment with Inno3 in 2024.

The workshop saw participation from individuals working in South Africa, the USA, Canada, France, Germany, Singapore, Poland, Italy and the United Kingdom. Some of these individuals are also involved in developing new licensing frameworks for AI training datasets. The workshop was highly interactive, with discussions taking place in both break-out rooms and plenary sessions. The break-out rooms were facilitated by Ramya Chandrasekhar of CIS-CNRS and Celya Gruson-Daniel of Inno3.

Participants identified many types of legal frictions – relating to copyright, data protection, website terms of use, compliance with open licenses, the proliferation of open licenses, and training data transparency. They used a Miro board to identify the specific stages in the AI lifecycle at which each legal friction manifests.

Figure 1: Miro board of a break-out room facilitated by Ramya Chandrasekhar, displaying different legal frictions and the point at which these legal frictions manifest in the AI lifecycle


The workshop also yielded several illustrations of data governance initiatives – ranging from new licensing frameworks and new institutional structures for data re-use to new technical measures, such as web protocols for registering opt-outs from web scraping (see the sketch below).

Figure 2: Miro board of a plenary discussion, where participants ranked legal frictions by urgency and discussed ongoing data governance initiatives
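
To make the last of these concrete: one widely deployed mechanism for registering scraping opt-outs is the robots.txt convention, which several AI crawlers now state they honour. The following is a minimal sketch, using only Python’s standard library, of how a compliant crawler would check such an opt-out – ‘ResearchBot’ is a placeholder user agent, while GPTBot is the crawler token documented by OpenAI.

    from urllib.robotparser import RobotFileParser

    # A robots.txt that a site operator might publish to opt out of a
    # named AI-training crawler while leaving the site open to others.
    robots_lines = [
        "User-agent: GPTBot",
        "Disallow: /",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    parser = RobotFileParser()
    parser.parse(robots_lines)

    # A compliant crawler checks before fetching. Note that compliance
    # with robots.txt is voluntary rather than legally mandated.
    print(parser.can_fetch("GPTBot", "https://example.org/article"))       # False
    print(parser.can_fetch("ResearchBot", "https://example.org/article"))  # True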

The inputs from the workshop will feed into a public report authored by Ramya Chandrasekhar. The report will be co-published by CIS-CNRS, Open Knowledge Foundation and Inno3. Stay tuned for more!


Cross-posted from the ODECO blog.

Open data and commons literature

Some ruminations on why I turn to commons literature to answer the questions of ODECO.

Commons can yield productive insights from an ecosystemic perspective

Central to ODECO is an ecosystemic perspective. ODECO starts from the premise that the generation and use of open data is not linear, but depends on multiple intersecting forces of competition and collaboration between various actors. In this regard, ODECO identifies eight actor groups and studies the roles they play – non-specialised users, journalists, students, local government, regional/national government, NGOs, companies and open data intermediaries.

Literature and research from the digital commons lend themselves well to this ecosystemic approach. Digital commons research has yielded useful empirical research methods as well as a rich analytical framework to ‘see’ the different technical, social and institutional components of a complex resource ecosystem.

The realization of value from open data is a collective action problem

The other central aspect of ODECO is the question of how to realise value from open data.

The realization of value from open data is a common goal of all actor groups in an open data ecosystem. The challenge, however, lies in establishing a common understanding of what ‘value’ is, which renders value realization a shared but ill-defined goal. Collective action theory allows us to study why actors attribute different meanings to a common goal, and to propose incentives for heterogeneous actors to act in the interest of long-term sustainability rather than short-term individual gains.

Further, the inherent nature of open data as a ‘constructed’ good is also important. It is useful here to compare open data to other information infrastructures. The “good” in question is not open data per se, but “the functionalities that it affords, and the willingness (interests) and capabilities (resources) of the users to take advantage of those affordances” (Constantinides and Barrett, 2015). This means that the generation, use and maintenance of open data depend sociotechnically on the “heterogeneity of interests and resources of a distributed user base” (Id.).

Commons literature is again well-suited to studying collective action problems, and to proposing institutional mechanisms for resource management that are centralized in neither the state nor market actors. Research on the decentralised governance of open-access commons (particularly commons-based peer production) can be especially insightful in this regard.

Commons-based governance recognizes the relationality of open data

Here, a quote from Purtova and van Maanen (2024) resonates strongly with me –

“The core strength of the commons literature in our view lies in its problem analysis. What distinguishes the commons from other classifications of data as an economic good is the ecological thinking that acknowledges the complexity of the data-related problems and draws attention to the broader societal, technical and economic context of production and use of data in connection to broader societal goals. Data commons push us to think about data-related problems and solutions in terms that are beyond data. This feature is observable to some extent in all versions of the data commons we reviewed but is especially apparent in Ostrom-inspired analyses reviewed under ‘Information – or data commons for broader societal goals’ which employ ecological thinking about resources to be governed and problems to be solved.”

This is crucial for the study of open data not as an end in itself, but as a means to something else – a more just, open and equitable society. The focus should not only be the generation of more open data or the turn towards more open data-driven decision-making. Instead, the focus should be on who benefits from open data, who is left out, and who decides.

Bibliography

Constantinides, P., & Barrett, M. (2015). Information Infrastructure Development and Governance as Collective Action. Information Systems Research, 26(1), 40–56. doi:10.1287/isre.2014.0542

Purtova, N., & van Maanen, G. (2024). Data as an economic good, data as a commons, and data governance. Law, Innovation and Technology, 16(1), 1–42. doi:10.1080/17579961.2023.2265270


High-value datasets – What can the EU learn from India?

In the European Union, the concept of “high-value datasets” was introduced by way of the Open Data Directive.

  • Recital 66 of the Open Data Directive states that certain open government datasets are associated with “important socio-economic benefits”.
  • The definition of “high-value datasets” in Article 2(10) further elaborates by stating that these datasets are “associated with important benefits for society, the environment and the economy, in particular because of their suitability for the creation of value-added services, applications and new, high-quality and decent jobs, and of the number of potential beneficiaries of the value-added services and applications based on those datasets.”

Based on these attributes, the Open Data Directive requires public bodies to make these datasets available for re-use free of charge (in most cases), in machine-readable formats, as bulk downloads, and through APIs.
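
In practice, these obligations mean that a re-user can retrieve a high-value dataset programmatically rather than scraping web pages. The sketch below shows what this looks like; the endpoint URL is hypothetical, standing in for the documented APIs exposed by national portals and data.europa.eu.

    import requests  # widely used third-party HTTP client

    # Hypothetical endpoint for a high-value dataset published by a public body.
    API_URL = "https://data.example.gov/api/v1/datasets/air-quality/records"

    response = requests.get(API_URL, params={"format": "json"}, timeout=30)
    response.raise_for_status()

    # Machine-readable means structured data (e.g. JSON or CSV), not PDFs.
    records = response.json()
    print(f"Retrieved {len(records)} records via API, free of charge")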

In December 2022, the European Commission adopted an implementing regulation specifying high-value datasets across six sectors – geospatial; earth observation and environment; meteorological; statistics; companies and company ownership; and mobility.

Tim Davies has written about this “strong economic frame” adopted in the definition of high-value datasets. He argues that the benefits of open data cannot always be quantified by adding up the revenue of firms that use open data; value is realized in other ways too. He identifies several of these: fostering risk reduction, increasing internal efficiency and innovation, enabling the exercise of rights, realizing value through network effects, and redistributing surplus value. As a result, he notes that we need new calculative logics to capture these types of value realization.

The view from India

In India, the equivalent of the Open Data Directive is an executive policy known as the National Data Sharing and Accessibility Policy, 2012. This policy does not mention high-value datasets.

However, the vocabulary of high-value datasets was introduced into Indian law and policymaking from 2020 onwards. For instance, an expert committee was set up in 2019 to recommend regulatory frameworks for non-government data. This committee released a report in December 2020, which introduced the term high-value datasets. Notably, the committee defines high-value datasets as datasets that are “beneficial to the community at large and shared as a public good.” The report provides some illustrations, which of course include datasets that have the potential to create more jobs or enable more innovation. But the report also identifies as high-value datasets those relevant for citizen engagement, poverty alleviation, financial inclusion, skill development, and diversity and inclusion. A later report released by NASSCOM – the National Association of Software and Service Companies in India – echoes a similarly broad understanding of high-value datasets.

This illustrates a more balanced approach to high-value datasets in India – one that combines the economic value of open datasets with their social value.

Screenshot of India’s data portal showing high-value datasets as of September 11, 2024, 12:09 PM

At present, India’s open data portal hosts more than 15,000 high-value datasets. These include datasets on tuberculosis treatment outcomes, the expenditure and progress of rural road construction projects, public spending on welfare schemes, and the tax revenue of the federal government, to name a few. This illustrates a different, more socially conscious approach to implementing high-value datasets. And in doing so, it offers a knowledge transfer opportunity for EU policymakers.

(Cross-posted from https://odeco-research.eu/?p=4126)


Open data licenses and use restrictions

One of the defining characteristics of open data is that it is free to use and re-use. Legal claims of copyright or the sui generis database right over datasets make re-use difficult. Open data licenses allow dataset creators to grant upfront authorisation for the re-use of their datasets. This helps contribute more open data to the ecosystem.

There are many types of open data licenses today. Some are created by government bodies and applied to public sector information. An example is France’s Licence Ouverte, created by Etalab (the French department that manages France’s national open data portal). Some open data licenses are created by non-profit or advocacy organisations. The Community Data License Agreement managed by the Linux Foundation is one example; the Open Data Commons licenses managed by the Open Knowledge Foundation are another. And then there are licenses originally devised for creative content but which apply to datasets as well, such as the Creative Commons licenses (with the exception of the non-commercial, NC, variants).
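
To illustrate how such a license is attached to a dataset in machine-readable form, here is a minimal sketch that writes a Frictionless Data datapackage.json descriptor declaring an Open Data Commons license. The dataset name and resource path are placeholders.

    import json

    # Minimal Frictionless Data descriptor. The 'licenses' field uses an
    # Open Definition license identifier; name and paths are placeholders.
    descriptor = {
        "name": "example-air-quality",
        "licenses": [
            {
                "name": "ODbL-1.0",
                "path": "https://opendatacommons.org/licenses/odbl/1-0/",
                "title": "Open Data Commons Open Database License 1.0",
            }
        ],
        "resources": [{"name": "readings", "path": "data/readings.csv"}],
    }

    with open("datapackage.json", "w") as f:
        json.dump(descriptor, f, indent=2)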

The history of open data licenses is intimately connected to the open source and open science movements. Advocates of Free and Open Source Software (FOSS) like Richard Stallman firmly believed in four freedoms to be protected through free and open source licenses – the freedom to run a program for any purpose, the freedom to study the program (by being provided access to the source code), the freedom to redistribute copies, and the freedom to distribute copies of modified versions. These ‘freedoms’ translated to open data licenses as well.

From this perspective, one point of tension in open data licenses is the concept of use restrictions. For example, a CC BY-NC license does not allow re-use of the licensed material for commercial purposes.

With the increasing re-use of copyrighted material as training data for large language models, a new licensing framework known as Responsible AI Licensing (RAIL) has emerged. RAIL licenses impose certain ethical use restrictions on datasets, software and models in the AI context. These include prohibitions on using AI models to generate personal data without authorisation, to harm minors, to engage in fully automated decision-making that adversely affects an individual’s legal rights, or to exploit the vulnerabilities of a particular group of people.

Most RAIL licenses were developed for software and AI models. The AI2 ImpACT licenses developed by the Allen Institute for AI extend this ethical licensing framework to training datasets as well. In fact, the Allen Institute released Dolma, a training dataset containing 3 trillion tokens of web data, under an AI2 ImpACT license in 2023 (but changed the license to ODC-BY in 2024).

Strict adherents of the open data movement would argue that use restrictions detract from the very essence of openness, as they limit a particular type of re-use. On the other hand, certain uses of open data can have harmful effects on individuals and communities.

As artists around the world argue in legal claims against GenAI companies, using creative content licensed under an open license to build a GenAI model that produces outputs very similar to those artists’ works raises both economic and ethical challenges. This has led some proponents of open data and open source to rally around Responsible AI Licenses, which contain some ethical use restrictions. So where do we draw the line? Should open data licenses be revamped to include some kinds of use restrictions? Or is this against the fundamental idea of openness?

One set of reflections on these questions can emerge from the history of the open data movement. There is growing scholarship on how transparency was understood by the open data movement in its early days as part of open government initiatives, and how this understanding has changed over time. This scholarship also engages with liberal and neoliberal conceptions of transparency. Engaging with this historical literature can help us situate open data licenses within the specific context in which they were created, and then evaluate whether this context has changed and therefore whether the licenses need to be revamped. For instance, perhaps open data licenses should be grounded in an understanding of transparency as in/visibility, or of transparency as observability, or, as I explore in my forthcoming research, of openness as processes of selective revealing.

(Cross-posted from https://odeco-research.eu/?p=4064)