Study on the terms of use of generative AI

The CREATe centre at the University of Glasgow has released a new study (white paper) examining the terms of use of generative artificial intelligence tools, deposited on 24 May in the Zenodo archive under the title Private Ordering and Generative AI: What Can We Learn From Model Terms and Conditions? Here is the original abstract:

Large or “foundation” models, sometimes also described as General Purpose Artificial Intelligence (GPAI), are now being widely used to generate not just text and images but also video, games, music and code from prompts or other inputs. Although this “generative AI” revolution is clearly driving new opportunities for innovation and creativity, it is also enabling easy and rapid dissemination of harmful speech such as deepfakes, hate speech and disinformation, as well as potentially infringing existing laws such as copyright and privacy. Much attention has been paid recently to how we can draft bespoke legislation to control these risks and harms, notably in the EU, US and China, as well as considering how existing laws can be tweaked or supplemented. However private ordering by generative AI providers, via user contracts, licenses, privacy policies and more fuzzy materials such as acceptable use guidelines or “principles”, has so far attracted less attention. Yet across the globe, and pending the coming into force of new rules in a number of countries, T&C may be the most pertinent form of governance out there.

Drawing on the extensive history of study of the terms and conditions (T&C) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January-March 2023, in which T&C were mapped across a representative sample of generative AI providers as well as some downstream deployers. Our study looked at providers of multiple modes of output (text, image, etc), small and large sizes, and varying countries of origin. Although the study looked at terms relating to a wide range of issues including content restrictions and moderation, dispute resolution and consumer liability, the focus here is on copyright and data protection. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries similarly to search and social media platforms, but without the governance increasingly imposed on these actors, and in contradistinction to their function as content generators rather than mere hosts for third party content. This study concludes that in light of these findings, new laws being drafted to rein in the power of “big tech” must be reconsidered carefully, if the imbalance of power between users and platforms in the social media era, only now being combatted, is not to be repeated via the private ordering of the providers of generative AI.

Source: CREATe (UK)

The roughly thirty-page document provides two summary tables covering the copyright and privacy/personal-data provisions of the terms of use of thirteen generative AI systems. Here are the fields of those tables:

Analysis of copyright clauses

  • Who owns the copyright over the outputs and (if any indication is found) over the inputs?
  • If a copyright infringement is committed, who is responsible?
  • Is there any procedure in force to avoid or at least minimise the risk of copyright infringement?

Analysis of privacy policies

  • Mention CCPA rights (California), EU or UK GDPR?
  • Mention rights other than erasure explicitly, and do they give a form to claim your rights?
  • Offer an email address to claim DP rights?

I greatly appreciate the efforts of this British legal research centre.

This content was updated on 2024-06-10 at 9:20 am.