Methodology
Terms of (Ab)Use: An Analysis of GenAI Services
The goal of our study was to analyse the terms that GenAI services require consumers to accept, and to assess whether they are transparent and fair to users. To do this, we developed a codebook and used it to extract information from the terms. We then analysed and discussed the findings to identify implications for consumer experiences and the law.
On this page you can find our scoping methodology, list of analysed documents, and our annotation codebook.
Scope
For this study, we limited the scope to consumer uses of GenAI services, and excluded enterprise services and features. We chose two providers for each of the following criteria: established big-tech actors -- Google (Gemini) and Microsoft (Copilot); emergent non-US providers -- Mistral (Le Chat, France) and DeepSeek (China); and newly prominent providers -- OpenAI (ChatGPT) and Anthropic (Claude).
To identify relevant terms, we used the links provided in the web interface of each service. Where the linked terms document contained hyperlinks to other documents, we included those documents only if they were clearly stated to be `part of the terms'. We specifically did not consider linked documents that were described as informational or as providing guidance, as our goal was to study the terms as contractually binding provisions. Following this rule, we included `privacy policies', `acceptable use' policies, and similar documents only when they were referenced as being part of the terms.
Analysed Terms
-
Claude (Anthropic) https://claude.ai/
- Consumer Terms https://www.anthropic.com/legal/consumer-terms (dated 8 October 2025)
- Privacy Policy https://www.anthropic.com/legal/privacy
- Usage Policy https://www.anthropic.com/legal/aup
-
DeepSeek https://chat.deepseek.com
- Terms of Use https://cdn.deepseek.com/policies/en-US/deepseek-terms-of-use.html (dated 28 April 2025)
- Privacy Policy https://cdn.deepseek.com/policies/en-US/deepseek-privacy-policy.html
- Open Platform Terms of Service https://cdn.deepseek.com/policies/en-US/deepseek-open-platform-terms-of-service.html
-
Gemini (Google) https://gemini.google.com
- Policies and Terms https://policies.google.com/ which provides a link to the terms
- Terms of Service https://policies.google.com/terms (dated 22 May 2024)
- Generative AI Prohibited Use Policy https://policies.google.com/terms/generative-ai/use-policy
- Service-specific additional terms and policies https://policies.google.com/terms/service-specific which contains an entry for ‘Gemini Apps’ and provides the same links to the Terms and the Generative AI Prohibited Use Policy as above
-
Copilot (Microsoft) https://www.copilot.com/
- ‘terms’ link https://www.bing.com/new/termsofuse?utm_source=copilot.com redirects to the landing page https://www.microsoft.com/en-ie/microsoft-copilot/for-individuals?form=MA13YT. On this page, the ‘Copilot Terms’ link https://aka.ms/consumercopilotterms redirects back to the same landing page. Another link, ‘Terms of Use’ https://go.microsoft.com/fwlink/?LinkID=206977, redirects to the terms at https://www.microsoft.com/en-us/legal/terms-of-use (dated 2 February 2022)
- Privacy Statement http://go.microsoft.com/fwlink/?linkid=248681 which redirects to https://www.microsoft.com/en-gb/privacy/privacystatement
- Services Agreement https://www.microsoft.com/en-us/servicesagreement
- Bing Image Creator and Bing Video Creator Terms of Use https://www.bing.com/new/termsofuseimagecreator
- A web search (using Bing) led to ‘Copilot Terms of Use’ https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/termsofuse which were not linked from any of the above pages
-
Le Chat (Mistral) https://chat.mistral.ai/
- Legal terms and conditions https://mistral.ai/terms/#terms-of-service which redirects to https://legal.mistral.ai/terms (dated 27 May 2025)
-
ChatGPT (OpenAI) https://chatgpt.com/
- Terms & Policies https://openai.com/policies/ which provides a link to the Europe Terms of Use https://openai.com/policies/terms-of-use/ (dated 29 April 2025)
- Service Terms https://openai.com/policies/service-terms/
- Usage Policies https://openai.com/policies/usage-policies/
- Sharing & Publication Policy https://openai.com/policies/sharing-publication-policy/
- Service Credit Terms https://openai.com/policies/service-credit-terms/
- Transparency & Content Moderation https://openai.com/transparency-and-content-moderation/
- How your data is used to improve model performance https://openai.com/policies/how-your-data-is-used-to-improve-model-performance/
- Privacy Policy https://openai.com/policies/eu-privacy-policy/ was linked but not included for analysis as the terms stated: “Although it does not form part of these Terms...”
Codebook
Each codebook field has an assigned ID; a Field/Question; two columns, ‘Info on main page’ and ‘Info on secondary page’, to document whether the information was found in the main terms document or in other (secondary) linked documents; a Notes column for the annotator to record observations; and an Example column providing indicative phrases that may be used in terms to find this information. For most questions, the two info columns contain an enumerated list of options from which annotators select the most applicable option if present, using the notes column to record additional information.
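As an illustration, the column layout described above can be sketched as a simple record structure. This is only a sketch of the annotation workflow, not tooling we used; all identifiers (the class name, attribute names, and the example ID "IN-07") are our own and purely illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodebookEntry:
    """One codebook field, plus its annotation for a single analysed service.

    Attribute names are illustrative; they mirror the columns described
    in the text: ID, Field/Question, enumerated options, Example phrases,
    the two 'Info' columns, and the annotator's Notes.
    """
    entry_id: str                              # assigned ID (hypothetical)
    question: str                              # Field/Question
    options: list[str]                         # enumerated options, where applicable
    example: str = ""                          # indicative phrases to look for
    info_main_page: Optional[str] = None       # option found in the main terms document
    info_secondary_page: Optional[str] = None  # option found in linked (secondary) documents
    notes: str = ""                            # annotator observations

# Example annotation for the question on training use of inputs
entry = CodebookEntry(
    entry_id="IN-07",  # hypothetical ID
    question=(
        "How do the terms describe the use of User input for further "
        "training or refinement of the Model by the service provider?"
    ),
    options=[
        "Inputs will be used for training/refinement",
        "Inputs will not be used for training/refinement",
        "Not mentioned",
    ],
)
entry.info_main_page = entry.options[0]        # most applicable option selected
entry.notes = "Stated in the section on content use"
```

Annotators would fill one such record per codebook field and per service, selecting the most applicable option for each info column and recording anything else in the notes.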
-
Metadata
- Timestamp of annotation
- Annotator identifier
- Target
- Link to archived terms (folder)
- Additional Links found during annotation
-
Terms structure
- Single page / monolithic
- Paginated / multiple pages
- Single page with links to other non-terms pages
-
Same terms used for free or paid services? (Sample: If you are a free user…; If you have an account with us)
- Terms apply for free services
- Terms apply for paid services
- Same terms apply for free and paid services
-
Service Provision
-
Is what constitutes the ‘Service’ described with a specific name? Sample: These terms relate to ChatGPT; the Service is defined as xyz
- Yes, the Service is described + text field
- No, the Service is not described
-
If the Service is described, are specific features or functionalities described as part of the Service? Sample: By service we mean the ability to use the model; Features such as image generation
- Yes, features or functionalities are described + text field
- No, features or functionalities are not described
-
Do the terms mention how the User will be informed about, and can control, changes to the Service? Sample: We may change the Service at our discretion without prior notice or warning; Where we change our model, we will allow users to opt in to the model
- Yes, the terms explicitly mention that the Service will not change unless the User enables it
- No, the terms explicitly mention that the Service will change and the User will be given a notice period
- No, the terms explicitly mention that the Service will change but do not state how the User will be informed about this
- No, the terms do not state whether the Service will change
-
Do the terms describe the Service in terms of specific quality? Sample: We have a 99% uptime; Our models are guaranteed to provide correct output 99% of the time. Note: Mentioning that the model hallucinates is NOT a quality metric unless it is accompanied by information on when/how much
- Yes, quality metrics are described regarding speed (e.g. speed of responses)
- Yes, quality metrics are described regarding accuracy (e.g. accuracy of responses)
- Yes, quality metrics are described regarding availability (e.g. uptime)
- Yes, other quality metrics are provided + text
- No, quality metrics are not provided
-
If quality metrics are described, do the terms provide any assurances, claims, or guarantees regarding them? Sample: We guarantee xyz with %; We will provide xyz with %
- Yes, assurances or guarantees are mentioned
- No, assurances or guarantees are not mentioned
-
Service Usage - Inputs
- Are inputs and outputs treated as a combined concept?
- Are inputs and outputs declared as personal data?
-
What forms or modalities can the User provide their input as? Sample: You can provide input through the service or by uploading a picture
- Text
- Images
- Videos
- Audio
- Not mentioned
-
Through what sources can the User provide their input?
- Directly entering it into the Service (e.g. typing, uploading)
- Through the use of their Device (e.g. Camera, GPS)
- Third Party (e.g. Siri)
- Not mentioned
-
Are there restrictions regarding what input is exclusively permitted? Sample: We only allow you to upload; We only accept input
- Yes, the terms mention exclusively what input is permitted + text
- “Everything is permitted” is explicitly mentioned
- Not mentioned
-
Are there restrictions regarding what input is prohibited? Sample: We prohibit content where; We do not accept input
- Yes, the terms mention prohibited input + text
- “No prohibitions” is explicitly mentioned
- Not mentioned
- Is reverse engineering explicitly prohibited?
-
Are there restrictions on the scope of input, such that the Service is described as intended only for that scope?
- Yes, specific input categories are described as the scope + text
- ‘Nothing is out of scope’ is explicitly mentioned
- Not mentioned
-
How do the terms describe the ownership and retention of rights regarding the provided User input?
- User retains ownership, and Service provider is granted rights only for the Service
- User retains ownership, and Service provider is granted rights for the Service as well as any other use
- Rights are transferred from the User to the Service Provider
- Not mentioned
-
How do the terms describe the use of User input for further training or refinement of the Model by the service provider?
- Inputs will be used for training/refinement
- Inputs will not be used for training/refinement
- Not mentioned
-
If User input will be used for further training or refinement of the Model, what options or controls does the User have?
- User has no controls
- User must opt-in and information is provided on how to do this
- User must opt-in but no information is provided on how to do this
- User must opt-out and information is provided on how to do this
- User must opt-out but no information is provided on how to do this
- Not mentioned
-
Will the User input be used for other purposes beyond training or refinement of the Model?
- Inputs will be used for analysis or measurements regarding the Service
- Inputs will be used for research and product development
- Other + text
- Not mentioned
-
Will the User input be shared with Third Parties?
- Yes
- No
- Not mentioned
-
If User input will be shared with Third Parties, are these third parties identified?
- Yes, the identities of Third Parties are provided
- Yes, the categories of Third Parties are provided
- Not mentioned
-
If User input will be shared with Third Parties, are the specific purposes for which it will be shared mentioned?
- Yes + text
- Not mentioned
-
If User input will be shared with Third Parties, will it be in a privacy-preserving form?
- Yes, and specific measures are provided + text
- Yes, but specific measures are not provided
- Not mentioned
-
When submitting input, what responsibilities are allocated to the User regarding the validity of inputs?
- Copyright violation does not occur
- Input meets safety standards and these are mentioned + text
- Input meets safety standards but these are not mentioned
- Other + text
- Not mentioned
-
If the input does not meet the validity requirements, is the resulting liability explicitly clarified?
- Yes, User assumes liability
- Yes, User and Service share liability
- Yes, Service assumes liability
- Not mentioned
-
Does the service provider use filtering or detection mechanisms over user input?
- Yes, and specific measures are provided + text
- Yes, but specific measures are not provided
- Not mentioned
-
What happens if the filtering/detection mechanism detects a violation or problem regarding the user input?
- User may lose access to the Service
- User assumes liability
- Other + text
- Not mentioned
-
Service Usage - Outputs
-
What forms or modalities will the output be provided as?
- Text
- Images
- Videos
- Audio
- Not mentioned
-
How can the User access the output?
- Directly through the Service (e.g. see it on the website)
- Third Party (e.g. Siri)
- Publish (e.g. public access via Service)
- Not mentioned
-
Are there restrictions regarding what output is exclusively permitted? Sample: We will only provide; We only generate output
- Yes, the terms mention exclusively what output is permitted + text
- “Everything is permitted” is explicitly mentioned
- Not mentioned
-
Are there restrictions regarding what output is prohibited? Sample: We prohibit content where; We do not produce output
- Yes, the terms mention prohibited output + text
- “No prohibitions” is explicitly mentioned
- Not mentioned
-
Are there restrictions on the scope of output, such that the Service is described as intended only for that scope?
- Yes, specific output categories are described as the scope + text
- ‘Nothing is out of scope’ is explicitly mentioned
- Not mentioned
-
How do the terms describe the ownership and retention of rights regarding the output?
- User retains ownership with no information about rights of Service Provider
- User retains ownership, and Service provider is granted rights for the Service as well as any other use
- Rights are retained by the Service Provider
- Not mentioned
-
How do the terms describe the use of output for further training or refinement of the Model by the Service Provider?
- Outputs will be used for training/refinement
- Outputs will not be used for training/refinement
- Not mentioned
-
If Output will be used for further training or refinement of the Model, what options or controls does the User have?
- User has no controls
- User must opt-in and information is provided on how to do this
- User must opt-in but no information is provided on how to do this
- User must opt-out and information is provided on how to do this
- User must opt-out but no information is provided on how to do this
- Not mentioned
-
Will the Output be used for other purposes beyond training or refinement of the Model?
- Outputs will be used for analysis or measurements regarding the Service
- Outputs will be used for research and product development
- Other + text
- Not mentioned
-
Will the Output be shared with Third Parties?
- Yes
- No
- Not mentioned
-
If Output will be shared with Third Parties, are these third parties identified?
- Yes, the identities of Third Parties are provided
- Yes, the categories of Third Parties are provided
- Not mentioned
-
If Output will be shared with Third Parties, are the specific purposes for which it will be shared mentioned?
- Yes + text
- Not mentioned
-
If Output will be shared with Third Parties, will it be in a privacy-preserving form?
- Yes, and specific measures are provided + text
- Yes, but specific measures are not provided
- Not mentioned
-
When generating output, what responsibilities are allocated to the User regarding the validity of outputs?
- Copyright violation does not occur
- Output meets safety standards and these are mentioned + text
- Output meets safety standards but these are not mentioned
- Other + text
- Not mentioned
-
If the output does not meet the validity requirements, is the resulting liability explicitly clarified?
- Yes, User assumes liability
- Yes, User and Service share liability
- Yes, Service assumes liability
- Not mentioned
-
Will the generated Output be subject to any form of filtering or detection?
- Yes and details are provided + text
- Yes but details are not provided
- Not mentioned
-
What happens if the filtering/detection mechanism detects a violation or problem regarding the outputs?
- User may lose access to the Service
- User assumes liability
- Other + text
- Not mentioned
-
For automated methods related to filtering and detection of validity, will the decision involve human oversight or confirmation, and will the user have the ability to request human review of the decision?
-
Are specific ‘risks’ or ‘issues’ identified regarding the output that would reduce its quality or cause detriment?
- Inaccuracy
- Bias
- Hallucination
- Other + text
- Not mentioned
-
Are there specific restrictions on the use of outputs for training or refinement of models outside of the Service?
- No restrictions
- User can reuse outputs for training or refinement
- User cannot reuse outputs for training or refinement
- Not mentioned
-
Do the terms inform or warn the User regarding specific harms that may arise from the use of the Service?
- Yes, regarding risks to safety (e.g. mental health)
- Yes, regarding risks of the Service (e.g. inaccurate, unreliable)
- Yes + text
- Not mentioned
-
Legal
-
Is the use of the Service restricted to a specific jurisdiction? E.g. EU/EEA
- Yes, to EU
- Yes, to USA
- Yes, to China
- No restrictions
- Other + text
- Not mentioned
-
Are specific laws mentioned governing the Service? E.g. GDPR
- Yes, GDPR and other EU laws
- Other + text
- Not mentioned
-
Does the User’s location or jurisdiction affect the laws applicable to the Service?
- Yes, laws in User’s jurisdiction apply
- No, only those jurisdictions mentioned in the terms apply
- Not mentioned
-
Are there restrictions on where arbitration or legal proceedings can take place?
- Dispute resolution + text
- Arbitration + text
- Enforcement via courts + text
- Other + text
-
Do the terms explicitly include statements that create exceptions based on applicable law, without specifying which laws these are or whether they apply to the user in this context?