Findings
Our findings show the need to discuss the following: (1) Structure & Accessibility; (2) Service Quality; (3) User Rights, Responsibilities, & Liabilities; (4) Provider Benefits & Responsibilities; (5) Applicable Laws & Consumer Rights. We discuss their implications in the analysis section.
Annotation Results
| ID | Topic | Claude | DeepSeek | Gemini | Copilot | Le Chat | ChatGPT |
|---|---|---|---|---|---|---|---|
| Company | | Anthropic | DeepSeek | Google | Microsoft | Mistral | OpenAI |
| S1.7 | documents | 3 | 3 | 2 | 5 | 1 | 7 |
| S1.8 | free vs paid | same | same | same | same | same | same |
| S2.4-S2.5 | QA | no | no | no | no | no | no |
| S6.A | I/O label | Materials | n/a | Content | Content | Data | Content |
| S6.B | personal data? | yes | yes | yes | yes | yes | yes |
| S3.3-S3.5, S4.3-S4.5 | I/O restrictions | yes | yes | yes | yes | yes | yes |
| S6.C | reverse engineering allowed? | no* | no* | no* | no* | no* | no* |
| S3.6 | input rights retained | yes | yes | yes | yes | yes | yes |
| S4.6 | output rights given | yes | yes | yes | yes | yes | yes |
| S3.7, S4.7 | training by provider | yes | yes | yes* | yes | yes | yes |
| S3.8, S4.8 | user control | opt-out | opt-out | opt-out* | opt-out | opt-out | opt-out |
| S4.19 | training by user | no | yes | no | no | no | no |
| S3.9, S4.9 | other purposes | yes | yes | yes | yes | yes | yes |
| S3.10-S3.13, S4.10-S4.13 | third party sharing | yes | yes | yes | yes | yes | yes |
| S3.14 | input liability | user | user | user | user | user | user |
| S4.14 | output liability | user | user | user | user | user | user |
| S3.16, S4.16 | filtering/detection | yes | yes | yes | yes | yes | yes |
| S3.17, S4.17 | violation causes suspension or termination | yes* | yes | yes | yes | yes | yes |
| S6.E | human involvement | yes | n/a | yes* | yes | n/a | yes |
| S4.18 | technical risks | yes | yes | yes | yes | yes | yes |
| S4.20 | harms to user | n/a | n/a | n/a | n/a | n/a | n/a |
| S5.1 | applicable jurisdiction | Ireland/EU | China* | Ireland/EU | Ireland/EU | EU* | EU* |
| S5.2 | mentioned laws | n/a | n/a | n/a | n/a | GDPR, AI Act | n/a |
| S5.3 | local laws apply | yes | no | yes | yes | yes | yes |
| S5.4 | restriction on courts | no | China* | no | no | France* | no |
| S6.D | restrictions unless laws apply | yes | n/a* | n/a | yes | yes | yes |
Structure & Accessibility
(1) For all services, users were presented with the same information regardless of whether they were free or paid customers, or individual or enterprise customers.
(2) All providers except Mistral split their terms across multiple documents. In addition, Google's and Microsoft's terms also covered other services and products in their portfolios, which made it impossible to determine which terms applied only to the use of their GenAI services.
(3) Additionally, each set of terms contained sections intended for enterprise customers, which had to be identified and excluded when analysing the consumer-facing sections.
(4) We also faced difficulties in identifying the correct terms for Copilot, as the linked URL redirected to a generic marketing page rather than directly to the terms; we confirmed this behaviour on multiple devices, browsers, and operating systems. A similar difficulty occurred with Gemini's terms, which mentioned the possibility of specific terms and pointed to a list containing `Gemini Apps', which again redirected us back to the same terms.
(5) Only Google explicitly provided a PDF version of its terms, though this did not contain all relevant documents, and Mistral's terms could be exported directly by virtue of being on a single page. For all other terms, we had to take several manual steps to identify and archive each relevant document; the redirect behaviour noted in (4) can be re-checked as sketched below.
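Since linked terms URLs may change or redirect over time, the redirect behaviour described above can be re-verified programmatically. The following is a minimal sketch, using hypothetical placeholder URLs (the actual links were taken from each service's interface at the time of analysis), that follows each link's redirect chain and flags links that do not resolve where they point:

```python
# Minimal sketch: follow each terms URL's redirect chain and report
# where it finally resolves. The URLs below are hypothetical placeholders.
import requests

TERMS_URLS = {
    "Copilot": "https://example.com/copilot/terms",  # hypothetical
    "Gemini": "https://example.com/gemini/terms",    # hypothetical
}

for service, url in TERMS_URLS.items():
    # requests follows redirects by default; response.history records
    # every intermediate hop in the chain.
    response = requests.get(url, timeout=30)
    hops = [r.url for r in response.history] + [response.url]
    if response.url != url:
        print(f"{service}: redirected via {' -> '.join(hops)}")
    else:
        print(f"{service}: resolved directly to {url}")
```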
Conclusions: All terms had issues regarding the structure and accessibility of content, which warrants a discussion of the implications.
Service Quality
| Topic | Claude (Anthropic) | DeepSeek (DeepSeek) | Gemini (Google) | Copilot (Microsoft) | Le Chat (Mistral) | ChatGPT (OpenAI) |
|---|---|---|---|---|---|---|
| Service functionalities | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Performance Assurance | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Stability Assurance | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Service change without notice | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Accuracy Assurance | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Warranty disclaimers | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
(1) No provider gave a clear definition or description of its service in terms of the specific features or functionalities provided to the user, despite this being a necessary part of the terms to indicate what the service and payments entail.
(2) All terms stated that the terms may change at any time, with the consumer given a 30-day notice period to accept the changes or leave, while also stating that the service may change without prior notice.
(3) No terms described the quality of the service or provided quality assurances; instead, warranty disclaimers contained explicit statements that no assurances were given regarding performance, fitness for purpose, errors, or defects.
Conclusions: We consider the lack of defined service functionalities, changes to the service without notice, and the use of warranty disclaimers as important for discussing the implications.
User Rights, Responsibilities, & Liabilities
| Topic | Claude (Anthropic) | DeepSeek (DeepSeek) | Gemini (Google) | Copilot (Microsoft) | Le Chat (Mistral) | ChatGPT (OpenAI) |
|---|---|---|---|---|---|---|
| Input/output restrictions | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Users input liability | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Provider input liability | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| User output liability | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Provider output liability | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Underlying model access restricted | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Guidance for responsibilities | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Violation causes suspension | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Violation causes termination | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
(1) All terms had specific prohibitions regarding input/output which included legal prohibitions (e.g. illegal content, copyright, intellectual property rights) and safety prohibitions (e.g. producing harmful content or misinformation).
(2) All terms except Google's also had explicit prohibitions regarding `jailbreaking' and reverse engineering, meaning that users had no access to the underlying models and had to use only the provided interfaces.
(3) None of the terms clarified what inputs or outputs were exclusively permitted, or restricted them to a specific scope.
Conclusions: This means that all terms contained only prohibitions on invalid inputs and outputs, and did not indicate what the service is suitable or designed for in terms of a specific set of inputs, outputs, or scenarios.
(1) For inputs provided by users, all terms mentioned that users bore sole responsibility for ensuring their inputs met the stated requirements and prohibitions.
Similarly, all terms also mentioned that the user was solely responsible for ensuring that all outputs produced through the service met the requirements and prohibitions.
(2) In addition to stating responsibility, each terms document also contained a disclaimer that made users liable for both inputs and outputs, in particular regarding third-party grievances.
(3) No terms provided a rationale, explanation, or link to resources that would let users understand and control how their inputs resulted in responsibility and liability over outputs, especially since users could access the underlying model only through the provided service.
(4) All terms mentioned that violating the terms, including the defined restrictions and prohibitions, could result in suspension of the service or termination of the contract. Claude's terms listed a third possibility of degrading the service, though they did not clarify what this implied.
(5) No terms provided clear criteria for assessing the severity of violations, or specified which violations would result in suspension and which in termination, though all terms clarified the consequences for subscription fees in case of contract termination.
Conclusions: We consider users bearing liability for inputs, providers bearing no liability for outputs, and the lack of guidance for users to be important findings for discussing the implications.
Provider Benefits & Responsibilities
| Topic | Claude (Anthropic) | DeepSeek (DeepSeek) | Gemini (Google) | Copilot (Microsoft) | Le Chat (Mistral) | ChatGPT (OpenAI) |
|---|---|---|---|---|---|---|
| Training on input/outputs | ✓ | ✓ | N/A* | ✓ | ✓ | ✓ |
| Users must opt-out | ✓ | ✓ | ✓* | ✓ | ✓ | ✓ |
| Data still used after opt-out | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Inputs/outputs as personal data | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Inputs/outputs for advertising | ✓* | ✓* | ✓* | ✓ | ✓* | ✓* |
| Third-party sharing | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Automated filtering/detection | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Moderation involves humans | ✓ | N/A | ✓* | ✓ | N/A | ✓ |
(1) All service providers except DeepSeek made no distinction between input and output data, and instead combined them under a single label (`Materials' (Claude), `Content' (Gemini, Copilot, ChatGPT), `Data' (Le Chat)).
(2) All service providers also explicitly treated both inputs and outputs as personal data.
(3) All providers except Google explicitly asserted that user inputs and outputs would be used by the provider for training GenAI models, and the only option provided to users was to opt out.
(4) Anthropic's terms specifically stated that even if the user opts out, if the user gives feedback (e.g. by clicking the thumbs up/down buttons in chat), their conversations would be used for training and the user had no control over this.
(5) Copilot (Microsoft) was the only service whose terms indicated that inputs and outputs may be used for advertising, including forms that involve third-parties, tracking, and profiling.
(6) All terms mentioned inputs and outputs being shared with third parties -- where by third party we mean any entity other than the user and the service provider.
(7) Only Mistral provided a detailed list of third parties with their roles.
All terms mentioned that inputs and outputs would also be used for other purposes beyond service provision and training, such as research and development, fraud management, and analytics.
(8) All providers asserted that users retained rights over the inputs they provided, and granted users rights over the produced outputs.
(9) However, all terms explicitly prohibited the consumer from using output data for training, with the exception of DeepSeek, which allowed the user to use outputs for training and any other purpose.
(10) All terms mentioned use of automated filtering and detection mechanisms, but did not provide details as to the specific methods used.
(11) The terms of Anthropic, Microsoft, and OpenAI mentioned human involvement in review of inputs and outputs, and those of Anthropic, Google, Microsoft, and OpenAI provided the option of human intervention for suspension or termination decisions.
Conclusions: All findings except the treatment of inputs/outputs as personal data and the use of automated filtering are significant for discussing the implications.
Applicable Laws & Consumer Rights
| Topic | Claude (Anthropic) | DeepSeek (DeepSeek) | Gemini (Google) | Copilot (Microsoft) | Le Chat (Mistral) | ChatGPT (OpenAI) |
|---|---|---|---|---|---|---|
| Applicable jurisdiction | IE/EU | CN* | IE/EU | IE/EU | EU | EU |
| Local laws apply | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ |
| Restriction on resolution | ✗ | CN* | ✗ | ✗ | FR* | ✗ |
| Ambiguous restrictions | ✗ | ✗* | ✗ | ✗ | ✗ | ✗ |
(1) All service providers except DeepSeek specified the applicable jurisdiction as the EU and acknowledged the consumer's local laws as applicable, with Anthropic, Google, and Microsoft specifically naming Ireland as the jurisdiction.
(2) DeepSeek was the only service that stated only the laws of China applied regardless of the consumer's location.
(3) Only Mistral mentioned specific laws in its terms (the EU's GDPR and AI Act), while the other terms referred to laws broadly (e.g., consumer laws) or vaguely (applicable laws).
(4) Mistral's terms (France) and DeepSeek's terms (China) were the only ones that limited where arbitration and legal proceedings could take place, which affects where users can lodge a complaint against the provider and where the proceedings will take place.
(5) All providers except Google included statements that referred to vague applicable laws without specifying the exact laws or confirming whether they applied to the consumer in question.
(6) Anthropic's terms stated the user `may have legal rights', while DeepSeek's and Mistral's included a blanket statement that the terms do not affect any consumer rights.
(7) All terms except Google's included a warranty disclaimer explicitly stating that the service is provided on an `as is' basis without assurances regarding quality, fitness for purpose, accuracy, or reliability. Of these, only DeepSeek's disclaimer did not include a statement clarifying its limitation under applicable law.
(8) The prohibitions in Anthropic's and OpenAI's terms were phrased as applicable unless restricted by law, while the others phrased them as applicable to the extent permitted by law. None clarified exactly what was applicable and to what extent.
(9) Of specific interest, only Mistral had requirements regarding the AI Act: the `customer' was prohibited from reporting any `serious incident' to an authority unless required by applicable AI laws. Reporting serious incidents is supposed to be an obligation for providers and deployers under the AI Act, but the terms did not distinguish these roles from individual consumers.
Conclusions: The issues regarding the applicability and ambiguity of laws are significant for discussing the implications.