Analysis

Terms of (Ab)Use: An Analysis of GenAI Services

Based on our findings, we structured our analysis in two broad sections. First, we discuss implications for individual consumers -- surfacing a lack of necessary information, unworkable responsibilities, and asymmetric benefits between providers and consumers. We then discuss implications under EU consumer protection law and consumer rights -- where we show how terms are not in good faith, are imbalanced, and thereby result in (potentially) unfair terms under existing law. Based on these, we formulated recommendations for consumer protection authorities to take action as well as to inform future rule-making, especially the EU's developing digital regime.

Implications regarding individual consumers

We considered the following processes that consumers need to undertake: (1) identify all relevant terms; (2) comprehend the scope and applicability of the service; (3) understand the roles and responsibilities; (4) formulate expectations regarding the service; and (5) identify controls and know how to operate them. Our findings show how the current practices make all of these unrealistic and burdensome even for the most astute consumer.

Lack of necessary information

  1. The structure of terms makes it difficult to identify relevant terms, and differentiating clauses that apply to individual consumers from those covering corporate use is something of a minefield. Combined terms from platforms like Google and Microsoft create uncertainty and confusion as to which terms apply and what exactly would happen if customers use only the GenAI services.
  2. No information or assurances about the specific functionalities, quality expectations, or stability and reliability of the service are provided to consumers -- which means a consumer is simply at the mercy of the service provider, accepting whatever features are provided in whatever form, without a clear expectation of when the service degrades.
  3. Terms state that the service may change without prior notice, without distinguishing between cosmetic changes, such as to the user interface, and significant changes, such as to the underlying models that constitute the bulk of a ‘GenAI service’. Consumers are thus burdened with additional effort to determine what quality means and to assess it where they suspect the service has degraded, as there is no meaningful notice of the change itself or an understanding of when a change is so significant that it has contractual implications.
  4. GenAI services are marketed as general-purpose tools while also being advertised as useful for specific cases -- this likely creates an expectation of usefulness for consumers, who assume the service has been designed to enable such uses. We find parallels here between marketing concerns, such as misleading advertising, and concepts in other regulations, such as the AI Act's intended purpose, where both depend on a clear understanding of the advertised uses to delineate provider and deployer responsibilities.

Unworkable responsibilities put on consumers

  1. GenAI terms allocate complete responsibility and liability to the user regarding inputs, in particular for assessing violations of legal concerns such as copyright and IP, and use automated mechanisms such as content filtering and detection to determine whether inputs are valid. As these mechanisms are `black boxes' without a clear description or any way to control or alter them, allocating liability solely to users means they are responsible even without knowing which content might trigger violations and risk suspension or termination.
  2. The responsibilities and liabilities for outputs are also allocated solely to the user, even though outputs may not be solely due to user inputs: they also depend on the functioning of the underlying AI model and on `system prompts' that instruct the model how to respond to inputs. Consumers' only option is to change their inputs and hope for the best -- they are not provided any resources, facilities, or guidelines to understand or control how the model produces outputs.
  3. Providers take no responsibility for their own role and control over the service, which means users are made liable regardless of what training data the provider used, how the model was developed and configured, and how the service was set up to use the user's input with the model. A recent example of this is the use of Grok to produce child sexual abuse material (CSAM), where X reiterated that only users are responsible for outputs.
  4. Even when invalid inputs and outputs cause suspension of service and termination of contracts, which are `legal effects', only four providers hinted at human involvement in moderation and final decisions. Combined with the lack of warranty, this creates barriers for consumers to dispute automated decisions.

Asymmetric benefits: providers vs consumers

  1. GenAI providers grant themselves rights to use inputs and outputs for training regardless of whether consumers pay for the service; what was earlier an opt-in has since been changed to an opt-out. This raises concerns about awareness (e.g. whether users are aware of the training and opt-outs), burden (e.g. the effort needed to exercise opt-outs), utility (e.g. whether payments create an expectation of opt-out), and pricing (e.g. whether opt-outs would cost extra in the future).
  2. All providers except DeepSeek expressly prohibited consumers from using outputs for training, which could be understood as necessary to prevent other model developers from using the outputs to train their own models. However, this is not clarified as such, which makes it an unfair trade-off for two reasons: the liability for inputs and outputs is put solely on users despite the unworkable imbalance, and there is no distinction between commercial and non-commercial or hobby uses of outputs, or between free and paid consumers. Users are thus given full responsibility but restricted from using the outputs as they see fit.
  3. Consumers are also prohibited from reverse engineering, where specific prohibitions are mentioned with the caveat that applicable laws may limit such prohibitions or remove them entirely. A typical consumer cannot be expected to be aware of these laws or their nuances, which creates an environment where users may believe that such invalid and unenforceable restrictions apply, and may end up self-restricting their ability to fully enjoy the service or exercise their legal rights.

EU Consumer Protection Law and Consumer Rights

Good faith in terms

`Good faith' means not taking advantage of the consumer's lack of experience and weak bargaining position. In our findings, all providers required specific restrictions and prohibitions to be fulfilled, with complete responsibility and liability assumed solely by consumers, while the terms themselves are difficult for a typical consumer to decipher. The lack of quality metrics in the terms means that consumers have no way of understanding whether the outputs they receive as part of the service are problematic or acceptable. Even if this information is available elsewhere, its absence from the terms creates a burden for consumers, who would have to discover it through other means. Dispute resolution is also affected, as the information was not part of the terms and could have been changed without consumers' knowledge.

Good faith requires that terms be provided in plain, intelligible language that an average consumer can understand, be available in a single location, and be expressed legibly with no hidden small print -- whereas our findings showed these documents are burdensome to access and understand. We also found significant hurdles in discerning the language used to outline restrictions, warranties, disclaimers, and similar legal provisions, despite these being among the most important clauses for consumers. The complex phrasing of these clauses, without clarification of their impacts, also likely violates the UCTD's requirement of `plain intelligible language' that consumers understand.

Imbalance in terms

Our findings show a clear imbalance in terms that give a significant advantage to the provider without an equal benefit to the consumer: through the allocation of liabilities; through the provider training models on consumer data by requiring consumers to opt out while preventing them from using outputs to train their own personal models; and through cases where data is used even after opting out. Imbalance is also present in the distribution of liabilities: the service provider has access to information and capabilities to direct the model towards specific behaviours which consumers do not, making it more difficult for consumers to control the outputs. The consumer thus takes on more liability in the use of the service than the service provider does in its provision of the same service.

Potentially unfair terms

The term 'potentially unfair' is a categorisation in law, as it requires an authority to determine validity on a case-by-case basis. While we use this term to align with the law, we also present our arguments for why the issues we find should be considered unfair.

UCTD Annex I(1)(b) states "excluding or limiting the legal rights of the consumer [...] in the event of total or partial non-performance or inadequate performance" as a `potentially unfair' term.

This matches our findings: no terms included specific quality metrics that would enable consumers to detect and dispute performance issues. DeepSeek's assertion that only the laws of China apply seems a clear exclusion of the legal rights of a consumer based in the EU, for example to ensure fairness in contracts and regarding disputes. It also prevents consumers in the EU from taking action against DeepSeek without opening proceedings in China, which is contrary to EU consumer protection norms and incompatible with established legal precedents.

UCTD Annex I(1)(k) states "enabling the seller or supplier to alter unilaterally without a valid reason any characteristics of the product or service to be provided" as a `potentially unfair' term.

This matches our findings: no terms described the features or functionalities associated with the service, none provided assurances regarding quality, and all indicated that the service may change at the sole discretion of the provider and without prior notice to consumers. As the service is intended for and used by the consumer, the validity of such changes should rest on whether their benefits are intended for, and materialise for, consumers, and on whether consumers are informed thereof. Whether the consumer has the ability to direct a change or control when it happens is also relevant -- and our findings show that no GenAI service provides this.

UCTD Annex I(1)(m) notes that provider having the sole right to determine whether services are in conformity with the terms may be considered unfair.

Since the terms do not give the consumer any information or capability to determine the quality of the service, and instead explicitly disclaim all assurances, the consumer may be at a significant disadvantage: the terms may be used by the service provider to treat any defects or issues regarding reliability, performance, and use as non-applicable. Even if the terms state they would change only with a notice period, the service itself changing -- including changes to quality -- without prior notice or control can be seen as a change of terms. The consumer must thus rely solely on the service provider to determine whether the service provided is of sufficient quality.