Our Work

At the AIAL, we take a comprehensive view of AI accountability. Some of the most persistent issues we encounter are rooted in extractive technological ecologies and oppressive capitalist structures. Our inquiries into AI accountability therefore span from studies of large systems, structures, and ecologies (such as the AI field itself and regulatory processes) to audits and evaluations of specific AI models, tools, and training datasets. We are also invested in conceptual and critical work that advances the frameworks and theories of change underpinning algorithmic audits, model evaluations, and meaningful accountability. We recognise that AI accountability research is most impactful when it informs the public, impacted groups, and policymakers. We therefore aim for active policy translation of our own research as well as field-wide research.

See our Projects, Publications and Policy work.

Current Projects

GPAI Training Transparency
A Quality Assessment of Public Summaries published under AI Act Article 53(1)(d)
The AI Act's Article 53(1)(d) requires General-Purpose AI (GPAI) model providers to "make publicly available a sufficiently detailed summary about the content used for training ... according to a template provided by the AI Office". We evaluate the quality of this documentation across two dimensions, Transparency and Usefulness, and assign a score using a methodology we developed. To assist GPAI providers, the AI Office, and stakeholders, we are also working on recommendations. see more
Terms of (Ab)Use
An Analysis of GenAI Services
We analyse the terms of generative AI services from the perspective of an EU-based consumer and share findings that reiterate known issues as well as surface new ones unique to GenAI services. The implications of these practices are severe: consumers suffer from a lack of necessary information, face a significant imbalance of power, and bear responsibilities they cannot materially fulfil without violating the terms. We make concrete recommendations for authorities and policymakers to urgently upgrade existing consumer protection mechanisms to tackle this growing issue. see more
AI Companions: "Friendship" without Boundaries?
A systematic investigation of how AI companion apps attract and retain users as a potential pathway to emotional dependence and psychological harm, and how such practices can be addressed through policy interventions.
Algorithmic Auditing: a Justice-oriented Framework
A justice-oriented auditing framework for orienting audit work toward protecting the interests and welfare of the most marginalised in society while maximising accountability of the bodies that create and deploy AI systems. The framework is grounded in the values, perspectives, and epistemologies of the historically and socially marginalised.

Recent Publications

You can see all our publications here.

A Quality Assessment Framework for GPAI Model Public Summaries required by AI Act Article 53(1)(d)
Dick A. H. Blankvoort, Harshvardhan J. Pandit, Maximilian Gahntz
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2026. (open-access)


Associated Media:
- Website: Project website with analysis of public summaries
- Data & Repo: Github
- Editorial: How Big AI Developers are Skirting a Mandate for Training Data Transparency (Tech Policy Press)

News Coverage (1 article)
  1. Euractiv (2026-03-02)

Terms of (Ab)Use: An Analysis of GenAI Services
Harshvardhan J. Pandit, Dick A. H. Blankvoort, Sasha Luccioni, Abeba Birhane
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2026. (open-access)


Associated Media:
- Website: Project website
- Data & Repo: Github

Are You Being Played? — Video Games as a Lens for AI Ethics and Data Politics
Gareth Young, Helen Husca, Harshvardhan J. Pandit
Games and Culture, 2025. DOI: 10.1177/15554120251409051 (open-access)

Simple now, Complex later: The Questionable Efficacy of Diluting GDPR Article 30(5)
Harshvardhan J. Pandit
Annual Privacy Forum (APF), 2025. DOI: 10.1007/978-3-032-07574-1_6 (open-access)

Computer-vision research powers surveillance technology
Pratyusha Ria Kalluri, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, Abeba Birhane
Nature, 2025. DOI: 10.1038/s41586-025-08972-6 (open-access)


Associated Media:
- News and Views: Computer-vision research is hiding its role in creating ‘Big Brother’ technologies
- Video: Is AI powering Big Brother? Surveillance research is on the rise
- News: Wake up call for AI: computer-vision research increasingly used for surveillance
- Editorial: Don’t sleepwalk from computer-vision research into surveillance

News Coverage (10 articles)
  1. MSN.com (2025-06-26)
  2. a4 Note.com (2025-07-03)
  3. Pais (2025-06-25)
  4. NewsBreak (2025-06-26)
  5. Nature (2025-06-25)
  6. Science (2025-06-26)
  7. The Register (2025-06-25)
  8. LatestLY (2025-07-02)
  9. Tech Xplore (2025-06-25)
  10. BEM (Sci (2025-07-05)

Recent Policy Work

The AIAL supports community initiatives and civil society efforts, and contributes to policymaking at national (Irish), EU, and international (e.g. UN) forums. You can see all our policy work here.

Open Joint Letter on the Digital Omnibus on AI: Preserving the Scope and Integrity of the AI Act
8 Apr 2026 | Signatory to letter/petition organised by European Consumer Organisation (BEUC)
AIAL's Comments for the Consultation on Digital Omnibus (Digital Package on Simplification)
13 Mar 2026 | Feedback on GDPR, ePrivacy Directive lawmaking organised by European Commission's Directorate-General for Communications Networks, Content and Technology
The EU must uphold hard-won protections for digital human rights
13 Nov 2025 | Signatory to letter/petition organised by European Digital Rights (EDRi)
AIAL comments on EDPB Guidelines 3/2025 on the interplay between the DSA and the GDPR Version 1.1
31 Oct 2025 | Feedback on GDPR, DSA lawmaking organised by European Data Protection Board (EDPB)