At the AIAL, we take a comprehensive view of AI accountability. Some of the most persistent issues we encounter are rooted in extractive technological ecologies and oppressive capitalist structures. Our inquiries into AI accountability therefore span from studies of large systems, structures, and ecologies (such as the AI field itself and regulatory processes) to the execution of audits and evaluations of specific AI models, tools, and training datasets. We are also invested in conceptual and critical work that advances the frameworks and theories of change underpinning algorithmic audits, model evaluations, and meaningful accountability. We recognize that AI accountability research is most impactful when it can inform the public, impacted groups, and policymakers. We therefore aim for active policy translation of our research, as well as that of the wider field.
See our Projects, Publications and Policy work.
Current Projects
A Quality Assessment of Public Summaries published under AI Act Article 53(1)(d)
The AI Act's Article 53(1)(d) requires General-Purpose AI (GPAI) model providers to "make publicly available a sufficiently detailed summary about the content used for training ... according to a template provided by the AI Office". We evaluate the quality of this documentation across two aspects, Transparency and Usefulness, and assign a score using a methodology we developed. We also provide recommendations to assist GPAI providers, the AI Office, and other stakeholders. See more
Recent Publications
You can see all our publications here.
Dick A. H. Blankvoort, Harshvardhan J. Pandit, Maximilian Gahntz
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2026. (open-access)
Associated Media:
- Website: Project website with analysis of public summaries
- Data & Repo: Github
- Editorial: (Tech Policy Press) How Big AI Developers are Skirting a Mandate for Training Data Transparency
News Coverage (1 article)
- Euractiv (2026-03-02)
Gareth Young, Helen Husca, Harshvardhan J. Pandit
Games and Culture, 2025. DOI: 10.1177/15554120251409051 (open-access)
Harshvardhan J. Pandit
Annual Privacy Forum (APF), 2025. DOI: 10.1007/978-3-032-07574-1_6 (open-access)
Pratyusha Ria Kalluri, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, Abeba Birhane
Nature, 2025. DOI: 10.1038/s41586-025-08972-6 (open-access)
Associated Media:
- News and Views: Computer-vision research is hiding its role in creating ‘Big Brother’ technologies
- Video: Is AI powering Big Brother? Surveillance research is on the rise
- News: Wake up call for AI: computer-vision research increasingly used for surveillance
- Editorial: Don’t sleepwalk from computer-vision research into surveillance
News Coverage (10 articles)
- MSN.com (2025-06-26)
- a4 Note.com (2025-07-03)
- Pais (2025-06-25)
- NewsBreak (2025-06-26)
- Nature (2025-06-25)
- Science (2025-06-26)
- The Register (2025-06-25)
- LatestLY (2025-07-02)
- Tech Xplore (2025-06-25)
- BEM (Sci (2025-07-05)
Recent Policy Work
The AIAL supports community initiatives and civil society efforts, and contributes to policy-making at national (Irish), EU, and international (e.g. UN) forums. You can see all our policy work here.
13 Nov 2025 | Signatory to letter/petition organised by European Digital Rights (EDRi)
14 Oct 2025 | Signatory to letter/petition organised by European Digital Rights (EDRi)
23 Sep 2025 | Signatory to letter/petition organised by European Digital Rights (EDRi)
10 Sep 2025 | Feedback on policy consultation organised by Dept. of the Taoiseach - Govt. of Ireland
9 Sep 2025 | Signatory to letter/petition organised by Academic Community