Stewarding a greater ecology of accountability in the age of AI

Trinity College Dublin’s Artificial Intelligence Accountability Lab (AIAL) was founded and is led by Dr Abeba Birhane. The AIAL studies AI technologies and their downstream societal impact with the aim of fostering a greater ecology of AI accountability.

About Us

AI technologies that extend (rather than hinder) justice, equity, and fundamental rights are foundational to societal benefit and human flourishing. Ensuring that those developing and deploying AI technologies are held accountable is instrumental to supporting these goals.

The AIAL is an independent research lab with a mission to ensure that AI technologies work for the public, particularly those at the margins of society, who tend to be disproportionately negatively impacted. Research and product development in AI currently benefit a few powerful actors, reinforce existing systems of power, and exacerbate and widen inequities. Furthermore, existing governance structures and regulatory processes tend to overlook the concerns and fundamental rights of those most likely to be disproportionately harmed by the ubiquitous integration of AI technologies into society. Read more…

Our mission

The AIAL is dedicated to ensuring that the wider AI ecology, from research and product development to regulation, centres the public interest, particularly that of the most marginalised and disenfranchised in society.

Research excellence and technical rigour are paramount to us, as is research with practical implications that serves those who are disproportionately negatively impacted. We therefore partner and collaborate with research centres, civil society organisations, and rights groups across the globe. These collaborations, active conversations, and allyship give our work the weight and momentum needed to advance the laboratory’s central mission of asserting rights. Driven by the concerns that affect the most marginalised, we strive to uncover, document, and study the AI technologies that pervade society in order to:

  • challenge and dismantle harmful technologies;
  • inform evidence-driven policies;
  • hold responsible bodies accountable; and
  • pave the way for a future marked by just and equitable AI.

Research

At the AIAL, we take a comprehensive view of AI accountability. Some of the most entrenched issues we currently encounter are rooted in extractive technological ecologies and oppressive capitalist structures.

Our inquiries into AI accountability therefore span from studies of large systems, structures, and ecologies (such as the AI field itself and regulatory processes) to audits and evaluations of specific AI models, tools, and training datasets. Read more…