AI technologies that extend (rather than hinder) justice, equity, and fundamental rights are foundational to societal benefit and human flourishing. Holding those who develop and deploy AI technologies accountable is instrumental in supporting these goals.
Research and product development in AI currently benefit a few powerful actors, reinforce existing systems of power, and exacerbate and widen inequities. Furthermore, existing governance structures and regulatory processes tend to overlook the concerns and fundamental rights of those most likely to be disproportionately and negatively impacted by the ubiquitous integration of AI technologies into society.
There is widespread enthusiasm among big tech companies, AI vendors, and government bodies alike for integrating AI technologies into decision-making processes. This enthusiasm stems from the belief that doing so accelerates positive social transformation, societal benefit, and human flourishing in the long term. Yet AI systems are integrated hastily into numerous social sectors, often without rigorous vetting. Built on social, cultural, and historical data, and operating within that same realm, these systems tend to diminish fundamental rights and keep existing structures of authority and power intact. As a result, they benefit a handful of corporations and exacerbate and widen inequity, rather than contributing to societal benefit, positive transformation, and human flourishing. We believe rigorous research and empirical evidence are the antidote to many of the issues currently plaguing the AI industry: they hold responsible bodies accountable for adverse consequences and usher in meaningful, transformative change.
The AIAL is an independent research lab with a mission to ensure that AI technologies work for the public, particularly those at the margins of society who tend to be disproportionately and negatively impacted. The AIAL is dedicated to ensuring that the wider AI ecology — from research and product development to regulation — centres the public interest, particularly that of the most marginalised and disenfranchised in society. Research excellence and technical rigour are paramount to us, as is research with practical implications that serves those disproportionately impacted.
To that end, we partner and collaborate with research centres, civil society organisations, and rights groups across the globe. Driven by the concerns of the most marginalised, the AIAL leverages empirical evidence to inform evidence-driven policy, challenge and dismantle harmful technologies, hold responsible bodies accountable, and pave the way for a future marked by just and equitable AI.