AI technologies that extend, rather than hinder, justice, equity, and fundamental rights are foundational to societal benefit and human flourishing. Holding those who develop and deploy AI technologies accountable is instrumental to these goals. The AIAL is an independent research lab whose mission is to ensure that AI technologies work for the public, particularly those at the margins of society, who tend to be disproportionately harmed.
Research and product development in AI currently benefit a few powerful actors, reinforce existing systems of power, and exacerbate and widen inequities. Furthermore, existing governance structures and regulatory processes tend to overlook the concerns and fundamental rights of those most likely to be disproportionately harmed by the ubiquitous integration of AI technologies into society.
Big tech companies, AI vendors, and government bodies alike are eager to integrate AI technologies into decision-making processes, in the belief that doing so will accelerate positive social transformation, societal benefit, and human flourishing in the long term. Yet AI systems are being integrated hastily into numerous social sectors, most often without rigorous vetting. Built on social, cultural, and historical data and operating within those same contexts, such systems tend to diminish fundamental rights and keep existing structures of authority and power intact. They thus benefit a handful of corporations and exacerbate and widen inequity, rather than contribute to societal benefit, positive transformation, and human flourishing. We believe rigorous research and empirical evidence are the antidote to many of the problems currently plaguing the AI industry: they hold responsible bodies accountable for adverse consequences and usher in meaningful, transformative change.