Independent technology researchers, journalists, and activists who surface and confront politically charged topics such as extremism, disinformation, or online hate increasingly face heightened threats, including smear campaigns, doxxing, coordinated harassment, institutional pressure, and legal intimidation. Similarly, the proliferation of AI systems has increased, in both intensity and quantity, exposure to graphic, violent, and otherwise disturbing content (such as child sexual abuse material). Whether such exposure is expected (as part of researching harmful content, for example) or encountered unexpectedly, politically contentious and emotionally taxing work carries professional costs as well as personal and emotional trauma.
Civil society organisations, advocacy and rights groups, and academic researchers that have had some of the most significant impact in holding powerful bodies accountable have been navigating this challenging space. Independent researchers are facing more challenges than ever before, yet resources and support for such work are almost non-existent.
In challenging times like these, communities play a critical role. Communities of care offer support, solidarity, and concrete resources, such as legal support networks and training in security and resilience practices.
We – the AI Accountability Lab (AIAL), a research lab tackling politically charged and emotionally taxing topics, and the Coalition for Independent Technology Research (CITR), a coalition that defends the right of independent researchers to carry out their work without fear of retribution – have been navigating this challenging space. In this blog post, we have assembled our resources and lessons learned, both from our own experiences and from other researchers and civil society organisations working on politically sensitive and emotionally costly issues. This is a living document and by no means an exhaustive list; we see it as just the beginning of a more comprehensive and robust resource. We welcome anyone to reach out and share further tools and links to help build out this document with intersectional perspectives and learnings. Details are at the end of this post.
Anticipating risk and emotional harm
An important lesson from independent technology researchers working on emotionally taxing and politically charged AI research is that anticipating risk early matters. Most importantly, having institutional safeguards such as ethics approval and data protection impact assessments in place before embarking on research is critical. Prior legal review of public-facing research outputs helps safeguard against campaigns of legal intimidation and harassment. It is also important to recognize that intimidation and harassment are often structural tactics designed to isolate researchers and obstruct critical inquiry. Collective preparation and institutional support significantly shape what researchers are able to withstand.
Routine exposure to hateful and violent content often leads to burnout and secondary trauma; this is a predictable outcome of emotionally demanding research environments. As a result, more researchers are now treating emotional wellbeing, privacy and security, and peer support as core components of research design and practice.
This video captures this reality clearly, highlighting how researchers working on algorithmic harms, online hate, surveillance, and platform accountability are increasingly exposed to intimidation and harassment.
We do not need to navigate these risks and challenges alone; community and collective action are essential to overcoming the isolation that adversarial and political actors seek to create.
Collective infrastructures
Most of these risks are difficult to navigate alone, so it is important to work in pairs or teams. Furthermore, given that such demanding work tends to be high-stakes for democratic processes, the rule of law, and accountability, the onus to ensure wellbeing and safety should not fall on individual researchers alone. Collective infrastructures and coalitions are key to building systems of checks and balances that provide emotional, financial, and legal protection to researchers, tech workers, investigative journalists, and civil society organisations.
Resources
Institutional safeguards and legal support
- Institutional support is critical and the first line of safeguarding. Prior to embarking on the research, you should:
  - Carry out a data protection impact assessment and consult your organization's designated data protection or privacy officer as an initial point of contact.
  - Seek ethics approval from the appropriate institutional ethics review body.
  - Remain up to date with required research integrity, data protection, and privacy training.
- For external support with a data protection impact assessment, take a look at AWO.
- In the worst-case scenario, research that unveils individual actors or bodies at the heart of harmful political ideology can incur defamation lawsuits. In such cases, you may want to seek legal support for a pre-publication review of any public-facing documents for potentially defamatory content. CITR can connect you with legal experts who can do this pro bono.
Digital security and privacy
- Access Now is a critical hub for resources on how to protect yourself online, and offers a helpline for those who find themselves facing digital threats.
- Digital Forensics Lab offers specialised support, training, and threat analysis against digital threats.
- This document by the Centre for Research and Evidence on Security Threats provides practical tips on online safety for researchers and on how to manage risks from a security and privacy standpoint.
Tackling harassment and intimidation
- For US-based academic institutions, the Researcher Support Consortium's Toolkit offers useful resources for protecting and supporting researchers facing campaigns of intimidation and harassment.
- PEN America's Online Harassment Field Manual can help you prepare for any potential harassment. The "prepare" section of the guide comprises three key components:
  - tightening your digital security,
  - managing your privacy, and
  - establishing supportive online communities who will have your back.
- The Association of Internet Researchers' Risky Research Guide brings together a comprehensive list of useful resources on researcher protection and safety in one place.
Researchers’ wellbeing
- The Digital First Aid Kit provides a curated set of resources for individual researchers and teams, outlining strategies to protect personal and collective wellbeing when experiencing overwhelm.
- This short piece by Bellingcat details practical tips on how to minimize mental distress and trauma for those exposed to extremely graphic images, particularly in the context of war.
- Developed by the Centre for Research and Evidence on Security Threats, this document deals with emotionally demanding research, and this document provides an overview of emotional hygiene when working on politically sensitive topics.
- This UK-oriented toolkit by UCL, UKRI, and the University of Exeter provides practical strategies for preventing or minimizing stress and emotional trauma that might arise from emotionally taxing research.
- The Researcher Wellbeing Project (RWP) provides a detailed manual for addressing researcher distress, trauma, and secondary trauma arising from research on emotionally challenging topics.
- In this short paper, Burrell et al. draw on their own experience of working on topics such as child sexual abuse material to offer recommendations that may help increase researchers' emotional resilience to these challenges.
- This paper provides guidelines on how to handle, prepare, and publicly communicate academic research on harmful, toxic, or abusive content.
Examining harm in politically sensitive research
- This academic paper by Joe Whittaker et al. discusses the harms that online extremism researchers face and offers suggestions for addressing them. A report by Elizabeth Pearson et al. covers similar ground.
About the Authors
The AI Accountability Lab (AIAL) is a research lab hosted in the ADAPT Research Centre and the School of Computer Science and Statistics at Trinity College Dublin. The lab studies AI technologies and their downstream societal impact with the aim of fostering a greater ecology of AI accountability. We are driven by the urgent need to demystify, critically assess, and publicly communicate the operations and functionality of AI systems, looking both under the hood and at their downstream impact on the public. This work is supported by the European AI & Society Fund.
CITR brings together public interest technology researchers from across sectors such as academia, civil society, and journalism to advocate for and defend research that is ethical, transparent, and privacy-preserving. CITR welcomes new members, and the application is a quick process. CITR will provide support to independent researchers in times of crisis, regardless of membership status. Email info@independenttechresearch.org with any questions or suggestions for additional resources for this document.