From July 9th to July 11th, the Centre for Democracy and Development (CDD West Africa) and the National Democratic Institute (NDI) organized the Second West Africa Conference on Countering Information Manipulation. The conference examined the role that AI and social media companies can play in creating and spreading disinformation, hate speech, and other forms of information manipulation.
During the conference, a panel session focused on understanding the impact of disinformation on marginalized communities and strategies for building resilience. The speakers included Professor Remi Sonaiya, the only female presidential candidate in Nigeria’s 2015 general election; Kelechi Emekalam from Take 2 Media Concepts; Lami Sodiq from Daily Trust; James Ugochukwu from Nigeria Civil Society Situation Room; Lanre Lanre-Gold from The Cable; and Lukeman Salami from the Joint National Association of Persons with Disabilities.
What is Gendered Disinformation?
Disinformation is the deliberate spread of false or inaccurate information to manipulate or mislead others. Such information is often created to change people's behavior by influencing their beliefs, attitudes, and perspectives. The effects are far-reaching, with women and persons with disabilities (PWDs) frequently bearing the brunt through gendered disinformation and online hate.
Gendered disinformation is the weaponization of false gender- and sex-based narratives against women to incite abuse and violence. Such tactics target women journalists, politicians, and women in high public office, aiming to silence, degrade, and humiliate them, and ultimately to drive them out of these high-level spaces.
Gendered disinformation has become more realistic and more widespread with increased access to generative AI tools like DALL-E, Reface, and Midjourney, which can be used to generate false imagery or pornographic deepfake videos of women. These techniques have been used to blend, replace, or superimpose images and video, creating deepfake videos that depict politicians and others saying things they never said or engaging in activities that never occurred.
Remi Sonaiya illustrated the impact of gendered disinformation by sharing experiences from her political career, during which she faced online attacks targeting her personal life. Following her account, the panel unanimously agreed that the development and lifecycle of AI tools should incorporate the insights and participation of women and marginalized groups.
On a global scale, the American singer and songwriter Taylor Swift fell victim to AI image manipulation when an AI-generated image depicting her in a compromising position was shared on X. The image attracted almost 45 million views during the day the post was up before the platform took action and suspended the account behind it. By then, the damage had already been done.
Potential solutions include consistently involving victims and affected communities in discussions about AI model development and design, and investing in AI-assisted fact-checking and broader media literacy education. Further legislation should also be enacted to regulate AI use. While the UN's first resolution on AI is a step in the right direction, laws should be specifically tailored to protect women and marginalized groups.
In conclusion, AI is a powerful tool that can both propagate and combat disinformation. For this reason, adequate steps must be taken to ensure it is not used as a weapon against marginalized groups. By implementing safeguards and promoting ethical use, AI can be harnessed for positive impact while vulnerable groups are protected from its misuse.