Queer Eye for AI: Risks and Limitations of Artificial Intelligence for the Sexual and Gendered Community
Credit: Alejandro Ospina
Imagine you call your bank to solve a problem, but the customer service representative suddenly accuses you of fraud and threatens to freeze your bank account if you don’t prove that you are the “real” you. Why? Because their artificial intelligence (AI) voice recognition system concluded that your voice wasn’t “manly” enough to match their recordings. This is a common experience for transgender people, myself included, and just one of the significant risks AI poses to the LGBTQI+ community.
AI refers to computer applications that exhibit capabilities of human intelligence, and it is rapidly penetrating many aspects of our lives. However, research on the pervasive risks of AI technologies for marginalized communities, and on ways to mitigate those risks, has not kept pace with their development. While AI can help in some areas, it can also facilitate human rights abuses and exacerbate discrimination against people of diverse sexual orientations and gender identities. Private companies and governments need to address these risks holistically, as soon as possible, with policies grounded in human rights and community participation.
Deciphering Humanity: The Limits of AI’s Identity Approach
There is a worrying proliferation of systems claiming to identify LGBTQI+ people by analyzing their facial features, voice, social connections, group memberships, customer behavior, and even profile picture filters. However, such software cannot accurately determine people’s sexual orientation and gender identity, because these personal and deeply felt characteristics cannot be discerned from external factors alone, may change over time, and may not conform to the Western constructions reflected in the datasets used to train AI.
For example, because these datasets often conflate gender identity and sex characteristics, AI inevitably fails transgender people. The costs of gender identification errors range from having a profile blocked on a dating app, to being misgendered, to frozen bank accounts and invasive airport security checks.
In addition, automated gender classification systems can fail intersex people, whose sex characteristics do not match society’s expectations of male or female bodies. For example, AI systems trained on datasets of endosex people, such as menstrual tracking or self-diagnosis apps, AI-based prenatal testing and screening, and AI-powered targeted advertising that promotes harmful medical interventions, may provide intersex people and their parents with inappropriate or biased information, contributing to ill-informed medical decisions and their irreversible consequences.
Additionally, the commercial datasets used to train AI reinforce unrealistic stereotypes of LGBTQI+ people as people who look a certain way, buy certain products, feel safe disclosing their sexual orientation online, and want to spend time on social media. Improving the technology will not alleviate these problems, as people can adopt any appearance or behavior regardless of their sexual orientation or gender identity.
Even when identification succeeds, AI can still cause harm. For example, AI-driven advertising or public health information can out a child to a homophobic family on a shared computer, or identify vulnerable adults as targets for conversion practices.
Research also suggests that AI algorithms struggle to differentiate between dangerous and ordinary language in the LGBTQI+ context. Examples of harmful social media censorship include restricting the names of transgender people, censoring drag queens’ “mock impoliteness,” removing innocuous content, banning profiles, and demonetizing videos. At the same time, AI algorithms may overlook genuinely dangerous content. These flaws create the traumatizing feeling of having one’s identity erased and cause self-censorship and a chilling effect on LGBTQI+ people’s self-expression, including digital activism.
AI-Powered Suppression of Sexual and Gender Diversity
When trained on biased data or programmed in a discriminatory manner, AI absorbs, perpetuates and autonomously reproduces prejudices circulating in society. However, the dangers to the LGBTQI+ community are greatest when AI technologies are used with intent to cause harm, such as generating harmful content or targeting the community more efficiently.
The advent of AI can sharpen the law enforcement tactics of homophobic governments, allowing them to monitor and punish LGBTQI+ people with unprecedented speed and sophistication. Sooner or later, biased governments around the world will be able to use AI to target LGBTQI+ people, activists, and allies for prosecution and slander campaigns by scanning their online activities, connections and communities, mobile phone contacts, streaming history, hotel and rental bookings, taxi rides, and more. The Russian government has already rolled out an AI-driven system that aims to identify “illegal” online content in order to enforce its “gay propaganda” law.
In addition, government agencies could restrict freedom of assembly by instantly detecting public event announcements and by identifying protesters after those events through facial recognition. Finally, private actors such as social media platforms and publishers can use AI to censor LGBTQI+ content or to discriminate against job or insurance applicants.
Protecting the LGBTQI+ Community in the Age of AI
Traditional legal tactics, such as non-discrimination claims and strategic litigation, may be inadequate to address these threats, particularly in countries hostile to the LGBTQI+ community. For example, the opacity of AI-based systems can make it impossible to prove intent in direct discrimination claims. In addition, traditional human rights mechanisms rest on the responsibility of state actors, yet much of the harm described above is caused by machines, private actors, or authoritarian governments operating without the rule of law.
Therefore, a significant part of the responsibility for preventing the unethical use of AI should lie with private companies. They should involve the LGBTQI+ community and its organizations in the development and evaluation of AI systems, abandon all technology that attempts to identify sexual orientation and gender identity, and avoid supporting state-sponsored homophobia around the world.
The ubiquitous ethical guidelines on AI should be replaced by human rights-based ones, since human rights are an internationally recognized set of universal principles that is not constrained by any single ethical school of thought and can be better enforced through oversight and accountability mechanisms. Such guidelines can be strengthened through binding interpretations of existing international human rights obligations and through the adoption of national laws. One possible legal measure, suggested by politicians and civil society, is a total ban on AI technologies that claim to detect sexual orientation and gender identity, whether in general or for law enforcement purposes. Finally, unintentional and intentional discriminatory uses of AI may require different solutions.