A gay pride march in Nicaragua. The activist groups say the study could be used to out gay people across the globe, putting them at risk. Photograph: Jorge Torres/EPA

LGBT groups denounce 'dangerous' AI that uses your face to guess sexuality


Two prominent LGBT groups have criticized a Stanford study as ‘junk science’, but a professor who co-authored it said he was perplexed by the criticisms

A Stanford University study showing that artificial intelligence (AI) can accurately guess whether people are gay or straight based on their faces has sparked a swift backlash from LGBT rights activists who fear this kind of technology could be used to harm queer people.

The research, which went viral this week, used a sample of online dating photos, limited only to white users, to demonstrate that an algorithm could correctly distinguish between gay and straight men 81% of the time and between gay and straight women 74% of the time, suggesting machines can potentially have much better “gaydar” than humans.

The Human Rights Campaign (HRC) and Glaad, two of the most prominent LGBTQ organizations in the US, slammed the study on Friday as “dangerous and flawed … junk science” that could be used to out gay people across the globe and put them at risk. The advocates also criticized the study for excluding people of color and bisexual and transgender people and claimed the research made overly broad and inaccurate assumptions about gender and sexuality.

Michal Kosinski, co-author of the study and an assistant professor at Stanford, told the Guardian that he was perplexed by the criticisms, arguing that the machine-learning technology already exists and that a driving force behind the study was to expose potentially dangerous applications of AI and push for privacy safeguards and regulations.

“One of my obligations as a scientist is that if I know something that can potentially protect people from falling prey to such risks, I should publish it,” he said, adding that his critics were encouraging people to ignore the real risks of this technology by trying to discredit his work. “Rejecting the results because you don’t agree with them on an ideological level … you might be harming the very people that you care about.”

The study, first reported in the Economist, has sparked heated debate about the biological origins of sexual orientation and the ethics of facial-detection technology, which is becoming increasingly advanced and prevalent in society.

“Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay,” Ashland Johnson, HRC’s director of public education and research, said in a statement. “Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world – and this case, millions of people’s lives – worse and less safe than before.”

Co-author Michal Kosinski: ‘There is a moral question here. Should we publish it and ... even potentially give some bad guys some ideas, or just not publish it?’ Photograph: Lauren Bamford

Kosinski has not actually released any AI program that the public could use (and declined the Guardian’s request to test the algorithm). He also noted that the findings support LGBT rights by providing further evidence that sexual orientation has biological roots and that being gay is not a choice.

“It’s a great argument against all of those religious groups and other demagogues who say, ‘Why don’t you just change or just conform?’ You can’t stop, because you’re born this way,” he said.

But the LGBT groups argued the study was too narrow by only using photos that people chose to put on dating profiles and by failing to test a diverse pool.

Kosinski and his co-author Yilun Wang acknowledged these limitations in the paper, claiming that they could not find sufficient numbers of non-white gay people. The authors have not disclosed the dating site they used, but Kosinski said in an interview that the majority of profiles they were reviewing were white.

Even though the researchers claimed they couldn’t find enough queer people of color (despite polls suggesting that non-white people are more likely to identify as LGBT than white people), Kosinski said he suspected his algorithm would perform fairly accurately across different races.

Asked about the calls for Stanford to denounce the study, Paul Pfleiderer, senior associate dean for academic affairs at the graduate business school, said in a statement: “Publication of research findings in academic journals allows the interpretation of those findings and the research methodologies used to obtain them to be scrutinized by academics in the field and are appropriately a matter for discussion and debate.”

Kosinski said he strongly weighed the risk of publishing the study at all: “There is a kind of moral question here. Should we publish it and make people upset and even potentially give some bad guys some ideas, or just not publish it and warn people?”

Given that private corporations and governments are already using this type of software, he said he felt obligated to move forward with the paper.

Kosinski added that he would be pleased if another researcher debunked his work: “I hope that someone will go and fail to replicate this study … I would be the happiest person in the world if I was wrong.”

Contact the author: sam.levin@theguardian.com
