A controversial study conducted by a pair of Stanford researchers found that the artificial intelligence they trained could determine the sexual orientation of both men and women with fair accuracy. Not surprisingly given the topic, the findings unleashed a wave of resentment from people around the web, and especially from LGBT groups, who criticized the study’s design and the technology’s validity.
Never an easy subject
Equating a person’s physical characteristics with personality traits such as kindness, a propensity for crime, or even sexuality used to be the trade of a bygone pseudoscience called physiognomy. Practitioners of the field, which was popular from antiquity until the Renaissance, when it fell into disrepute, claimed, for instance, that a sharp chin and thin lips are hallmarks of a sly, potentially criminal person. The field is in the same vein as phrenology, in which bumps on the skull were used to diagnose temperament and aptitudes.
Michal Kosinski, one of the lead authors of the controversial study, is by no means a 21st-century physiognomist. He does believe, however, that there are indeed some facial cues that can reveal intrinsic personal characteristics.
“The fact that physiognomists were wrong about many things does not automatically invalidate all of their claims,” Kosinski told New Atlas. “The same studies that prove that people cannot accurately do what physiognomists claimed was possible consistently show that they were, nevertheless, better than chance. Thus, physiognomists’ main claim – that the character is to some extent displayed on one’s face – seems to be correct (while being rather upsetting).”
Kosinski and his colleague Yilun Wang designed a machine learning algorithm based on a predictive model called logistic regression. They trained it on a dataset of 35,326 facial images: profile pictures of both men and women extracted from dating sites. All the participants were white.
When the machine had to identify homosexuality from a randomly selected pair of images featuring one homosexual and one heterosexual individual, the AI was on point 81 percent of the time for men and 71 percent of the time for women. Human judges achieved much lower accuracy: 61 percent for men and 54 percent for women. The machine’s accuracy jumped to 91 percent for men and 83 percent for women when it had five images of the same person at hand.
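The “accuracy” reported here is pairwise: given one gay and one straight individual, how often does the classifier assign the higher score to the gay one (a quantity equivalent to AUC). A minimal sketch of that setup, using plain logistic regression on entirely synthetic feature vectors (the actual study worked from features extracted from photos; every number and dimension below is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for facial-feature vectors; labels follow a noisy
# linear rule so the classifier has something learnable.
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + rng.normal(scale=1.0, size=n) > 0).astype(float)

X_tr, y_tr = X[:800], y[:800]
X_te, y_te = X[800:], y[800:]

# Plain logistic regression, fit by full-batch gradient descent.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_tr @ w)))        # predicted probabilities
    w -= lr * X_tr.T @ (p - y_tr) / len(y_tr)  # gradient step

scores = 1 / (1 + np.exp(-(X_te @ w)))

# Pairwise accuracy: for every (positive, negative) pair in the test set,
# how often does the positive example receive the higher score?
pos = scores[y_te == 1]
neg = scores[y_te == 0]
pairwise_acc = (pos[:, None] > neg[None, :]).mean()
print(f"pairwise accuracy: {pairwise_acc:.2f}")
```

On clean synthetic data like this the pairwise accuracy comes out high; the study’s reported 81 percent and 91 percent figures are this same kind of pair-ranking statistic, not the fraction of an arbitrary population correctly labeled.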
LGBTQ advocacy organizations GLAAD and the Human Rights Campaign (HRC) immediately responded, calling “on Stanford University and responsible media outlets to expose dangerous and flawed research that could cause harm to LGBTQ people around the world.”
GLAAD and HRC had a call with the Stanford researchers several months before the study went public, during which they raised serious concerns about the validity of the findings and “warned against overinflating the results or the significance of them.” According to a GLAAD press release, none of these supposed flaws were addressed.
“Media headlines that claim AI can tell if someone is gay by looking at one photo of your face are factually inaccurate,” wrote Drew Anderson, Director of News and Rapid Response for GLAAD.
GLAAD outlined several flaws in the study’s design, such as using only the profile pictures of white people, and only pictures taken from dating sites at that. The advocacy group also notes that the study has not yet been peer-reviewed, nor does it make any distinction between sexual orientation, sexual activity, or bisexual identity. Harsh words were not spared.
“Technology cannot identify someone’s sexual orientation. What their technology can recognize is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings should not be conflated,” said Jim Halloran, GLAAD’s Chief Digital Officer. “This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of color, transgender people, older individuals, and other LGBTQ people who don’t want to post photos on dating sites.”
Stanford’s study rests on a key assumption of an unvetted theory of homosexuality called the prenatal hormone theory (PHT) of sexual orientation. The crux is that under- or overexposure in the womb to key androgenic hormones responsible for sexual differentiation can drive same-sex orientation in adulthood.
“Typically, [heterosexual] men have larger jaws, shorter noses, and smaller foreheads. Gay men, however, tended to have narrower jaws, longer noses, larger foreheads, and less facial hair,” Kosinski wrote in recently published notes defending his work. “Conversely, lesbians tended to have more masculine faces (larger jaws and smaller foreheads) than heterosexual women.”
“Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles,” the authors concluded in their paper.
The prenatal hormone theory, however, doesn’t capture the complexity of homosexual behavior and, at best, only partly explains why some people develop a same-sex orientation later in life. Indeed, GLAAD’s critique of the study’s limitations is well founded. For instance, the study relied on rather questionable facial identification markers for homosexuality, along the lines of ‘lesbians smile less than heterosexual women’ or ‘lesbians usually have dark hair.’
It’s impressive that the Stanford researchers reached this level of accuracy, though the AI doesn’t seem to identify homosexuality at large. Rather, it only seems to partly identify white American homosexuals who fit cookie-cutter cultural norms. That’s still a remarkable feat.
Beyond the science itself, the study’s ethics come into question. GLAAD asks us to imagine for a second “the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay.” A homosexual witch hunt is actually underway today in a forsaken corner of the Caucasus, the Chechen Republic, where the head of state (or warlord) Ramzan Kadyrov has unleashed a brutal campaign aimed at detaining and torturing homosexuals. Kadyrov has denied any such claims, simply explaining that ‘there are no gay men in Chechnya.’
“This is nonsense,” Kadyrov said when asked about the allegations. “We don’t have those kinds of people here. We don’t have any gays. If there are any, take them to Canada.”
“Praise be to god,” the Chechen leader added. “Take them far from us so we don’t have them at home. To purify our blood, if there are any here, take them.”
Kosinski defended his findings in an explanatory note, writing that “we studied existing technologies, already widely used by companies and governments, to see whether they present a risk to the privacy of LGBTQ individuals.”
“[It] really saddens us that the LGBTQ rights groups, HRC and GLAAD, who strived for so many years to protect the rights of the oppressed, are now engaged in a smear campaign,” he added.
Kosinski claims his article is peer-reviewed, despite what HRC and GLAAD say, and that it is expected to appear in the Journal of Personality and Social Psychology. Science Alert reports, however, that the journal’s editor is now re-examining the paper in an ethical review following public pressure.