It seems that AI is inescapable these days. Even if you don’t seek out AI directly, a simple Google search has turned into a (probably incorrect) AI query. Google places the AI summaries at the top of the results page so that the first information your eyes process is AI slop.
Perhaps this isn’t a big issue if you want to find the best taqueria in your area. A half-competent AI could aggregate reviews and point you in a delicious direction. But what if your egg just cracked and you’re scared? You’re not ready to share this with the humans in your life, so you seek help from the Internet. You do some Google searching to answer your most pressing questions, and with each search the first text you see is the AI summary. What does the AI tell you?
Fortunately, a new open-access report from Dr. Anna Beers answered this question for us. Dr. Beers is a researcher at the Center for Information, Technology, and Public Life at the University of North Carolina at Chapel Hill, and her recent analysis was published at Just Tech.
To say the least, Beers’ results are troubling. A basic Google search for the “long-term effects of transitioning” returned misinformation, claiming that transition causes mood swings, depression, and anxiety. The linked source for this claim is the website of the American College of Pediatricians (ACP), an anti-LGBTQ+ hate group that promotes conversion therapy and opposes the right to marriage and adoption for queer couples.
This claim from the ACP (parroted by Google AI) is fundamentally at odds with the majority of research on the mental health effects of transition and with the lived experiences of trans people. In my own experience, transition provided a mental stability that had eluded me since the onset of puberty. Certainly, there is an emotional toll to being openly trans, but that toll is actively levied by a transphobic society that marginalizes us for its own amusement.
Unfortunately and unsurprisingly, Beers’ report found that Google’s AI overviews are citing other LGBTQ+ hate groups as well. When prompted for detransition rates, Google AI claimed the exact rates of detransition are unknown, citing the Society for Evidence-Based Gender Medicine (SEGM). SEGM was a driving force behind state bans on affirming care for trans adolescents, including Texas’ decision to investigate such care as child abuse. In reality, detransition rates are known and low (3%).
Moreover, a Google search query for “is my child trans or confused” returned horrible advice for parents of trans children. Citing a blog called “Unleash the Gospel,” the AI overview advised not rushing to label a child as trans as they might just be “gender-confused.” Parents: don’t do this, and seek advice from much better sources.
As Beers suggests, the sanitizing of the ACP and SEGM as valid sources on queer health and wellbeing is the direct result of right-wing hate campaigns that cast queer and trans people as threats to society. Governments worldwide have joined the campaign by pushing misinformation about queer health, including the debunked Cass Review (from the UK) and the anonymous HHS review (from the US).
Since these junk science reports are endorsed by governments, they carry a veneer of validity that non-experts on trans healthcare interpret as credibility. However, these reviews are just misinformation, and good faith examinations of the scientific literature (like Utah’s recently released report) routinely find that affirming care has vast benefits.
AI inherently lacks a value system with which to effectively scrutinize misinformation. As Jessica Kant pointed out in a recent Bluesky thread: since the release of the HHS report, OpenAI’s ChatGPT and Microsoft’s Copilot have begun suggesting gender exploratory therapy (a sanitized name for conversion therapy) as a “first line approach” to treat gender dysphoria at any age. To be clear, gender exploratory therapy is only the first line approach to inflicting long-lasting trauma on its subjects.
AI is a supposedly neutral party, a mechanical tabula rasa onto which the Internet’s data can be written and then regurgitated as alleged intelligence. Yet this branding is misdirection by the tech oligarchs who run AI platforms. At the end of the day, they alone hold the power to decide what AI algorithms are trained on, and we are all subjected to their whims.
It’s no secret that tech oligarchs have forged an alliance with right-wing, anti-trans governments. As these governments churn out misinformation targeting queer people, tech companies can simply add these documents to their AI training data. After this, AI algorithms will largely treat this information as fact.
Laundering misinformation is really that simple once you have access to the people responsible for the algorithms. This problem is likely to get worse as governments continue to peddle queerphobia. Of course, this is by design: states (especially the United States) view AI as a tool of surveillance and enforcement to cast society into a mold of their own making. In case it wasn’t obvious, there is no room for gender non-conformity in that mold.
When enforcing this worldview, AI has many options at its disposal. It could, for example, simply rat out non-conformers to the model’s creators or directly to the government. But it could also try to persuade you to take actions it is programmed to promote. In fact, according to a recent paper published in *Nature Human Behaviour*, a “personalized” AI chatbot trained on someone’s sociodemographic data is more persuasive than another human.

Fortunately, we haven’t reached the point where this AI persuasion approach is feasible at scale. AI companies would need to vastly expand their server capacity to generate a personalized AI model for each individual. As such, we can still avoid these worst-case scenarios.
The biggest remaining hurdle for tech companies is money. So far, AI has largely failed as a commercial product, and the data centers needed to power AI are expensive. We can collectively deny these companies access to the revenue they need to scale up by boycotting AI products: change your default search engine to something that isn’t Google, avoid OpenAI’s promised product line (including the proposed AI neckwear that doesn’t turn off), and don’t pay to access a creepy little algorithm.
Simultaneously, we should put explicit pressure on tech companies to create civilian oversight boards for AI. These boards should be filled with an international cohort of people disproportionately impacted by AI’s biases, such as queer and trans people, disabled people, people of color, and those living in the global South.
If AI is to fundamentally transform human society for the better (and for the record, I do not yet believe that), then it belongs under collective governance rather than oligarchic rule. Otherwise, AI will become just another tool of capitalist repression.
I want to address this one part:
> Moreover, a Google search query for “is my child trans or confused” returned horrible advice for parents of trans children. Citing a blog called “Unleash the Gospel,” the AI overview advised not rushing to label a child as trans as they might just be “gender-confused.” Parents: don’t do this, and seek advice from much better sources.
For the longest time, I thought I was "gender-confused" and was sure I was a cishet male. For my pre-teen and teenage years, I was *extremely* queerphobic. The advances in gay rights in the 2000s and 2010s made me an ally to queer people (EDIT: or so I thought), but it was watching the Barbie movie in August 2023 that finally cracked my egg, and on May 12 of the following year, I started HRT. Best decision I ever made in my life (and I've made some good ones, such as going to NYU and not paying tuition for it)!
My point to all this? "Gender-confused" kids may very well be eggs waiting to crack at any moment. For me, it just took 41 years of my life.