
Blog Post | March 23, 2026

TRACKER: How the AI Industry’s Race To Dominance Has Harmed Children

Artificial Intelligence | Congressional Oversight | Consumer Protection | Tech

OpenAI, Google, Meta and xAI continue to rack up devastating scandals while the Trump administration idly watches.

In February 2026, the Pew Research Center asked America’s teens about their AI usage, and the findings caused concern amongst developmental psychologists. 

Over 60% of teens say they have used chatbots like ChatGPT, with roughly three-in-ten reporting daily use. Rather predictably, the most common uses are for entertainment or schoolwork. But the Pew survey also quantified another phenomenon: “16% of teens say they have used chatbots to have casual conversations, and 12% say they’ve used these tools to get emotional support or advice.”

This adds up to hundreds of thousands of teens turning to AI chatbots for social connection, a glaring regulatory lacuna, and tremendous concern from the nation’s parents. In a September 2025 poll from the Institute for Family Studies on this subject, the findings were clear: 90% of voters believe Congress should prioritize guardrails that protect children over the growth of the AI sector. Roughly the same proportion agree that AI companies should have a legal obligation to prioritize the best interests of their users when making design decisions.

But despite overwhelming bipartisan support for strict guardrails on AI-child interactions, the Trump administration’s regulatory agenda for AI is minimalist. Last summer, walking conflict of interest and White House AI Czar David Sacks put out Winning the Race: America’s AI Action Plan with Secretary of State Marco Rubio and Assistant to the President for Science and Technology Michael Kratsios. Winning the Race places a huge emphasis on just that: removing red tape anywhere and everywhere to ensure that America “wins” the race to AI dominance against geopolitical rivals like China. In March 2026, the White House released an updated National Policy Framework which claims to support attempts at protecting children from the externalities of AI chatbots. In reality, the framework is too vague to adequately address AI harms, and it encourages Congress to legislate in ways that would make it harder for states to take legal action against AI companies.

Some AI companies, such as Character.AI and Meta, have responded to lawsuits and public pressure with self-regulations like bans on minors and parental controls. In a vacuum, these are salutary developments, but it’s worth highlighting that they are also self-imposed and therefore flexible. In the future, companies may decide that their models are ‘sophisticated’ enough to risk the lives of more teens; indeed, Meta has said its ban on minors’ usage is only temporary.

To avert further harm to America’s minors, post-Trump regulators will need to seize the moment by creating lasting regulations with genuine legal penalties for companies which irresponsibly produce AI chatbots. If Democrats hope to build a governing coalition that can rein in Big Tech’s malign influence over society, they need to use every regulatory and legislative tool at their disposal.

Revolving Door Project is tracking how AI companies’ uninhibited growth is harming children and their families:


OpenAI

OpenAI’s key corporate sponsor is Microsoft, which claims exclusive rights to host OpenAI products like ChatGPT on its Azure web services. OpenAI recently penned a $50 billion deal with Amazon Web Services, over which Microsoft is considering legal action.

  • In August 2025, the mother of 16-year-old Adam Raine initiated a lawsuit against OpenAI, alleging that ChatGPT encouraged him to commit suicide. Raine started using ChatGPT for homework assistance in the fall of 2024, and before the end of the school year, he ended his own life. In his last interaction, he uploaded a photo of a noose to ChatGPT and asked: “Could it hang a human?”
    • In the intervening months, Adam came to consult ChatGPT—specifically the deferential and sycophantic GPT-4o model that OpenAI knowingly unleashed—for hours a day, eventually sharing suicidal ideations and other sentiments that would have caused alarm in a human interlocutor. Not only did ChatGPT engage with Adam on these subjects, it offered advice on how to increase the lethality of his suicide attempt. 
    • In the Raines’ ongoing case, OpenAI responded with a masterclass in victim blaming, arguing that the “Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
  • In November 2025, the Social Media Victims Law Center and the Tech Justice Law Project filed seven lawsuits in California state court against OpenAI and CEO Sam Altman over the GPT-4o model’s alleged role in wrongful death, assisted suicide, involuntary manslaughter, and other harms.
    • The bundle of cases includes a lawsuit on behalf of 17-year-old Amaurie Lacey, to whom ChatGPT allegedly acted as a “suicide coach.”

Google and Character Technologies

Character Technologies produces chatbots on the Character.AI platform, and was founded by two former Google engineers. Google is a heavy investor in the venture, reportedly paying $3 billion to license Character’s products and bring some of its top researchers back into Google’s offices.

  • In October 2024, Megan Garcia filed a wrongful death lawsuit after her 14-year-old son Sewell took his own life. In this case, Character.AI’s Game of Thrones chatbots flirted with Sewell, leading the teen to use the product for hours each day. When he expressed suicidal intent, the chatbot egged him on so that they could be together, and he took his own life in February 2024.
    • In January 2025, Google and Character Technologies reached a settlement with a group of family members, including Megan Garcia. Character produces chatbots which emulate both real people and fictional characters, and partnered with Google to obtain the licensing for famous characters like Harry Potter. 
    • Megan Garcia and Raine’s father, Matt, testified together before the Senate Judiciary Subcommittee on Crime and Counterterrorism in September 2025 on the dangers of AI chatbots.
    • In the wake of the Garcia lawsuit, Character.AI announced steps toward self-regulation by banning minors and rolling out parental controls.
      • Garcia, in an interview with CNBC, stated that these controls come about “three years too late,” adding, “I don’t think that they made these changes just because they’re good corporate citizens. If they were, they would not have released chatbots to children in the first place, when they first went live with this product.”
  • In January 2026, the State of Kentucky initiated a lawsuit against Character.AI for negligence in its promotion of chatbots which exposed minors to “sexual conduct, exploitation, and substance abuse.”

Meta

Meta AI offers standard conversational chatbot functions, as well as a text and image generator. Meta has also tried integrating AI functions into its other immensely popular products, including Instagram, Facebook, Messenger, and WhatsApp.

  • In August 2025, Reuters obtained an internal policy document from Meta showing that its AI products were approved to “engage a child in conversations that are romantic or sensual.” Only after Reuters’ reporting did Meta remove this policy—one which was initially approved by legal, public policy, and engineering staff and the company’s chief ethicist.
  • At a November 2025 hearing, a federal judge handed a win to New Mexico AG Raúl Torrez by deciding to include AI chatbots in a lawsuit first filed against Meta in 2023.
    • The initial lawsuit alleged that Meta failed to protect children from sexual content, solicitation, and human trafficking on its platforms.
    • In October 2025, amid that years-long litigation, Meta introduced a parental controls feature for its AI characters.
    • Since January 2026, Meta has had to pause access to its AI characters, but it plans to reintroduce the products as soon as possible.

xAI

Elon Musk’s company xAI produces Grok, a chatbot that’s been marketed as “anti-woke” and continuously marred by scandals.

  • In August 2025, Elon Musk’s xAI released a “Spicy” AI video mode for Grok without any attempt at limiting minors’ access. Now, xAI is facing multiple lawsuits over the mountain of child sexual abuse material that was made using Grok. The lawsuits claim xAI is responsible for the use of Grok to create millions of images which virtually “undress” women and girls without their consent.
  • A lawsuit filed in March 2026 by three teens in Tennessee alleges that a third-party chatbot built on Musk’s xAI technology was used to create nonconsensual nude and sexually explicit images and videos.

Image credit: “President Trump Meets with Mark Zuckerberg” by Trump White House Archived is marked with PDM 1.0.


More articles by Fletcher Calcagno
