Florida Attorney General James Uthmeier has launched a formal investigation into OpenAI, citing concerns over national security, child safety, and a potential link to a recent mass shooting. The announcement came Thursday and was first reported by Reuters.
Uthmeier’s office is specifically alleging that OpenAI’s data and technology may be “falling into the hands of America’s enemies, such as the Chinese Communist Party.” That’s a serious accusation, and one made without detailed public evidence so far.
ChatGPT has been linked to criminal behavior involving child sexual abuse material and the “encouragement” of self-harm, according to Uthmeier’s statement. These aren’t new concerns in the AI safety space, but a state attorney general formally investigating them raises the legal stakes considerably.
The most explosive claim: ChatGPT may have been used to “assist” the suspect behind the April 2025 shooting at Florida State University. That alleged connection, if substantiated, would be the most direct link yet between a generative AI tool and a real-world violent crime on U.S. soil.
“OpenAI’s data and technology are falling into the hands of America’s enemies, such as the Chinese Communist Party.” — Florida AG James Uthmeier
State-level investigations into Big Tech have a track record of gaining momentum. What starts in one AG’s office frequently becomes a multi-state coalition — we’ve seen it with Google, Meta, and Apple over the past decade. If other Republican-led states pile on, OpenAI could be facing coordinated legal pressure well beyond what federal regulators have applied.
The national security angle is particularly notable. Framing an AI company as a potential vector for Chinese intelligence access is a politically potent argument right now. It also shifts the conversation from abstract AI ethics debates into concrete national security law territory, where the government has far broader authority to act.
OpenAI is simultaneously navigating its for-profit restructuring, a contentious relationship with Elon Musk, and now a state-level law enforcement investigation. The company has been rapidly expanding ChatGPT’s capabilities while its safety teams have faced internal criticism and high-profile departures.
The child safety allegations are particularly damaging from a public relations standpoint. Even if OpenAI can demonstrate its systems have guardrails in place, a formal inquiry from an attorney general puts the company in a defensive legal position it hasn’t had to manage before.
It’s worth noting what we don’t have yet: verified evidence connecting ChatGPT to the FSU shooting, specific documentation of the national security claims, or details on how the AG’s office plans to subpoena or compel information from OpenAI. Political investigations sometimes move fast on announcements and slower on substance.
Uthmeier is also a relatively new AG, appointed in early 2025 by Governor Ron DeSantis. Scrutinizing a San Francisco AI company fits neatly into a broader political narrative, which doesn’t necessarily mean the underlying concerns are invalid — but it’s context worth having.
This investigation is real, and the allegations are serious enough that OpenAI can’t simply issue a boilerplate statement and move on. The national security framing and the FSU shooting connection — if either holds up under scrutiny — could meaningfully accelerate federal-level regulatory action against the company.
Watch for two things: whether other states join Florida within the next 60 days, and whether OpenAI gets ahead of this with substantive transparency reports rather than PR responses. How the company handles the next few weeks will say a lot about whether it’s treating this as a genuine reckoning or just another news cycle to outlast.
Source: The Verge