Newsletter

The GUARD Act: Senate Just Voted Unanimously to ID-Check Every AI Chat in America

The vote was 22-0.

Not 22-1. Not 22-3. Twenty-two senators on the Judiciary Committee, Republican and Democrat, agreed on something. In 2026. That alone should make you pay attention.

The bill is the GUARD Act (Guidelines for User Age-verification and Responsible Dialogue), introduced by Josh Hawley (R-MO) and Richard Blumenthal (D-CT). On the surface, it bans “AI companions” for anyone under 18 and creates new criminal penalties for chatbots that solicit sexual content from kids or push them toward suicide. The motivation is real. Teenagers have died after conversations with Character.AI bots. Parents testified at the markup. The harms exist.

The problem is what the bill actually does on its way to fixing them.


What the GUARD Act Says vs What It Means

Read the press release and you get one story. Ban AI companions for minors. Make companies disclose their bots aren’t human. Hold them accountable for grooming or self-harm content. Sounds reasonable.

Read the bill text and the Reason coverage from yesterday, though, and you get a different story.

The GUARD Act defines “AI companion” as any AI system that “provides adaptive, human-like responses.” That is every chatbot. ChatGPT, Claude, and Gemini all give adaptive human-like responses. So do Perplexity, Copilot, and every other major chatbot. That is what they do. That is the entire product.

So the bill technically requires age verification before you can use ANY of them. Your government ID. Or biometric scan. Or financial institution data. Every conversation. Every time.

The Definition Problem

The senators behind the bill claim it only targets “companion” chatbots, not general assistants. Hawley said as much during the committee markup. The Hill quoted him saying the bill prevents AI chatbots that engage with minors from pushing sexually explicit material. That is the small version. The version designed for the press release.

But the bill text doesn’t make that distinction. The Electronic Frontier Foundation flagged it immediately, noting that companies facing legal uncertainty and serious liability won’t parse fine distinctions. They’ll just restrict access, cut features, or block minors entirely.

The actual outcome is predictable. Not safer chatbots. Just chatbots that ID-check everyone or refuse to talk to teenagers at all.


The Honeypot Problem Nobody Wants to Talk About

Even if you ignore the speech and access concerns, there’s a practical issue that keeps getting waved off in committee discussions.

To verify ages, every AI company in America would need to collect government IDs, biometric data, or financial records from every user. Every. User. NetChoice’s Amy Bos called this out in her statement before the markup. Vague provisions like this force AI companies to collect mountains of sensitive personal data, creating exactly the kind of honeypot databases that cybercriminals love.

She’s right. And we know how this ends. We have watched this exact movie with social media age verification laws in Texas, Mississippi, and Utah. The data gets collected. The data gets breached. The breach gets reported six months later. Nobody resigns. Nothing changes.

Now imagine that, but with the conversational logs of every American who has ever talked to an AI. Some of those conversations are deeply personal. Mental health. Health concerns. Relationship struggles. Career questions. Tied directly to your verified government ID by law.

The bill doesn’t just create the honeypot. It mandates it.


The Real Harms Are Real

Here’s the hard part. The senators behind this bill aren’t making up the problem.

There are documented cases of teens being groomed by Character.AI bots. Cases of bots encouraging self-harm. The lawsuits are real. The settlements are real. Common Sense Media research found that three in four teens already use AI companions, and the platforms are not designed with kid safety in mind.

That is a problem that needs solving. The pattern of AI rolling out into sensitive areas without proper guardrails isn’t new, either. We saw it earlier this year when Legion Health got Utah approval to let an AI handle psychiatric medication renewals, which exposed the same tension: a real access problem, and AI deployed to solve it without adequate safeguards.

But the GUARD Act’s solution is so broad it sweeps in everything else, including the legitimate uses that benefit teenagers. A February 2026 Pew survey found over half of US teens use chatbots for homework help. They use them as tutors, language partners, and brainstorming tools. The bill, in its broadest reading, would ban or heavily restrict all of it.

Sen. Ted Cruz (R-TX) actually flagged this during the markup. He voted yes but said the bill needed “some revisions” because he was worried it would completely ban all AI chatbots for minors. Hawley’s response was to insist that’s not what the bill does. The bill text suggests Cruz read it correctly.

Meanwhile, Sen. Alex Padilla (D-CA) raised the privacy concerns and also voted yes. The general posture seems to be: pass it now, fix it later, don’t be the senator who voted against the bill that was named after children.


What the GUARD Act Could Have Solved Differently

Still, there’s a better version of this conversation. It just isn’t politically easy.

Real chatbot safety for kids would target the platforms specifically designed for emotional or romantic role-play. Character.AI. Replika. The actual companion apps. Not every AI system that responds in complete sentences.

The GUARD Act gets some pieces right. Mandated disclosure of non-human status is genuinely useful. Prohibitions on therapy or medical role-play address one of the more dangerous failure modes. Penalties for platforms whose models engage in sexual content with minors are where the bill is at its strongest.

What it does NOT need is universal ID verification for every AI conversation in America.

The bill’s good parts are good. The bill’s bad parts could have been left out. Including them turned this from “regulate the bad actors” into “build the infrastructure for nationwide internet ID checks.” That second goal has been on the wishlist of various advocacy groups for over a decade. The GUARD Act is the vehicle that finally moves it.

This isn’t an accident. Hawley has been working toward online age verification at the federal level since 2023. So has Blumenthal. The companion bots are the moral hook. The ID verification is the outcome.


Where This Goes

The bill heads to the Senate floor next. The companion bill in the House from Reps. Blake Moore (R-UT) and Valerie Foushee (D-NC) means there’s a real path to law. The 22-0 committee vote signals it has bipartisan momentum that most tech regulation never gets. That kind of unanimous bipartisan velocity is rare. The closest comparison is the ongoing Pentagon vs Anthropic federal court fight, where the government’s appetite for fast AI policymaking keeps running ahead of the actual technical and legal questions.

If it passes, expect three things. Companies will roll out aggressive age gates that frustrate adult users. Minors will route around them in roughly 30 seconds, the same way they do with porn site age verification. And the data breach in 2027 or 2028 that exposes millions of verified IDs tied to chat logs will be one of the worst privacy disasters in American history.

We’ve already seen the trailer for this movie. The Texas age verification law for porn sites caused most major sites to block Texas entirely rather than comply. The verification systems that did get built were breached almost immediately.

There’s a working model for handling AI safety with kids. It involves engaged parents, informed teachers, transparent platform design, and targeted regulation of the worst actors. It does not involve building a national ID-verification database tied to every conversation an American has with software.

The senators voting on this bill seem aware of the tradeoffs. They’re voting yes anyway because saying no on the bill named after dead children is politically impossible. That’s the real story here. Not whether the bill is good or bad, but whether anyone in Washington is willing to say no to a bill that is unanimously sponsored, well-intentioned, and structurally terrible.

So far, twenty-two of them said yes. The Senate floor is next.