‘Age-appropriate standards’

Senators propose bill to protect teens from AI

Bruce Perry, 17, demonstrates the possibilities of artificial intelligence by creating an AI companion on Character.AI, July 15, 2025, in Russellville, Ark. (AP Photo)

Bipartisan legislation aimed at protecting children from the potential harms of artificial intelligence has been proposed in the state Senate.

The state Senate Majority Policy Committee recently hosted a public hearing led by Sen. Tracy Pennycuick, R-Red Hill. Several of those testifying praised Senate Bill 1050, sponsored by Pennycuick and fellow Republican Sens. Scott Martin and Lisa Baker, which would require mandated reporters to report all instances of child sexual abuse material, including AI-generated images, when they become aware of them. The bill is also supported by Democratic co-sponsors Amanda Cappelletti, Art Haywood, Nick Miller and Jay Costa.

“Sadly, today our children are being targeted in new ways that weren’t even possible just a few years ago. AI-generated CSAM, deepfake pornography, AI chatbots creating harmful content and online predators using technology to manipulate and exploit – this is the new reality,” said Pennycuick. “It’s imperative that we teach children safe internet practices so they can decipher what’s real and what’s fake. Today’s public hearing helped us identify what new guardrails are needed to ensure our kids are safe online.”

Senate Bill 1050 would require operators to issue a clear and conspicuous notification indicating that the AI companion is artificially generated and not human.

The state lawmakers are also attempting to create suicide and self-harm safeguards. Operators of artificial intelligence bots would be required to maintain and implement a protocol, to the extent technologically feasible, to prevent an AI companion on their platforms from producing content involving suicidal ideation, suicide or self-harm, or content that directly encourages the user to commit acts of violence. The protocol would include providing a notification that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide or self-harm.

Sen. Tracy Pennycuick, R-Red Hill, is pictured during a recent Senate Majority Policy Committee hearing.

For users an operator knows or should suspect are minors, operators of artificial intelligence sites would be required to disclose to the user that they are interacting with artificial intelligence and not an actual human being; provide by default a clear and conspicuous notification at least once every three hours during continuing interactions reminding the user to take a break and that the AI companion is artificially generated and not human; and institute reasonable measures to prevent the AI companion from producing visual material of sexually explicit conduct or directly instructing the minor to engage in sexually explicit conduct.

“In the near future, we will introduce legislation to establish commonsense safeguards for AI chatbots accessible by minors in Pennsylvania,” Pennycuick and Miller wrote in their co-sponsorship memorandum. “As these tools become more common in classrooms, on smartphones and across social platforms, our laws must keep pace to prevent avoidable tragedies. Recent heartbreaking stories have come to light of vulnerable individuals, including minors, who have used AI chatbots to cope with trauma, mental health, depression, and anxiety. Unfortunately, some of the responses they received have contributed to reported incidents of self-harm or even suicide.”

The state legislation comes after a Common Sense Media survey found 31% of teens said their conversations with AI companions were “as satisfying or more satisfying” than talking with real friends. Even though half of teens said they distrust AI’s advice, 33% had discussed serious or important issues with AI instead of real people. The nonprofit analyzed several popular AI companions in a “risk assessment,” finding ineffective age restrictions and that the platforms can produce sexual material, give dangerous advice and offer harmful content. The group recommends that minors not use AI companions.

Researchers and educators worry about the cognitive costs for youth who rely heavily on AI, especially to their creativity, critical thinking and social skills. The potential dangers of children forming relationships with chatbots gained national attention last year when a 14-year-old Florida boy died by suicide after developing an emotional attachment to a Character.AI chatbot.

“These incidents, while not the norm, highlight the very real risk when vulnerable users receive unsafe, unvetted responses from persuasive AI systems,” Pennycuick and Miller wrote. “Experts have also been raising alarms. A recent risk assessment warns that AI ‘companion’ bots can exacerbate mental health problems for kids, including risks related to self-harm. Clinical commentators have likewise flagged the dangers posed by unrestricted use of chatbots, where the tool itself can worsen a user’s condition, and called for stronger guardrails. Our bill takes a focused approach centered on child safety and prevention. The bill will establish clear, age-appropriate standards for chatbots that minors interact with; require robust safeguards to prevent content generation that encourages self-harm, suicide or violence against others; and directs users to appropriate self-harm crisis resources whenever high-risk language is detected.”
