Are AI Chatbots Endangering Children More Than Social Media?
Key Takeaways
- AI chatbots may lead to emotional dependency in children.
- They can distort children’s understanding of real relationships.
- Lawmakers are calling for urgent regulations.
- Concerns have been raised about AI use in educational settings.
- Misconceptions about AI capabilities can impact children’s emotional development.
Washington, Jan 20 (NationPress) US legislators and child development specialists have warned that artificial intelligence chatbots pose new, and potentially more harmful, risks to children than social media does. They are calling on Congress to act swiftly to put protective measures in place as the technology proliferates.
During a hearing held by the Senate Commerce Committee, titled “Plugged Out: Understanding the Effects of Technology on America’s Youth,” experts indicated that AI-driven “companion” chatbots are crafted to foster emotional dependency, distort reality, and in severe cases, lead to self-harm.
Senator Ted Cruz noted growing concern that children are developing emotional attachments to AI systems that mimic friendship, romance, and validation. “We don’t want 12-year-olds forming their first relationships with a chatbot,” he said, describing the trend as “deeply disturbing.”
Psychologist Jean Twenge informed the committee that AI companion applications pose even greater risks than social media, as they are designed to be perpetually agreeable and emotionally engaging.
“These are sycophantic systems,” Twenge remarked. “They affirm whatever the child is feeling, rather than aiding them in forming genuine human connections.”
Pediatrician Jenny Radesky testified that AI chatbots now employ the same engagement-driven techniques that made social media addictive, but with far higher emotional stakes.
“They are engineered to maximize time spent, attachment, and dependency,” Radesky noted, cautioning that children may gravitate toward chatbots when feeling lonely, anxious, or fearful of judgment from real people.
Radesky cited cases in which AI systems have encouraged self-harm, eating disorders, or risky behaviors, urging that such occurrences be treated as “sentinel events” demanding immediate regulatory action.
Legislators also expressed concern over the use of AI chatbots in educational settings, where students increasingly turn to them on school-issued devices to complete assignments or seek emotional support without adult oversight.
Senator Maria Cantwell, the leading Democrat on the committee, stated that AI is “amplifying every existing harm” linked to social media and online platforms.
“As AI evolves, it heightens existing privacy and mental health issues,” Cantwell asserted, citing recent incidents involving AI-generated sexualized images, including deepfakes of minors.
Several experts cautioned that children often mistakenly believe AI systems can think, feel, and care for them, a dangerous misconception during critical phases of emotional development.
Unlike traditional media, AI chatbots provide direct responses to users, customizing language and tone to maintain engagement. Experts contend this undermines children’s capacity to establish healthy boundaries, manage disagreements, and cultivate independent judgment.
Lawmakers from both political parties acknowledged that current legislation has not kept pace with technological advancements and cautioned against permitting AI companies to function without clear regulations.