Wiregrass Parents: Study Claims AI Chatbot Gives Dangerous Advice to Teens

DOTHAN, AL (WDNews) — Researchers say ChatGPT, the popular AI chatbot, may give teens dangerous advice on subjects such as drug use, eating disorders, and suicide. The new watchdog study is alarming parents in the Wiregrass and nationwide.

To conduct the study, researchers at the Center for Countering Digital Hate posed as 13-year-olds while chatting with ChatGPT. They report that, despite initial warnings, the bot provided detailed or dangerous instructions in more than half of 1,200 exchanges. The Associated Press reviewed more than three hours of these conversations.

In one troubling example, the chatbot wrote personalized suicide notes for the fictitious teen personas and offered guidance on extreme dieting and drug use. According to researchers, simply claiming the advice was for a friend or a class presentation was often enough to get the AI to produce harmful answers.

ChatGPT’s creator, OpenAI, said it is working to improve how the bot handles sensitive topics, including recognizing signs of emotional distress. The company did not, however, directly address the study’s findings.

The growing use of chatbots by teens is another concern. More than 70% of teenagers have turned to AI chatbots for conversation or emotional support, according to a recent Common Sense Media study, and some experts warn the trend could lead to unhealthy dependence.

Although OpenAI says its product is not intended for children under 13, age checks are easily circumvented. The study’s findings are troubling for Wiregrass parents and educators who may not realize how much AI chat programs can influence younger users.

Call or text the national crisis line at 988 if you or someone you know is experiencing suicidal thoughts.
