Character.AI has updated its chatbots to prevent them from flirting with teenagers.

Character.AI Introduces Parental Controls and New Safety Features for Teen Users

Today, Character.AI announced that it will soon launch parental controls for teenage users. The company also detailed the safety measures it has rolled out over the past few months, including a separate version of its language model for users under 18. The announcement follows media scrutiny and two lawsuits alleging that the platform contributed to self-harm and suicide.

In a press release, Character.AI explained that it now maintains two versions of its model: one for adults and one for teens. The teen model places stricter limits on how bots can respond, particularly around romantic topics. It blocks sensitive or suggestive content more aggressively and also attempts to detect and block user prompts intended to elicit inappropriate material. If the system detects language related to suicide or self-harm, a pop-up directs users to the National Suicide Prevention Lifeline, a change previously reported by The New York Times. Minors will also no longer be able to edit bots' responses, a feature that previously let users rewrite conversations to include content Character.AI would otherwise block.

Beyond these updates, Character.AI is working on features that address the concerns about addiction, and about users mistaking the bots for real people, raised in the lawsuits. Users will receive a notification after spending an hour with the bots, and the old disclaimer that "everything characters say is made up" will be replaced with more detailed language. Bots labeled "therapist" or "doctor" will carry an extra note informing users that they cannot provide professional advice. When I visited Character.AI, every bot displayed a small message reading: "This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice." When I opened a bot named "Therapist," I saw a warning stating: "This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment."

Character.AI says the parental controls will arrive in the first quarter of next year. They will let parents see how much time their child spends on Character.AI and which bots they use the most. The company says it is making these changes in collaboration with several teen online safety experts, including the organization ConnectSafely.

Founded by former Google employees who have since returned to Google, Character.AI lets users chat with bots built on a custom-trained language model, ranging from life coaches to simulations of fictional characters, many of which are popular with teenagers. The site allows users aged 13 and older to create an account. The lawsuits argue that while some interactions with Character.AI are harmless, some underage users become unhealthily attached to the bots, with conversations that can drift into sexual topics or self-harm, and they fault Character.AI for not directing users to mental health resources when they discuss self-harm or suicide.

"We understand that our approach to safety must change as technology evolves," says the Character.AI press release. "These changes are part of our ongoing commitment to improve our policies and our product while ensuring a safe environment for creativity and exploration."




