OpenAI is introducing new parental controls for ChatGPT across both web and mobile platforms, following a lawsuit filed by the parents of a teenager who died by suicide. The lawsuit alleges that the AI chatbot provided guidance on self-harm methods. According to the company, the new feature allows parents and teens to opt into enhanced safety settings by linking their accounts. These controls are only activated once both parties accept the connection.
In a post on X, the Microsoft-backed company stated: “With these updates, parents can limit exposure to sensitive content, manage whether ChatGPT retains conversation history, and choose if chats are used to train OpenAI’s models.” OpenAI, which reports around 700 million weekly active users on its ChatGPT services, is also developing an age prediction system. This system aims to identify users under 18 in order to automatically apply safeguards tailored for younger audiences.
The new parental controls include options to set “quiet hours” that restrict access during specific times. Parents can also disable voice features and turn off image generation and editing tools, the company said. However, parents will not be able to view their teen’s conversation history. In exceptional cases—where safety reviewers or systems detect potential serious risk—OpenAI may notify parents, providing only the information necessary to help ensure the teen’s safety. Parents will also receive a notification if the teen chooses to unlink their account.
Last month, Meta also revealed new protections for teens across its AI offerings. The company plans to train its systems to steer clear of flirty content and conversations involving self-harm or suicide when interacting with minors, and will temporarily limit access to certain AI characters.