- ChatGPT’s new ‘Trusted Contact’ feature is designed to help during serious conversations.
- A user’s trusted contacts can be alerted if a threat is detected.
- While protecting privacy, the AI will point users toward real human help.
- The feature is a supplement to professional assistance, not a substitute.
ChatGPT New Feature: ChatGPT, OpenAI’s artificial intelligence chatbot, has been the subject of debate for some time, especially after allegations that the AI could not handle sensitive conversations about suicide and self-harm properly. Following this controversy, OpenAI has introduced a new safety feature for ChatGPT called Trusted Contact. It is designed specifically to help people going through mental stress, depression, or emotional distress.
What is the Trusted Contact feature?
Trusted Contact is an optional safety feature available to users aged 18 and above. Users can add a trusted person, such as a family member, close friend, or caregiver. If ChatGPT detects during a conversation that the user is struggling with serious thoughts of self-harm or suicide, the feature can send an alert to that trusted person. According to OpenAI, its purpose is to prevent people from feeling alone in times of crisis and to encourage them to connect with a real person.
How will this system work?
The Trusted Contact feature works in several steps. First, the user goes to ChatGPT’s settings and adds a trusted person; the feature becomes active only once that person accepts the invitation. If the AI system then detects signs of serious danger in a conversation, ChatGPT will first prompt the user to talk to their trusted contact.
Conversation starters may also be shown so the user can begin that conversation more easily. A specially trained human review team will then assess the situation, and if they judge the risk to be serious, an alert can be sent to the trusted contact via email, message, or app notification.
Will your chats remain private?
OpenAI has clarified that alerts sent to a trusted contact will not include the details of the user’s private chats. The notification will simply state that the user may have had a worrying conversation related to self-harm, and the contact will be advised to reach out to them. In other words, the company says the feature has been designed with the user’s privacy in mind.
AI will not replace professional help
The company has also made clear that the Trusted Contact feature is not a substitute for mental health professionals or emergency services. As before, ChatGPT will continue to provide helpline numbers, crisis support, and advice on seeking professional help when needed. OpenAI says the feature was developed with input from mental health experts, doctors, suicide prevention organizations, and the American Psychological Association.
Along with technology, human support is also necessary
People today increasingly share their personal problems with AI chatbots, and features like Trusted Contact show that tech companies are taking mental health safety more seriously. Still, experts note that while AI can certainly help, no machine can fully replace real human companionship and professional assistance in difficult times.
