OpenAI Sparks Privacy Debate: Human Reviewers Can Escalate Chats to Law Enforcement
OpenAI, ChatGPT Privacy Issues: If you assume your conversations on ChatGPT are completely private, that is not the case. OpenAI has acknowledged that in dangerous situations it can review your chats and, if necessary, report them to the police.
OpenAI made a significant announcement on its blog this week. The company acknowledged that user chats on ChatGPT are monitored, and that if a conversation involves violence or a plan to harm someone, it is escalated to a special review team. According to the company, if this team judges the threat to be serious and imminent, the information may be shared with law enforcement agencies.
This disclosure raises many questions, because until now many people believed their conversations with AI were private and safe. Critics argue that if human reviewers are interpreting the intent and tone of conversations, ChatGPT's claim to be a fully automated system is undermined.
Another major concern is how OpenAI determines users' locations so that emergency services can be notified. Experts warn this capability could be misused. In a "swatting" scenario, for example, someone could impersonate an innocent person and send fake violent messages, prompting police to raid that person's home.