ChatGPT Introduces Elevated Risk Labels Amid Rising Cyber Fraud Concerns
OpenAI has introduced new security features that alert users before their private data is exposed to hackers. These tools act as a digital shield, specifically designed to prevent data leaks.
OpenAI has added two new security features to ChatGPT, i.e., Lockdown Mode and Elevated Risk Labels. The aim of these tools is to prevent the stealing of user information by AI. In an era where the use of AI in digital payments, Aadhaar-based services, and online banking is growing exponentially, it is considered an imperative step. This is because, in recent times, cases of cyber fraud and information leaks have been growing continuously.
However, a new cyber threat known as prompt injection has been identified. In this case, a hacker embeds commands in a document. If a user asks the AI to read the document, it ends up revealing sensitive information. For instance, if you are requesting the AI, such as ChatGPT, to read content from a suspicious website and later request it to retrieve data from your system, it ends up revealing sensitive information.
This feature will warn the user in advance that the feature or web-connected tool being used could expose more data. For example, if ChatGPT is connecting to an external third-party website or app, it will clearly indicate the potential risks involved. This allows the user to decide whether to proceed with the chat with the AI.