Hackers are using ChatGPT to create malware, company confirms
OpenAI: A few days ago, a report claimed that hackers were using ChatGPT to create malware. At the time, OpenAI did not comment, but the company has now confirmed that hackers are indeed using ChatGPT to write malware code.
ChatGPT became hugely popular within a short time of its launch, and many other AI tools are now being released continuously. Experts have warned from the start that such tools can be misused on a large scale, and it is now clear that ChatGPT is among those being misused.
Cybercriminals appear to be increasingly misusing OpenAI's ChatGPT model to generate malware, spread misinformation, and carry out other malicious activities such as spear-phishing.
A new report sheds light on how OpenAI has disrupted more than 20 fraudulent operations around the world since the beginning of 2024. That, in turn, has raised concerns about the growing misuse of AI by bad actors to create and debug malware, produce content for fake social media personas, and craft convincing phishing messages.
OpenAI says its mission is to ensure its tools are used for the benefit of humanity. The company says it is focused on identifying, preventing, and disrupting attempts to use its models for harmful ends.