The development of generative AI has given threat actors new ways to extend their malicious operations. First came WormGPT, a tool hackers were employing to target victims. Now, a source has indicated that a group of cybercriminals is offering another dangerous artificial intelligence (AI) tool, called FraudGPT, on several dark web marketplaces as well as on Telegram channels. According to the source, the hackers have been charging a subscription fee of $200 per month, $1,000 every six months, or $1,700 annually.
In addition, FraudGPT is said to be capable of writing harmful code, developing malware designed to evade detection, and locating security flaws in a targeted application. According to the source, there have already been over 3,000 confirmed sales and reviews of the malicious generative AI tool. However, the specific large language model (LLM) used to build it is not known at this time.
Beyond enabling experienced hackers to carry out more sophisticated attacks, such tools may also serve as a launching pad for inexperienced actors, who can use them to run large-scale phishing campaigns and compromise business email systems, both of which can result in the loss of valuable data.
Although organizations can build ChatGPT (and similar technologies) with ethical safeguards in place, it is not difficult to reimplement the same technology without such precautions. This makes it even more necessary to implement a defense-in-depth strategy, drawing on all readily available security telemetry for fast analytics, in order to spot these fast-moving threats before a phishing email can turn into ransomware or data exfiltration.
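To make the telemetry-analytics idea concrete, the sketch below shows a few coarse phishing heuristics that a defense-in-depth pipeline might apply to email metadata before delivery. This is a minimal illustration, not a reference to any specific product: the field names, keywords, and score weights are all assumptions chosen for the example.

```python
import re

# Hypothetical illustration: crude phishing indicators scored against one
# email record. Field names and weights are assumptions for this sketch.

URGENT_KEYWORDS = {"urgent", "verify", "password", "invoice", "wire transfer"}

def phishing_score(message: dict) -> int:
    """Return a rough risk score for one email telemetry record."""
    score = 0
    sender = message.get("from", "")
    reply_to = message.get("reply_to", "")
    subject = message.get("subject", "").lower()
    links = message.get("links", [])

    def domain(addr: str) -> str:
        # Take everything after the last "@" as the domain.
        return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

    # A Reply-To domain that differs from the From domain is a common spoofing sign.
    if reply_to and domain(reply_to) != domain(sender):
        score += 2

    # Urgency language is a classic social-engineering cue.
    score += sum(1 for kw in URGENT_KEYWORDS if kw in subject)

    # Links pointing at a bare IP address instead of a hostname are suspicious.
    if any(re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url) for url in links):
        score += 3

    return score

suspect = {
    "from": "billing@example.com",
    "reply_to": "attacker@evil.test",
    "subject": "URGENT: verify your password",
    "links": ["http://203.0.113.7/login"],
}
print(phishing_score(suspect))  # domain mismatch (2) + 3 keywords + IP link (3) = 8
```

In a real deployment these signals would feed a SIEM or mail gateway alongside many other telemetry sources, with messages above a threshold quarantined for review rather than scored in isolation.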