Researchers make ChatGPT generate malicious code


We know that the popular ChatGPT AI bot can be used to message Tinder matches. It can also turn into a deluxe version of Siri or get basic facts completely wrong, depending on how it's used. Now, someone has used it to make malware.

In a new report from the security firm CyberArk (reported by InfoSecurity), researchers found that ChatGPT can be tricked into writing malicious code for you. Even worse, the resulting malware can be difficult for cybersecurity systems to deal with.

The full report goes into all the technical subtleties, but in the interest of brevity: it's all about how you phrase the prompts. ChatGPT has content filters that are supposed to prevent it from serving anything malicious to users, such as malicious computer code. CyberArk ran into those filters early on, but found a way around them.

Basically, all they did was forcefully demand that the AI follow very specific rules (show code without explanations, don't be negative, etc.) in a text prompt. After that, the bot happily spat out malware code as if it were perfectly fine. Of course, there are a lot of extra steps (the code has to be tested and validated, for example), but ChatGPT was able to get the ball rolling on writing code with malicious intent.

So, you know, watch out for that, I guess. Or just get off the grid and live in the woods. I’m not sure what’s best, at this point.
