ChatGPT: According to a research report published on Tuesday by cyber security firm Check Point, OpenAI’s ChatGPT, the large language model (LLM) based artificial intelligence (AI) text generator, can be used to generate code for nefarious operations. Researchers at Check Point used ChatGPT and Codex, another OpenAI natural-language-to-code generator, to turn plain English instructions into code that could be used in spear phishing attacks.
A major drawback of AI code generators is that their natural language processing (NLP) interfaces lower the barrier for malicious hackers to gain access to systems. Because the tools require no coding proficiency, anyone can gather the logical flow of a harmful tool from publicly available sources on the web and use a generator to produce the syntax for their own hostile programs.
Check Point demonstrated the problem by feeding the AI code generator a series of instructions written in plain English, taking the resulting phishing email scam from an initial, crude draft to a far more refined one. The demonstration suggests that any user with malicious intent could develop a comprehensive hacking campaign using these tools.
Check Point’s threat intelligence group head, Sergey Shykevich, noted that software like ChatGPT has the “potential to significantly transform the cyber threat landscape.”
“Both ChatGPT and Codex make it easy for hackers to experiment with dangerous code,” he said. Shykevich also warned that the advance of AI technologies is a worrying development because it reflects the rise of cyber capabilities that are both more sophisticated and more effective.
While open language models can also be used to build cyber defence technologies, there is no guarantee they will not be misused to produce malicious software, which is a cause for concern. Although ChatGPT’s terms of service explicitly forbid developing hacking tools on the platform, Check Point found no effective barriers in practice preventing it.
The potential for abuse of AI-powered language and image services is nothing new. Lensa, an AI-based image editing tool from US-based Prisma, also showed how the absence of filters for body image and nudity could enable privacy-violating images of an individual to be created without authorization.