In recent times, one AI tool has managed to challenge the way millions of us think and work, becoming the fastest growing consumer application in history. This tool is called ChatGPT. ChatGPT is an AI-powered tool that can interact with users in a conversational way to answer questions and perform tasks. It has a plethora of use cases, including drafting full essays, translating text, and even writing and debugging code.
However, the rise of ChatGPT has also raised concerns among cyber security experts. They worry about how malicious actors can abuse it to craft convincing and targeted phishing emails, write malicious code, and spread misinformation.
While it is possible for cyber criminals to abuse ChatGPT, cyber security experts can also take advantage of its natural language processing capabilities in their security investigations. In a recent security investigation, we used ChatGPT to communicate with potential scammers.
To do this, we created specific system prompts for our ChatGPT-powered chat bots. We provided specific information for the chat bots’ profiles, including name, age, job, and location. We also instructed ChatGPT not to admit that it was a chat bot or a chat assistant when asked.
Once we were confident with our chat bot’s custom system prompt, we created multiple chat bot profiles so that we could simultaneously converse with multiple potential scammers. This allowed us to gather vital information that propelled our research forward, such as links to fake brokerage sites and access to scammers’ crypto wallets.
There are many benefits to using generative AI tools like ChatGPT in cyber security investigations. Security experts can capitalize on the AI assistance’s ability to craft personalized chat messages, saving them time and resources. With the proper system prompts, ChatGPT can assist security researchers in obtaining pertinent information that can be used by authorities to help victims.
However, users should be wary of leaving ChatGPT to its own devices without proper prompts or instructions. It can be difficult to lure scammers into giving critical information without guidance.
In the years to come, it’s highly likely that generative AI tools like ChatGPT will be intrinsic to various industries. While these advanced technologies can be abused for ill gain, they can also, and more importantly, be used to do good.