
Can ChatGPT help someone with no coding skills build ransomware?

There’s no doubt that generative AI is having a significant impact on the cyber security ecosystem, with software vendors, security teams, and threat actors locked in an AI arms race as they look to gain an edge.

Take ransomware, for example: the cost of attacks is estimated to have exceeded $1 billion for the first time last year. In its recently published Digital Defense Report, Microsoft underlined the scale of the threat, revealing a 2.75x year-over-year increase in ransomware attacks.

The industry consensus is that threat actors are already using AI tools to boost the sophistication and volume of their attacks and to lower the barriers to entry for cybercriminals who lack coding skills. This commoditisation of ransomware could – in theory – be the catalyst for a massive new wave of ransomware that puts security teams under even more pressure.

An experimental approach

But how real are the risks, and is it really practical for someone with limited coding capabilities to use AI to produce effective ransomware? 

To put this to the test, I designed an experiment using the standard version of ChatGPT to see whether basic modules could be built for a ransomware toolkit capable of exfiltrating data from a third-party network. ChatGPT has safeguarding controls in place to prevent this kind of activity, so asking it to help create legitimate encryption and exfiltration tools offered a way of assessing what could be achieved.

ChatGPT started by suggesting I use Rust, a cross-platform systems programming language. From there, the process involved entering carefully worded prompts to steer it towards the kind of output I was looking for. Throughout the process, my AI helper suggested useful refinements, and it took only 30 minutes to create a client/server tool that would recursively encrypt files and exfiltrate them to a designated listening server.

The threat actor persona

The next stage was to ask ChatGPT to adopt the persona of a threat actor who was ‘man-in-the-middling’ our connection, and to assume the decryption key had been compromised. Its suggestion was to break each file into chunks, send them to the server in random order, and reassemble them on the server end, making the transfer harder to piece together if intercepted.
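
To make the chunk-and-reassemble idea concrete in the abstract, here is a minimal Rust sketch of the pattern: it simply splits a local file into fixed-size chunks and stitches an indexed set of chunks back together. This is my own illustrative code rather than anything ChatGPT produced; the file names and the 64 KiB chunk size are arbitrary assumptions, and there is deliberately no encryption or networking involved.

```rust
use std::fs::File;
use std::io::{self, Read, Write};

/// Split a file into chunks of up to `chunk_size` bytes, held in memory.
fn split_into_chunks(path: &str, chunk_size: usize) -> io::Result<Vec<Vec<u8>>> {
    let mut file = File::open(path)?;
    let mut chunks = Vec::new();
    loop {
        let mut buf = vec![0u8; chunk_size];
        let n = file.read(&mut buf)?;
        if n == 0 {
            break; // end of file reached
        }
        buf.truncate(n); // keep only the bytes actually read
        chunks.push(buf);
    }
    Ok(chunks)
}

/// Rebuild the original file from (index, chunk) pairs, whatever
/// order they arrive in: sort by index, then write sequentially.
fn reassemble(chunks: &[(usize, Vec<u8>)], out_path: &str) -> io::Result<()> {
    let mut ordered: Vec<&(usize, Vec<u8>)> = chunks.iter().collect();
    ordered.sort_by_key(|c| c.0);
    let mut out = File::create(out_path)?;
    for (_, data) in ordered {
        out.write_all(data)?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // "input.bin", "copy.bin" and the 64 KiB chunk size are arbitrary
    // placeholders for this illustration.
    let chunks = split_into_chunks("input.bin", 64 * 1024)?;
    let indexed: Vec<(usize, Vec<u8>)> = chunks.into_iter().enumerate().collect();
    reassemble(&indexed, "copy.bin")
}
```

The only design point worth noting is that each chunk has to carry its index; otherwise the receiving end has no way to restore the original order.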

To add further sophistication, I asked what could be done if a firewall blocked our chosen exfiltration protocol. The answer, according to ChatGPT, was to offer a choice of HTTP, DNS or the tool’s own custom protocol, so the user could select whichever suited their network’s requirements.
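
Purely to illustrate what ‘giving the user a choice of protocol’ looks like in code, here is a minimal, hypothetical Rust sketch of the selection plumbing; the enum and CLI handling are my own assumptions rather than anything ChatGPT produced, and none of the transports themselves are implemented.

```rust
use std::str::FromStr;

/// The three transfer options described above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Protocol {
    Http,
    Dns,
    Custom,
}

impl FromStr for Protocol {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "http" => Ok(Protocol::Http),
            "dns" => Ok(Protocol::Dns),
            "custom" => Ok(Protocol::Custom),
            other => Err(format!("unknown protocol: {other}")),
        }
    }
}

fn main() {
    // Hypothetical usage: the first CLI argument names the protocol,
    // defaulting to HTTP if none is given.
    let arg = std::env::args().nth(1).unwrap_or_else(|| "http".to_string());
    match arg.parse::<Protocol>() {
        Ok(p) => println!("selected protocol: {p:?}"),
        Err(e) => eprintln!("{e}"),
    }
}
```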

At this point, we were getting towards a working tool, but I asked ChatGPT to suggest any further enhancements that would make our data less vulnerable to interception. Its suggestion was an optional mode that interleaved chunks from different files at random and randomised the protocol per chunk, making the traffic less predictable.

Throughout this process, ChatGPT was outputting code that I was careful not to correct myself. Instead, I relied on a lengthy series of prompts to guide it towards a more effective tool. It was laborious, but the eventual outcome was that ChatGPT created ransomware capable of thwarting both incident responders and basic intrusion detection tools. While this required an understanding of how programming languages behave, it did not rely on the user having actual coding skills.

Fighting back

Ultimately, this was a useful and interesting experiment, and based on what ChatGPT produced, it’s fair to say that GenAI can lower the barrier to entry for cybercriminals, especially those focused on mass-producing ransomware.

Clearly, the big question this raises is what organisations and their security teams can do to address this new area of risk. Part of the challenge is that this kind of activity will be very difficult to stop at source: self-regulation can only go so far, and limiting functionality would undoubtedly blunt GenAI’s overall capabilities.

This leaves us back in a familiar place: organisations must continue to take a proactive stance in their efforts to prevent ransomware from breaking through their defences and, if it does, to successfully mitigate the impact. This means using advanced AI-powered cyber security tools to improve detection and protection, backed by tried-and-tested processes proven to prevent breaches. In an environment where the risks from ransomware are evolving rapidly, organisations must combine AI-driven defences with robust human oversight to stay a step ahead in the escalating cyber security arms race.

Andy Swift
Technical Director of Cyber Security Assurance at Six Degrees
