Is the cyber security sector unprepared for AI?

Generative AI has taken the world by storm, but there’s still some confusion over its applicability in a cyber security context and the extent to which it might be taken up and used for harm by threat actors. It also turns out that perceptions vary across the security workforce, with AI perceived differently depending on job role. Those tasked with active defence, for instance, appear less fazed than the C-suite. 

Overall, only 46% of cyber security professionals believe they have grasped the impact of AI on cyber security, according to our survey of 205 IT security decision makers conducted in August 2023. Almost a third stated outright that they did not understand the technology. Information security analysts were the most confident, followed by systems analysts and CTOs. Worryingly, CIOs appear to have the least understanding of the job roles questioned, with 42% admitting as much.

Leadership is lacking

Such trepidation at a senior level may affect how proactive the business is when it comes to addressing AI: how urgently it is translated into actionable security policies, how user education is prioritised, and how AI-enabled tools are assessed and invested in. For these reasons, it’s imperative that the sector familiarise itself with the technology to identify where and how it can be used effectively, whether there is a need for upskilling with regard to prompting, and how potential risks can be contained. 

The unknowns associated with AI are undoubtedly playing on minds, with 61% of those questioned in the survey apprehensive about the increase in AI use. Again, CIOs trailed here in comparison, with 55% saying it was cause for concern. The majority (60%) also believed that AI is already increasing the number of cyber security attacks organisations are subjected to.

Despite these worries over the technology’s meteoric rise, 73% agreed that AI is becoming an increasingly important tool for security operations and incident response. The consensus was that incident response would become faster and more accurate as a result, presumably because AI can analyse data at scale, identify threats in real time and propose possible courses of action in response to its findings. 

Recognising the potential

More than two-thirds (67%) also believed that using AI improves the efficiency of cyber security operations as AI can automate routine tasks, allowing cyber security professionals to focus on more complex and strategic aspects of their work. This is a fair deduction given that machine learning tools are already widely used today to identify potential threats and respond quickly to any attacks. For example, many security companies are using machine learning algorithms to identify patterns in network traffic and flag any suspicious activity. 

These tools are particularly effective when combined with the expertise of trained security professionals, who can quickly investigate any potential threats and take appropriate action. In fact, AI is likely to prove beneficial in a threat hunting context because it’s becoming increasingly good at learning what’s normal for a specific environment. That makes life harder for attackers, who now need to tailor malware to individual environments in order to evade detection. 
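To make the baselining idea concrete, below is a minimal sketch of the kind of unsupervised anomaly detection such tools rely on, using scikit-learn’s IsolationForest. The per-connection features, data and thresholds are illustrative assumptions for the sake of the example, not any vendor’s actual model.

```python
# Minimal sketch: learn a per-environment baseline of "normal" network
# activity and flag outliers. Features and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: bytes sent, bytes received,
# duration (seconds), destination port. Real tools use actual flow logs.
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 30, 443],
    scale=[1_000, 5_000, 10, 1],
    size=(1_000, 4),
)

# Fit the baseline on traffic observed in this specific environment.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new connections; -1 marks anomalies relative to the local baseline.
new_connections = np.array([
    [5_200, 19_500, 28, 443],       # looks like routine HTTPS traffic
    [900_000, 1_200, 3_600, 4444],  # large, long upload to an odd port
])
for row, label in zip(new_connections, detector.predict(new_connections)):
    status = "ANOMALY - escalate to analyst" if label == -1 else "normal"
    print(row, status)
```

In practice a model like this would be trained on real traffic from the environment in question, and its alerts triaged by an analyst, reflecting the point above that these tools work best alongside trained professionals.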

Yet it’s also worth pointing out that a quarter of those questioned were firmly on the fence about AI’s contribution to security, neither agreeing nor disagreeing that it improves the efficiency of cyber security operations. This suggests that many continue to believe that, irrespective of the tools used, there is no substitute for experienced personnel adept at interpreting the results and choosing the best course of action.

The risks of ungoverned AI

What is clear is that the sector can’t afford to rest on its laurels. Unsanctioned generative AI is already being used by employees, presenting considerable risk to the organisation. Another survey back in February found 68% of those questioned were using AI at work without telling their boss. Moreover, the research report ‘Revealing the True GenAI Data Exposure Risk’ found 15% of employees paste data into ChatGPT, with 6% admitting to doing so with sensitive company data and 4% doing so on a weekly basis. Source code, internal business information and personally identifiable information (PII) were all being uploaded, with the worst offenders being R&D, Sales & Marketing, and Finance. That data is then at heightened risk of exposure and is presumably being used by the technology to inform its replies to other users.
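One way to contain this risk is to screen outbound prompts for sensitive content before they reach an external AI service. The sketch below is a deliberately simple illustration using regular expressions; the patterns, including the hypothetical API key format, are assumptions, and real data loss prevention tooling is considerably more sophisticated.

```python
# Simple illustration: screen a prompt for obvious PII or secrets before it
# is sent to an external generative AI service. Patterns are illustrative;
# production DLP uses far richer detection than these regexes.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key (hypothetical format)": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarise: contact jane.doe@example.com, card 4111 1111 1111 1111"
findings = screen_prompt(prompt)
if findings:
    print("Blocked - prompt contains:", ", ".join(findings))
else:
    print("Prompt is clear to send")
```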

Equally, we don’t yet know the extent to which commercial generative AI solutions could expose sensitive data if incorrectly configured. Microsoft Copilot, which is being integrated into Microsoft 365, is a perfect example. It’s a tool that will have access to all of the data your users have access to, so if your access controls are not correct and your data isn’t properly labelled, the AI won’t know that either, and may use sensitive information to produce results for a query. Similarly, policies will need to be put in place to check for bias and spot hallucinations, governing its use from an ethical perspective and ensuring accuracy.
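The underlying principle, that an AI assistant should only ever see what the requesting user is already entitled to see, can be sketched as follows. This is a hypothetical illustration of permission-aware retrieval with made-up data structures and labels, not a description of how Copilot itself works.

```python
# Hypothetical sketch of permission-aware retrieval: only documents the
# requesting user can already access, and that carry no restrictive label,
# are allowed to reach the AI assistant. Not Microsoft's implementation.
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    content: str
    allowed_users: set[str] = field(default_factory=set)
    sensitivity: str = "general"  # e.g. "general", "confidential"

CORPUS = [
    Document("Holiday policy", "Staff receive 25 days...", {"alice", "bob"}),
    Document("M&A plans", "Acquisition target is...", {"alice"}, "confidential"),
]

def retrieve_for_user(user: str, query: str) -> list[Document]:
    """Return only documents the user may see and that are not restricted."""
    return [
        doc for doc in CORPUS
        if user in doc.allowed_users
        and doc.sensitivity != "confidential"
        and query.lower() in doc.content.lower()  # naive relevance check
    ]

print(retrieve_for_user("alice", "days"))         # Holiday policy is returned
print(retrieve_for_user("alice", "acquisition"))  # [] - blocked by its label
print(retrieve_for_user("bob", "acquisition"))    # [] - blocked by access control
```

The point the sketch makes is that the filtering happens before generation: if access controls and sensitivity labels are wrong at this stage, nothing downstream can compensate.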

Another area of concern is the ability of threat actors to use AI to run deepfake and ultra-realistic phishing and social engineering campaigns at scale. Nothing in the survey results suggests organisations are cognisant of the need to up their game on employee awareness to mitigate such risks.

Just how much of an impact AI will have remains to be seen, but the current sense of bewilderment at senior levels needs to be addressed. Unless the sector learns how to control the technology and its adoption across the business, it could well run away from us and expose sensitive data at scale. A failure to get up to speed and deploy AI-enabled tooling will likewise see valuable time lost, because threat actors will not be slow to exploit its potential. Such scenarios make it imperative that the sector grasps the nettle today.

Brian Martin
Director of Product Management at Integrity360
