AI: Helping or Hindering Shadow IT?

To block, or not to block: this is the question many cybersecurity teams are grappling with as they address the escalating problem of Shadow IT. Already an ongoing concern, with three-quarters of employees admitting to using unauthorised tools, the issue is only growing as thousands of new AI-enabled applications become readily available at very low cost, if not free. Whenever an employee uses one of these apps, or any other software, device, or service, without the approval or knowledge of the organisation's IT department, the problem can very easily get out of hand.

According to recent research, one in five companies believes employees have exposed sensitive corporate data to GenAI applications without authorisation. In all likelihood it is happening more frequently than reported, as well-meaning employees try to get their jobs done more efficiently without fully understanding the risks. Arguably, these inadvertent security breaches are now more dangerous than malicious insider threats, and perhaps always were. This may be why new research has found that half of respondents' organisations have restricted AI use to certain job roles, while 16% have banned the technology completely.

Despite ChatGPT being unveiled only 18 months ago, many organisations already cite multiple areas of concern, including:

Intellectual Property (IP). Because GenAI models are trained on massive datasets, and user inputs may feed future training, there is no guarantee of complete data isolation. Feeding in product ideas, development code, or marketing strategies could see those details leak into outputs for other users, potentially giving away trade secrets. In effect, confidential information could surface in results generated for a competitor using the same tool for market research.

Corporate Financial Information. Using GenAI to help compile and create financial reports risks exposing key data such as budgets and pricing strategies, or even financial weaknesses if the tool is asked to identify problem areas. At best, a leak might disrupt operations and hand useful intelligence to competitors; at worst, hackers could exploit the information for financial gain.

Customer and Personal Data. Feeding customer or patient information into GenAI without anonymisation could result in serious privacy violations and non-compliance with data protection regulations such as GDPR; the sketch after this list illustrates the kind of redaction step needed before any such data leaves the organisation.
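
As an illustration of that anonymisation step, here is a minimal Python sketch of pre-prompt redaction. The patterns and the `redact_pii` helper are hypothetical simplifications: production-grade anonymisation would need far broader coverage of names, addresses, and identifiers, ideally via a dedicated PII-detection library.

```python
import re

# Hypothetical, deliberately simplified patterns; real anonymisation
# needs far broader coverage (names, addresses, account numbers).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace recognised PII with placeholder tags before the text
    is sent to any external GenAI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise this complaint from jane.doe@example.com, tel +44 7700 900123."
print(redact_pii(prompt))
# -> Summarise this complaint from [EMAIL], tel [PHONE].
```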

The evolving capabilities of GenAI raise concerns about data permanence. As these tools develop rapidly, the data fed into them today could have lasting consequences. ChatGPT’s emerging memory features, for instance, illustrate the growing volume of user data stored within these services. Businesses that neglect privacy controls or resort to shortcuts with customer or corporate data risk long-term and damaging ramifications.

Freedom versus control

The quandary is how to oversee employees and limit unsanctioned activities while still giving them the freedom to access the best tools to do their jobs effectively. It requires a careful balance: safeguarding data without the safeguards themselves creating business friction.

While traditional insider risk management tools can help, they often lack comprehensive real-time analysis, detection, and response; damage is done before preventative measures can be taken. However, responsibly developed AI-driven technology can be an asset, not just a danger. AI is enabling sophisticated behavioural and device analytics: by correlating a range of factors about the user, device, and context to assess whether an activity is suspicious, AI-powered threat detection platforms can pinpoint potential unauthorised data exfiltration in very little time. For example, if the system detects an employee attempting to move sensitive files to an unauthorised external drive or application, it can automatically block the transfer and immediately alert the security team.
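
To make the correlation idea concrete, here is a minimal Python sketch of how such a platform might combine user, device, and context signals into a single risk score and act on it. The signal names, weights, and threshold are hypothetical illustrations rather than any real product's logic; in practice they would be learned from behavioural data, not hard-coded.

```python
from dataclasses import dataclass

@dataclass
class TransferEvent:
    """One attempted file transfer, with user, device, and context signals."""
    user: str
    destination_managed: bool    # is the target drive/app sanctioned?
    file_sensitivity: float      # 0.0 (public) .. 1.0 (highly confidential)
    outside_working_hours: bool
    volume_mb: float

def risk_score(e: TransferEvent) -> float:
    """Correlate the signals into one score. Weights are hypothetical;
    a real platform would learn them from behavioural data."""
    score = 0.0
    if not e.destination_managed:
        score += 0.4
    score += 0.3 * e.file_sensitivity
    if e.outside_working_hours:
        score += 0.15
    score += 0.15 * min(e.volume_mb / 1000.0, 1.0)
    return score

BLOCK_THRESHOLD = 0.6  # assumed policy cut-off

def handle(e: TransferEvent) -> None:
    """Block high-risk transfers and alert; wave routine activity through."""
    if risk_score(e) >= BLOCK_THRESHOLD:
        print(f"BLOCKED transfer by {e.user}; security team alerted")
    else:
        print(f"allowed transfer by {e.user}")

# Sensitive files to an unmanaged external drive, out of hours: blocked.
handle(TransferEvent("alice", False, 0.9, True, 500.0))
# Routine transfer to a sanctioned app during the working day: allowed.
handle(TransferEvent("bob", True, 0.2, False, 20.0))
```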

These AI-driven models can detect behaviours that deviate from an employee's typical routine, such as unusually large data transfers or attempts to access confidential information outside normal working hours. But unlike rule-based systems, AI-powered analytics can dynamically learn and adapt to new patterns of acceptable behaviour, whether prompted by a role change, an employee moving home, or new processes within the business.
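
A minimal sketch of that adaptive-baseline idea, assuming a per-user rolling window of daily transfer volumes: observations accepted as normal are absorbed into the baseline, so "typical" drifts with the employee's genuine behaviour instead of staying frozen in a static rule. The 30-day window and three-standard-deviation cut-off are illustrative assumptions, not recommendations.

```python
from collections import deque
from statistics import mean, stdev

class RollingBaseline:
    """Per-user baseline of daily transfer volume (MB) that adapts as
    new, accepted behaviour arrives, unlike a static rule."""

    def __init__(self, window: int = 30):      # assumed 30-day window
        self.history = deque(maxlen=window)

    def is_anomalous(self, volume_mb: float) -> bool:
        if len(self.history) < 5:              # too little data to judge
            self.history.append(volume_mb)
            return False
        mu, sigma = mean(self.history), stdev(self.history)
        # Flag values more than 3 standard deviations above the baseline.
        if sigma > 0 and volume_mb > mu + 3 * sigma:
            return True                        # anomaly: do not absorb it
        self.history.append(volume_mb)         # normal: the baseline adapts
        return False

baseline = RollingBaseline()
for day_volume in [40, 55, 48, 60, 52, 45, 58]:
    baseline.is_anomalous(day_volume)          # builds the user's baseline
print(baseline.is_anomalous(900))              # -> True  (sudden spike)
print(baseline.is_anomalous(65))               # -> False (within normal drift)
```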

Eliminating bias from monitoring

Importantly, incorporating AI within behavioural analytics can help mitigate bias. Legacy security systems often rely on predefined rules or manual monitoring, which can be influenced by subjective decisions, leading to false accusations or missed threats. By contrast, AI can identify subtle indicators of potential insider threats without prejudging individuals by job type, location, or previous incidents. This objective analysis recognises genuine threats based on what is actually happening rather than on assumptions, so all employees are treated fairly and monitored impartially, and the privacy and trust of those using systems for legitimate purposes is preserved.

As the capabilities and accessibility of GenAI tools continue to expand, the risks associated with exporting sensitive data are unlikely to diminish. This escalating threat makes understanding employee activity not just a luxury but an essential component of any robust cybersecurity strategy. However, it is crucial that such monitoring is implemented in a way that is both effective and fair, avoiding the pitfalls of bias and ensuring that employees can perform their duties without undue hindrance.

Last, but not least, AI-driven monitoring solutions can be designed to respect the boundaries of acceptable personal use. By clearly defining and communicating use policies, and by employing AI that can differentiate between innocuous and potentially harmful activities, organisations can maintain security without creating a climate of surveillance that stalls productivity or drains morale.

As the dangers of Shadow IT continue to grow, integrating unbiased, non-intrusive monitoring is essential. This will help to ensure the protection of sensitive data while maintaining a secure, efficient and supportive work environment for everyone.

Chris Denbigh-White, CSO at Next DLP
