Don’t forget to Protect Your Data First

Organizations of all types and sizes are currently racing to adopt new Artificial Intelligence (AI) models, to take advantage of enhanced efficiency, speed and innovation. Unfortunately, many organizations are failing to take basic steps to protect their sensitive data before unleashing new AI technology that will potentially leave it exposed.

Implementing new AI models can raise significant concerns around how data is collected, used and protected. These issues arise across the entire AI lifecycle, from the data used for initial training to ongoing interactions with the model. Up until now, the choices for most companies are between two bad options: either ignore AI and face the consequences in an increasingly competitive marketplace; or implement a Large Language Model (LLM) that could potentially expose data. Both options can result in an enormous amount of damage.

The question is not whether to implement AI, but rather how to derive optimal value from AI without placing sensitive data at risk. New approaches with Privacy-Enhancing Technology (PET) enable continuous encryption of data (even while in use) at a fraction of the cost, without specialized hardware or massive compute resources. This provides a viable method for protecting data while simultaneously offering the ability to get more value out of it through new AI models.

According to The State of AI in 2025 Global Survey from McKinsey, we are already seeing a significant acceleration in the use of AI, with more than 78% of organizations now regularly using generative and analytical AI in at least one business function. Users report cost and efficiency benefits, with 64% saying that it is enabling innovation, especially in IT functions. However, nearly two thirds of respondents say that their organizations have not yet begun fully scaling AI into more business functions.

Much of this reluctance stems from concerns about how data is collected, used and protected, particularly around intellectual property protection. In a recent Metomic survey, 90% of respondents expressed confidence in their organizations’ security measures, but 68% reported data leakage incidents, specifically related to employees sharing sensitive information with AI tools.

Up until now, the choices for most companies are between two bad options: either ignore AI and face the consequences in an increasingly competitive marketplace; or implement a Large Language Model (LLM) that could potentially expose sensitive data. Both options can result in an enormous amount of damage.

The question for CISOs is not whether to implement AI, but rather how to derive optimal value from AI without placing sensitive data at risk.

The Cat Is Out of the Bag

Many public AI models are trained on vast amounts of data, which is often scraped from the internet without the explicit consent or knowledge of the individuals involved. Some of these datasets contain highly sensitive information, such as financial records, health information, and biometric data. As an example, Clearview AI, a facial recognition company based in the US, was fined $33 million by a data protection agency in the Netherlands for scraping billions of images from the web for its facial recognition database.

In many cases, data collected for one specific purpose may be used later to train AI models without permission. Once an AI model is deployed, it can accidentally reveal sensitive information it was trained on, which significantly increases the risk of this data being exposed or misused. In 2024, for instance, Slack faced backlash when it updated its policy to allow user data to be used for AI training by default.

What’s more, generative AI tools may inadvertently share proprietary business data or personal information with third parties or across platforms. Malicious actors can use “prompt injection” attacks to trick an AI system into revealing sensitive data. In one case, a Samsung employee accidentally uploaded proprietary source code into ChatGPT, causing the company to ban its use.

Advancements in Privacy Enhancing Technology

The idea of continuous (homomorphic) encryption has long been dismissed as too expensive and resource-intensive for mass deployment. That’s because legacy homomorphic encryption came at massive computational cost. While the concept may have been elegant in theory, it was a nightmare in reality: slow performance, difficult to implement, and so expensive to deploy that it was out of reach for almost all.

Recently, new approaches have made continuous encryption possible at a fraction of the cost, without specialized hardware or massive compute resources. New PET software represents a shift in mindset, from bigger and more expensive, to smarter and more accessible. It is built to run on standard hardware, efficiently and effectively, solving problems based on real-world business use cases. PET has freed itself of the enormous computational load and the associated latency of legacy homomorphic encryption. The result is greater data security that ensures data remains protected even in the event of a network breach of unauthorized credential use.

Advancements in Privacy-Enhancing Technology (PET) will have an enormous impact on the fields of cybersecurity and data privacy because:

  • It provides a better method of restricted access because it eliminates concerns about insider threats by continuously encrypting data, even while it’s in use.
  • It offers the ability to get more use and value out of existing data by enabling secure search on sensitive datasets, allowing organizations to perform analytics on encrypted data without the traditional trade-offs. Encrypted data becomes just as functional as plain text data.
  • It addresses the three biggest barriers to encryption: 1) Cost, 2) Complexity, and 3) Performance Impact

Finally, organizations that implement PET now will have a distinct advantage in implementing AI models, because their data will be structured and secured correctly and their AI training will be more efficient right from the start, rather than having to continually incur the expense, and risk of retraining their models.

Tony Mitchell
Director of Government Business Development at  |  + posts

Tony Mitchell has nearly three decades of success as a Chief Evangelist and Business Development Executive for a host of technology innovators. As Director of Government Business Development for Donoma Software, he is focused on driving growth and traction for an organization that specializes in Privacy Enhancing Technology, advanced encryption and secure data governance, and on helping organizations intelligently secure, preserve and protect their digital assets and intellectual property, while also making it easy and secure to quickly locate, share and manage the data. Prior to Donoma, Tony spearheaded strategic sales for Perceptyx, Ninth House and SalesKit Software, and served on the Innovation Task Force for the State of Missouri.

Leave a Reply

Your email address will not be published. Required fields are marked *