In 2021, the uptime and performance of cloud security solutions have come under increased scrutiny. During that time, some Secure Access Service Edge (SASE) vendors – and, by extension, their customers – have been affected by significant downtime. One service recently saw more than 12 hours of downtime, while another suffered half a dozen outages in just over two weeks.
These kinds of events have potentially serious implications. Not only do they directly expose organisations to increased cybersecurity risks, but they can disrupt the normal flow of operations and even bring business continuity to a halt. For many organisations, that level of exposure is incompatible with their needs and underlines the critical nature of cloud security that delivers on the core principle of continually protecting infrastructure, services, and data.
Typically, these service outages find their origins in the underlying infrastructure upon which a vendor’s products are built. Most SASE providers, for instance, have created and subsequently maintain their own networks of private data centres in order to deliver their solutions. The challenge here is that this approach essentially amounts to an attempt to match the level of service provided by public cloud companies that have dedicated entire businesses to it.
There is also a broad range of cloud security services on the market, with varying levels of functionality. Some operate inline for real-time security, while others operate out-of-band for visibility and control, but in each case the most important buying criterion is the level of service uptime and performance they can deliver.
In addition, some cloud security services are sold as network services with fixed capacity, priced as an annual fee based on data usage. Such pricing suits network security services such as firewalls or secure web gateway proxies, while other cloud security services, such as email security, Data Loss Prevention (DLP) or Cloud Access Security Broker (CASB), are priced as an annual fee per user. However, when there is a mismatch between the technology stack and the business model, uptime and performance are compromised.
Legacy Security Architecture
Legacy security products designed for single-tenant usage operate at fixed throughput loads, such as a 1 Gbps firewall or secure web gateway proxy. When these products are offered as cloud services, vendors simply deploy the legacy devices in a data centre and charge customers on the basis of throughput. Pricing and architecture are aligned, but if a customer overloads the network, congestion is likely to occur. In this situation, the customer can purchase additional capacity to meet their needs, while other customers remain unaffected.
However, when legacy architecture is used for services such as email security, DLP or CASB, uptime and performance can suffer. These services are licensed on a per-user basis, and the customer is paying for performance and uptime levels independent of the time of day, user mobility or usage trends. For example, a customer with 10,000 users expects the same performance and uptime even if half of those users gather for a remote offsite meeting. The problem is that, in practice, this kind of scenario can overload a remote data centre that has fixed capacity, bringing it down for all of that customer's users and possibly for all other customers as well.
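The arithmetic behind this failure mode can be made concrete with a small sketch. All figures here are illustrative assumptions, not vendor data: a regional data centre sized for a "normal" share of a 10,000-user customer, then hit by half the workforce at once.

```python
# Illustrative sketch of a fixed-capacity data centre under per-user licensing.
# All numbers (user count, regional share, headroom) are assumptions for the
# example, not figures from any real deployment.

TOTAL_USERS = 10_000
NORMAL_SHARE = 0.10   # assume ~10% of users normally hit any one regional site
HEADROOM = 1.5        # assume the site is provisioned with 50% headroom

# Fixed capacity: 10,000 * 0.10 * 1.5 = 1,500 concurrent users.
SITE_CAPACITY = int(TOTAL_USERS * NORMAL_SHARE * HEADROOM)

def site_is_overloaded(active_users: int, capacity: int = SITE_CAPACITY) -> bool:
    """A legacy site has no way to grow: demand above capacity means outage."""
    return active_users > capacity

# Normal day: the site handles its usual share comfortably.
normal_day = site_is_overloaded(int(TOTAL_USERS * NORMAL_SHARE))   # False

# Offsite meeting: half the users converge on one location at once.
offsite_day = site_is_overloaded(TOTAL_USERS // 2)                 # True
```

The point of the sketch is that the licence scales with users while the infrastructure does not: 5,000 users against a fixed 1,500-user ceiling fails regardless of what the customer paid for.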
Focusing On a Modern System Architecture
Ideally, security services that are licensed on a per-user basis will benefit from access to a wide range of technology components such as proxies, scanning nodes, database clusters, mail servers and search indexes, among others. Crucially, these services must scan multiple applications and protocols simultaneously to provide effective, agile protection.
In a modern system architecture, each component is stateless, multi-tenant and able to handle any type of application. When the load on a component rises and, for example, exceeds 50% averaged over a five-minute interval, the component clones itself. In the earlier example, where the user organisation holds a large offsite meeting, the remote data centre responds to the increased demand and automatically grows towards the load profile required at the time.
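The clone-on-load rule described above can be sketched in a few lines. This is a minimal illustration of the general pattern, not Bitglass's implementation: the class and threshold names are invented, and the 50% / five-minute figures are taken from the example in the text.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the scale-out rule described above: each stateless
# component tracks its load over a rolling window and clones itself when the
# windowed average exceeds a threshold. Names are illustrative, not a real API.

WINDOW_SECONDS = 5 * 60     # five-minute observation interval (from the text)
SCALE_THRESHOLD = 0.50      # clone when average load exceeds 50% (from the text)

@dataclass
class Component:
    """A stateless, multi-tenant service component (proxy, scanner, ...)."""
    name: str
    samples: list = field(default_factory=list)  # (timestamp, load) pairs

    def record(self, timestamp: float, load: float) -> None:
        """Record a load sample and drop samples outside the rolling window."""
        self.samples.append((timestamp, load))
        cutoff = timestamp - WINDOW_SECONDS
        self.samples = [(t, l) for t, l in self.samples if t >= cutoff]

    def average_load(self) -> float:
        if not self.samples:
            return 0.0
        return sum(l for _, l in self.samples) / len(self.samples)

    def should_clone(self) -> bool:
        return self.average_load() > SCALE_THRESHOLD

    def clone(self) -> "Component":
        # Because the component is stateless, a clone needs no state transfer;
        # a fresh instance can immediately share the load.
        return Component(name=f"{self.name}-clone")

# Usage: a proxy component sees load rise during a large offsite meeting.
proxy = Component("proxy-1")
for t, load in [(0, 0.30), (60, 0.55), (120, 0.70)]:
    proxy.record(t, load)

fleet = [proxy]
if proxy.should_clone():          # average ≈ 0.52 > 0.50, so it clones
    fleet.append(proxy.clone())
```

Statelessness is what makes the rule cheap to act on: because no session state lives in the component, scaling out is just starting another instance, and scaling back in is just stopping one.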
Cloud security services such as email security, DLP and CASB also require a broad range of components that will operate globally, at scale and across hundreds of applications. Security services built on legacy security architectures are designed for fixed capacity loads at single tenants and are unable to scale with application usage. Such services suffer long delays in out-of-band mode, and impact business continuity in real-time inline operation.
But by delivering cloud security services through the public cloud, security service providers can focus on driving innovation across their security technologies rather than managing a fleet of data centres. It also provides infrastructure with the uptime needed for a modern architecture to adapt in real time to changes in customers' load profiles, ensuring scalability and performance around the clock, anywhere in the world. Ultimately, the cloud already has virtually infinite redundancy, storage and compute power, and as a result true cloud security should be delivered from the cloud itself.
Anurag is the CTO of Bitglass, where he sets technology direction and architecture. Anurag was a director of engineering in Juniper Networks' Security Business Unit before co-founding Bitglass. Anurag received a global education, earning an M.S. in computer science from Colorado State University and a B.S. in computer science from the Motilal Nehru National Institute of Technology.