Cybersecurity Magazine: This actually brings me to some of the questions regarding your areas of work in the future, one example being “quantum-safe cryptography”. Do you think that is something that will be done by manufacturers in the future, given that virtually all systems on the market which feature cryptography will have a very difficult time changing the cryptographic standards in their products?
Alex Leadbeater: Well, since you covered most of this in the interview with Mark, I’d like to concentrate on critical national infrastructure devices. For those devices, quantum-safe cryptography is something people have to think about because it is difficult to retrofit. I spend quite a lot of time in the security group of 3GPP (SA3), and both 3GPP and ETSI’s TC CYBER will tell you that you should never have just one cryptographic algorithm, and you should always have the ability to change algorithms out. For now, crypto agility is key, rather than having quantum-safe algorithms.
There are some very challenging aspects to this. For example, how do you do things like long-term key updates? How do you evolve algorithms? How do you remove some of them from the market? Trying to get rid of algorithms is ridiculously difficult, because there is always one device left. And again, there is the question of IoT: if you have a smart home with a device running a legacy algorithm today, in five years’ time you might not be able to simply replace that piece of infrastructure.
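The crypto agility Leadbeater describes is often realized by tagging every message with an algorithm identifier and resolving it through a registry, so that adding a new algorithm or deprecating an old one does not change the wire format. The sketch below is a minimal, illustrative Python version of that pattern; the identifiers and registry names are assumptions for illustration, not from any standard.

```python
# Minimal sketch of crypto agility: each message names its algorithm by
# an identifier, and a registry maps identifiers to implementations.
# Swapping or retiring an algorithm is a registry change, not a
# protocol change. Identifiers here are purely illustrative.
import hashlib

ALGORITHMS = {
    0x01: "sha256",
    0x02: "sha3_256",  # newer algorithm added without breaking old peers
}

def digest(alg_id: int, payload: bytes) -> bytes:
    """Compute a digest using whichever algorithm the message names."""
    name = ALGORITHMS.get(alg_id)
    if name is None:
        # Unknown/retired algorithm: reject rather than guess.
        raise ValueError(f"unsupported algorithm id {alg_id:#x}")
    return hashlib.new(name, payload).digest()

msg = b"sensor reading"
d_old = digest(0x01, msg)  # legacy peers keep working
d_new = digest(0x02, msg)  # upgraded peers negotiate the newer hash
assert d_old != d_new
```

Retiring an algorithm then means deleting its registry entry, after which old messages fail loudly, which is exactly the "getting rid of algorithms" problem: some deployed device will still be sending that identifier.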
How do you manage an environment where some of the devices are more secure than others? Some of the expensive, big devices are clearly going to be migratable. If your device is a $2 sensor buried in a roadway with a 10-year battery, the cost of designing it to be updatable may be prohibitive. And in some of the industrial IoT scenarios (e.g. nuclear), where technically you could design the product to be updatable, you have actually got no practical means of doing so.
Changing that is something we strive to do in the standards, but it is actually quite difficult in practice.
Cybersecurity Magazine: One of the areas mentioned is virtualization and cloud. How do you see that particular area, virtualization specifically and also cloud, developing in the future?
Alex Leadbeater: There is a difference between virtualization, cloud native, and what is currently being offered. Yes, the networks that have been deployed are using commercial off-the-shelf (COTS) servers, and they are using cloud technology and cloud management, but at the end of the day they are still racks of legacy-style COTS equipment.
There is relatively little mobility between the servers, and you are not standing up and tearing down VNFs or containers multiple times a minute. While from the outside it looks like you are using the power of cloud technologies, you haven’t got all of the mobility: the ability to spin up small microservices, quickly tear them down, and move them from London to Birmingham in fractions of a second. All of that is currently missing, because the technology (e.g. AI algorithms) required to make it work is still a number of years away, and the compute power required to do it doesn’t exist either.
You can therefore currently mitigate most of the security risks, for example by putting hardware firewalls around certain points, restricting the way in which certain servers are used, or adding some extra hardware in some of those servers to provide trusted compute services. You can dance around the underlying virtualization security problems for now. But once you go to full cloud-native NFV-type environments, where you are spinning up microservices and you have got full virtualization rather than virtualization lite, there are some security challenges that need to be addressed, and for a lot of those challenges the industry doesn’t currently seem to be making rapid progress.
A good example is hardware enclaves. CPU vendors started releasing enclave-capable CPUs (e.g. Intel SGX) something like four or five years ago, so it’s not a brand-new technology. It has some issues, but it’s a lot better than nothing, and deployment restrictions can largely mitigate those issues. However, these enclave technologies are in general not currently native to the Linux kernel, which means you have to do a lot of work tweaking the OS or the NFV virtualization platforms. The trouble is that the next time you get a Linux software patch or similar upgrade, it may break your implementation, and then you have to start from scratch again. There are some fundamental building blocks that have not been built in by default. That is in part because they are difficult to use, but there are still some fundamental gaps in the standards space. I think it is fair to say that the security standards are not quite where they need to be to fully leverage the advantages of NFV and cloud native.
Cybersecurity Magazine: A more philosophical question to finish this interview off: how do you expect cybersecurity to evolve in the future?
Alex Leadbeater: That really is the million-dollar question. We have touched on a couple of these things. One is that we are going to have to get used to a world of security by design. The days of retrofitting cybersecurity by putting firewalls around the edges of networks to defend things are coming to an end. The complexity of network function service logic is changing, which means that security has to be built into the core of products and services, not bolted around the edges. The other thing that I think is changing is that cybersecurity skills are becoming something that everybody working on cloud technologies, and pretty much anything else, is going to need in their skill set. There will still be cybersecurity experts, but the general public is going to need some degree of cybersecurity knowledge, at least at a basic level. Which is quite a shift.
The other thing that is going to make a change is AI. ETSI has the only standards group currently looking specifically at underlying AI security. With AI, some of the traditional approaches we have used to secure things, securing around the edges, are going to have to change.
The threats that exist around AI products are not terribly well understood. If you attack or poison an AI algorithm, what is the effect on the rather larger set of products or services built on it? If you poison or compromise an individual user, their ability to damage things is relatively contained. If somebody hacks an AI algorithm which has access to everything, the result is potentially far more catastrophic.
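The poisoning risk mentioned above can be illustrated with a deliberately tiny example: a nearest-centroid classifier whose training set an attacker can inject a few mislabeled points into. This toy is my own illustration, not anything from the interview; the data and labels are invented.

```python
# Toy data-poisoning demo: a handful of mislabeled training points
# injected into one class shifts that class's centroid far enough to
# flip the classification of a borderline input.

def centroid(points):
    """Coordinate-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, class_a, class_b):
    """Label x by whichever class centroid is closer (squared Euclidean)."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return "A" if dist2(x, centroid(class_a)) <= dist2(x, centroid(class_b)) else "B"

clean_a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
clean_b = [(4.0, 4.0), (5.0, 4.0), (4.0, 5.0)]
query = (2.0, 2.0)  # borderline input, slightly closer to class A

print(classify(query, clean_a, clean_b))  # "A" on clean training data

# Attacker injects two mislabeled outliers into class A's training set,
# dragging its centroid away from the query point.
poisoned_a = clean_a + [(20.0, 20.0), (20.0, 20.0)]
print(classify(query, poisoned_a, clean_b))  # flips to "B"
```

The point scales badly in the direction Leadbeater describes: here two bad points flip one prediction, but a poisoned model that sits behind many products flips behavior for every downstream consumer at once.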
The question is how this plays into cybersecurity, both using AI as a defense against cybersecurity attacks and where AI is the attack vector. That’s probably the biggest change we will see over the coming years, as the effect of AI automation comes in. The way in which we defend networks and some products and services will have to adapt to an AI-centric model, which we currently don’t have. We do have little bits of AI here and there, and obviously we have examples of AI losing the plot, in other words going off and doing something unexpected. We are going to get more of those unfortunate chatbot headlines.
The final answer for you: we are going to get more connected devices in the home, we’re going to get larger numbers of IoT devices, and all the devices in your home and elsewhere may well end up with a degree of connectivity they didn’t have before. The threat vectors and attacks that are likely to occur will probably be things we can’t think of today, so there are going to be some new and exciting “we didn’t see that coming” moments. When attackers find three- or four-device IoT hop chains to exploit, we will end up thinking: how do we standardize against that? How do we design against that? How can we use gateway devices to protect the slightly more “stupidly” designed devices? Those are the aspects that I think will shape the cybersecurity future.