Cybersecurity Magazine Newsletter: August 2021


Dear Reader,


In this month's newsletter, we zoom in on a foundational technology that is changing cyber security as we know it. Artificial Intelligence (AI) and Machine Learning (ML) are already widely used to recommend products, videos, and ads to you online. In the context of security, AI/ML promises more reliable detection, faster response times, and the ability to "intelligently" automate security controls. We take a look at where the technology is today, what new use cases are currently being explored, and the security risks it carries.



AI's Role in Cyber Security

AI/ML has the potential to be immensely beneficial to cyber security. Already today, it is used in various contexts, for example, to detect anomalies or to categorize security-relevant events, essentially automating tasks that traditionally involved a great deal of manual work. But beyond that, because these systems can continuously analyze vast amounts of data, they will likely be able to identify correlations that would evade manual scrutiny.
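
To make the anomaly-detection idea concrete, here is a minimal sketch of the statistical principle behind it: flag observations that deviate sharply from a learned baseline. The function name, threshold, and sample data are hypothetical and purely illustrative; real products use far more sophisticated models.

```python
import statistics

def detect_anomalies(event_counts, threshold=2.5):
    """Flag hourly event counts that deviate strongly from the baseline.

    A count is anomalous if its z-score (distance from the mean,
    measured in standard deviations) exceeds the threshold.
    """
    mean = statistics.mean(event_counts)
    stdev = statistics.stdev(event_counts)
    if stdev == 0:
        return []
    return [
        (hour, count)
        for hour, count in enumerate(event_counts)
        if abs(count - mean) / stdev > threshold
    ]

# Hourly failed-login counts; hour 5 shows a sudden spike.
logins = [12, 15, 11, 14, 13, 250, 12, 16, 14, 13]
print(detect_anomalies(logins))  # → [(5, 250)]
```

The same principle, applied across many signals at once, is what lets ML-based systems surface correlations a human analyst would miss.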



Advanced Practical Applications

Over the coming years, it is fair to assume the spectrum of use cases will only grow. One more involved application, likely relevant to large-scale communication service providers, has recently been featured on Cybersecurity Magazine: the detection of DDoS attacks in IoT networks.



Novel Attack Vectors

AI/ML applications critically depend on the data sets they learn from. This data is fundamental to training the system in the application's context and enabling it to make "educated" decisions in future scenarios. Hence, it is also a critical attack vector that may be targeted by malicious actors.
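
A toy example can illustrate how such data poisoning works. The sketch below uses a deliberately simple, hypothetical nearest-centroid classifier over packet sizes; an attacker who can relabel a few training samples drags the "benign" centroid toward malicious traffic, and a sample that was correctly flagged before is now waved through.

```python
def classify(benign, malicious, size):
    """Nearest-centroid classifier: label a packet size by whichever
    class mean (centroid) it is closer to."""
    b_mean = sum(benign) / len(benign)
    m_mean = sum(malicious) / len(malicious)
    return "benign" if abs(size - b_mean) < abs(size - m_mean) else "malicious"

# Clean training data: small packets are benign, large ones malicious.
benign = [1, 2, 3, 2]
malicious = [10, 11, 12, 11]
print(classify(benign, malicious, 7))  # → malicious

# Poisoned data: an attacker relabels two malicious samples as benign,
# dragging the benign centroid upward.
poisoned_benign = benign + [10, 11]
poisoned_malicious = [12, 11]
print(classify(poisoned_benign, poisoned_malicious, 7))  # → benign
```

Real-world models are vastly more complex, but the failure mode is the same: whoever controls the training data can shift where the model draws its decision boundary.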

But there are threats beyond data poisoning as well. Researchers have demonstrated a proof of concept for embedding and delivering malicious software inside the AI model itself, which goes to show that a lot remains to be understood about the inner workings of AI/ML-enabled applications.



Responsible Innovation

Because the technology does carry certain risks, NIST has recently initiated efforts towards creating an AI risk management framework. It will not be a mandatory standard (mandates are challenging in the context of AI/ML); rather, the goal is to develop a voluntary document that can support "developers, users and evaluators improve the trustworthiness of AI systems."



Not Only Useful to White-Hats


Naturally, Artificial Intelligence is not reserved for responsible parties alone. It can be just as useful to cyber criminals for automating and advancing their operations, be it identifying system vulnerabilities or crafting convincing spear-phishing emails (yes, those are the ones tailored to specific victims). As usual in cyber security, it is an arms race between attackers and defenders.



Cybersecurity Magazine Editorial Team



For our latest video discussions and podcasts, please see the River Publishers YouTube channel.

The latest journal articles from River Publishers in all areas of cyber security can be found on the River Publishers website.