Last time we prominently featured AI/ML in our newsletter was in this year’s February edition. Six months on, the hype has not died down noticeably. In fact, there is reason to believe that it has been blown so far out of proportion that, in some instances, it may be causing more confusion than anything else. Therefore, in this newsletter, we take a fresh look at Artificial Intelligence and its relation to security.
The hype cycle is out of control
Part of the confusion about AI stems from imprecise terminology. With everything and your fridge coming equipped with AI these days, it can be hard to distinguish genuinely “smart” solutions from elaborate if-else trees. The Harvard Business Review recently featured a short read that brings us up to speed.
Not the right tool for every job
With the rise of generative AI, such as ChatGPT, many have written about the broad range of use cases that could now be magically automated, also in the field of security. Turns out, if you use the wrong tool for a job, the results may not be stellar, and writing malware is not one of generative AI’s strong suits, according to security researchers.
Biased from the start
Not only are people trying to use AI for offensive security purposes; defensive security controls are also adopting it. Again, it is very likely not the silver bullet some may be expecting. As recently featured on Cybersecurity Magazine, a biased or otherwise flawed data set will hamper the accuracy of AI-enabled tools, highlighting the fact that they should not be trusted blindly.
Not thoroughly understood
To better understand the vulnerabilities of AI, specifically LLMs, and how they can be abused, this year’s DEF CON featured what was called the “AI Village”. The setup offered eight different models that hackers could experiment with. Unfortunately, the article doesn’t go into much detail as to what kind of flaws were found, but the hope is that the necessary improvements make it into future iterations.
Beware of AI-generated content
In the meantime, generative AI is already the cause of real-world safety risks in a form that probably no AI sceptic had predicted: cookbooks. As you may know, book retailers have been flooded in recent months with content from generative AIs. Unfortunately, this also extends to mushroom identification and cooking guides, which one would hope could correctly identify (and warn readers away from) the poisonous sort.
Cybersecurity Magazine Editorial Team
For our latest video discussion on security and production systems, please see the River Publishers YouTube channel.
The latest journal articles from River Publishers in all areas of cyber security can be found on the River Publishers website.