Is the AI hype bubble in cybersecurity deflating?

Published: February 3, 2020

By Richard Walters 

Artificial intelligence has been touted as the “next big thing in cyber” for some time, even though the concept is as old as the first email viruses. The clamour around the technology, which began in late 2015 and early 2016, was quickly amplified as the term became a fixture for analysts, sales teams and marketers.

AI adoption continues to accelerate: according to Capgemini’s Reinventing Cybersecurity with Artificial Intelligence report, 48 per cent of respondents said budgets for AI in cybersecurity would increase by an average of 29 per cent in 2020. However, it is worth noting that arguably only a few vendors have the R&D budgets to pour the tens or hundreds of billions of dollars required into building pure AI for cybersecurity.

When it comes to cybersecurity, what people are usually talking about is not AI but machine learning and its associated subfields: supervised, unsupervised, reinforcement and deep learning. The lines around the term AI have become blurred, and as a consequence what AI is, what it is not, and what it can be expected to deliver have all become open to interpretation.
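
To make the distinction concrete, below is a minimal sketch of the unsupervised flavour, using scikit-learn’s IsolationForest to flag outliers in login telemetry. The feature names and values are invented for illustration; this is a toy, not a product recipe.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: [hour_of_day, failed_attempts, bytes_out].
normal_logins = rng.normal(loc=[13.0, 0.5, 2000.0],
                           scale=[3.0, 0.7, 500.0],
                           size=(1000, 3))

# Unsupervised: the model sees no attack labels, only "normal" traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# An off-hours login with many failures and a large upload stands out.
print(model.predict([[3.0, 9.0, 50000.0]]))  # -1 = anomaly, 1 = normal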

As with any technology, AI has limitations. On the surface, in-house security teams are sold something that sounds like AI, in that it can seemingly make independent decisions, but in reality is a highly advanced rules engine. Despite exaggerated claims, no AI tool can predict a black swan event: a completely unknown attack.
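
For illustration, here is the kind of hand-written logic that sometimes gets marketed as AI. Every field name and threshold below is a made-up assumption; the point is that a human chose all of them in advance, so the engine can only ever catch what its authors anticipated.

# A toy rules engine. It appears to "decide", but nothing is learned:
# every threshold was hard-coded by a person.
def classify_event(event):
    if event.get("failed_logins", 0) > 5:
        return "block"   # brute-force heuristic
    if event.get("bytes_out", 0) > 1_000_000 and event.get("hour", 12) < 6:
        return "alert"   # off-hours exfiltration heuristic
    if event.get("country", "GB") not in {"GB", "US"}:
        return "review"  # crude geolocation heuristic
    return "allow"

print(classify_event({"failed_logins": 8, "hour": 3}))  # -> block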

This misunderstanding means there is a disconnect between expectations and reality. Once organisations get past the shiny marketing and slick sales process, what they are left with is often mediocre technology that comes with a significant management burden attached. This can only be a bad thing for the space as a whole. It injects a not insignificant amount of scepticism into the security community, which in turn can only serve to hold back the maturation of an otherwise promising technology. Quite often the only outcome is that end users learn the hard way that just because you can do something, and have the budget to do it, doesn’t mean you should.

No magic wand

Addressing this situation in advance is the only remedy. Organisations first need to audit their resources, attack surface and security objectives, then work backwards to see whether AI has a role to play. If it isn’t broken, don’t try and fix it. Once an organisation truly understands what is causing its cybersecurity headaches and what stands in the way of its security objectives, it becomes possible to establish which technologies will solve that problem. Thinking that anything with AI in the title will cure all security challenges isn’t realistic. What is needed is a precise understanding of which use of AI – or which subfield of AI – is materially in play, and for what purpose.

To this point, AI is best deployed in situations where a high volume of intelligent attacks is likely to leave human analysts chasing their tails, and it needs to be set up correctly to do that job. According to our research, 72 per cent of security professionals have admitted to considering leaving their role due to insufficient resources. If that is the case, the last thing security teams need is another piece of technology to manage; rather than taking away a headache, it creates one. This point should not be overlooked: any technological investment must be matched by a similar investment in the time, resources and manpower needed to set up and manage AI countermeasures effectively.

It’s not as simple as waving a magic wand and watching the problems disappear. Machines won’t just switch on smart and work perfectly from deployment; a certain amount of groundwork is required to ‘teach’ the algorithms the parameters of their job.
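
As a rough sketch of that groundwork, assume, hypothetically, that analysts have already labelled a body of historical events as benign or malicious; ‘teaching’ a supervised model then literally means fitting its parameters to those labelled examples. The data below is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic labelled history with two hypothetical features per event,
# e.g. [failed_logins, bytes_out_kb]; 0 = benign, 1 = malicious.
benign = rng.normal([1.0, 200.0], [0.5, 50.0], size=(500, 2))
malicious = rng.normal([6.0, 900.0], [1.0, 100.0], size=(50, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 50)

# The fitting step is the "teaching": without curated labels, there is
# nothing for the algorithm to learn.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[7.0, 950.0]]))  # -> [1], flagged as malicious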

For this reason, organisations need to build a robust and comprehensive roadmap for implementing AI in cybersecurity. During the development stage of machine learning models, security teams will need to constantly evaluate, fine-tune and optimise.

Need vs want

To do this, security professionals need to put themselves in the shoes of the attacker, constantly anticipating their next move and training their model appropriately. During this stage, it is important to measure bias, variance and error rates to help security teams stay ahead of suspicious behaviour.
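
Below is a minimal, self-contained sketch of that measurement step on synthetic data of the same shape as above: hold out a validation set, then track the error rates a security team actually feels (false positives that waste analyst time, misses that let attacks through), using the gap between training and validation accuracy as a crude overfitting signal.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([1.0, 200.0], [0.5, 50.0], size=(500, 2)),
               rng.normal([6.0, 900.0], [1.0, 100.0], size=(50, 2))])
y = np.array([0] * 500 + [1] * 50)

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_val, clf.predict(X_val)).ravel()

print(f"false positive rate: {fp / (fp + tn):.3f}")  # analyst noise
print(f"miss rate:           {fn / (fn + tp):.3f}")  # undetected attacks
# A large train/validation gap suggests overfitting (high variance);
# poor scores on both suggest the model is too simple (high bias).
print(f"train accuracy:      {clf.score(X_tr, y_tr):.3f}")
print(f"validation accuracy: {clf.score(X_val, y_val):.3f}")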

A company’s cybersecurity should not be underpinned by ML as a single layer of defence; rather, ML should form part of a multi-layered, comprehensive security stack that combines people, processes and technologies. It’s easy to get swept up in the marketing and sales pitch, but in-depth due diligence in advance guards against this.

Whilst we are not disputing the value AI has in security, it’s important for organisations to question whether it is something they actually need, or something they merely think they should have. The last thing a CISO needs is to be left with a shiny new piece of technology that has drained their budget and is doing absolutely nothing.


https://www.itproportal.com/features/is-the-ai-hype-bubble-in-cybersecurity-deflating/

