How AI can Build Cybersecurity Resilience
Anoop Das, Cybersecurity Expert at Mimecast, says AI is a powerful broom for cleaning up cluttered security environments.
The stunning growth of the global cybercrime industry is putting strain on organisations and their security teams. Mimecast’s latest State of Email Security Report 2021 found that 45% of organisations in the UAE reported an increase in the sophistication of incoming cyberattacks, while 41% cited a growing volume of attacks.
Most organisations are also dealing with the added challenge of securing a hybrid workforce, as many staff members continue to work remotely part of the time. The increasing reliance on email and other business productivity tools is creating new risks: in fact, three-quarters (75%) of organisations expect an email-borne attack will damage their business this year.
Security teams in turn look to deploy new tools and solutions to protect vulnerable users and systems, and this is warranted. The increase in brand impersonation attacks, for example, has made solutions such as DMARC and brand exploit protection tools invaluable in efforts to protect customers from compromise.
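To make the DMARC example concrete: a DMARC policy is published as a DNS TXT record at the `_dmarc` subdomain of the sending domain. A typical record (the domain and reporting address here are placeholders) might look like this:

```dns
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

Here `p=quarantine` tells receiving mail servers to treat messages that fail SPF/DKIM alignment as suspicious, while `rua` specifies where aggregate reports about impersonation attempts should be sent.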
An (over)abundance of security tools
However, the growing number of security tools is also leading to cluttered security environments that can be hard to manage. One study puts the average number of security tools at any given enterprise at 45 – others believe it’s closer to 75.
What’s interesting is that having more security tools does not necessarily equate to a better security posture. An IBM study found that enterprises using 50 or more security tools ranked themselves 8% lower in their threat detection capabilities and 7% lower in their defence capabilities than their less cluttered peers.
The ongoing cybersecurity skills shortage presents a major challenge when it comes to managing security environments. Some analysts estimate a global shortage of three million cybersecurity professionals, at a time when cyber threats have drastically increased in both volume and sophistication. Without the right personnel to manage the technology and ensure that everything is always properly enabled, having dozens of security solutions can cause more harm than good, which is why more security tools don't necessarily translate into better protection.
This may explain the growing adoption of artificial intelligence within security teams. The AI market for cybersecurity is expected to grow from $8.8 billion in 2019 to more than $38 billion by 2026, as the adoption of IoT and connected devices and a growing volume of cyberattacks put pressure on internal teams.
Decluttering security environments with AI
For most security professionals, security intelligence is still very much carbon-based, not silicon-based. In other words, it's people, not technology, that generate the highest-quality, actionable security intelligence.
However, the volume of threats, the growing number of security tools, the broad range of threat vectors and the impact of the pandemic – specifically the sudden rise in remote work – have put immense pressure on security teams.
The use of AI makes sense, especially where the organisation’s risk profile, security solutions or skills require augmentation.
What do organisations need to bear in mind when determining what role AI could play in supporting security teams?
For one, AI is of little use when it is not integrated with the organisation's broader security ecosystem. Security teams should be able to feed the findings of the AI tool into their other security tools to provide a unified and automated view of current and emerging threats.
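As a sketch of what that integration can look like, the snippet below maps a hypothetical AI detection onto a generic alert format that a SIEM or other downstream tool could ingest. The field names and schema are illustrative assumptions, not any real product's API:

```python
# Hypothetical sketch: normalise an AI tool's finding into a common alert
# format so other security tools (e.g. a SIEM) can consume it.
# Field names are illustrative, not a real product's schema.
import json
from datetime import datetime, timezone

def to_siem_event(finding: dict) -> str:
    """Map a raw AI detection onto a unified alert schema."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "ai-email-detector",
        "severity": finding.get("risk_score", 0),
        "category": finding["type"],
        "details": finding,
    }
    return json.dumps(event)

raw = {"type": "impersonation", "risk_score": 87, "sender": "ceo@examp1e.com"}
print(to_siem_event(raw))
```

The point of the sketch is the translation step: once every tool's output lands in one shared schema, correlation and automation across the security stack become possible.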
An AI tool also becomes more useful when it can take on some of the complexities of human behaviour. For example, machine learning is often effective at detecting highly targeted attacks that are difficult for traditional rule-based systems to spot. The sheer volume of data most organisations have to manage also makes it near-impossible for security teams to remain effective without the assistance of algorithms.
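To illustrate the kind of anomaly detection involved, here is a minimal sketch using scikit-learn's IsolationForest on synthetic email metadata. The features (send hour, link count, attachment count) and all numbers are invented for illustration and bear no relation to any vendor's actual model:

```python
# Minimal sketch: flag anomalous emails from simple metadata features.
# Features and values are illustrative, not any vendor's actual model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: [hour_sent, num_links, num_attachments]
normal = np.column_stack([
    rng.normal(11, 2, 500),   # sent during working hours
    rng.poisson(2, 500),      # a couple of links
    rng.poisson(1, 500),      # occasional attachment
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious message: sent at 3am with many links and attachments.
suspect = np.array([[3, 25, 8]])
print(model.predict(suspect))  # -1 flags an anomaly, 1 flags normal
```

A rule-based filter would need someone to write "3am plus 25 links is suspicious" explicitly; the model instead learns the shape of normal traffic and flags whatever deviates from it.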
For example, Mimecast's new CyberGraph email security tool uses AI to detect sophisticated phishing and impersonation attacks, identifying anomalies and applying machine learning to create an identity graph based on the relationships and connections between email senders. This provides security teams with an automated tool that alerts employees in real time to email-borne threats.
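The idea of an identity graph can be sketched in a few lines: record which senders historically email which recipients, then flag messages where no prior relationship exists. This is a deliberately simplified illustration, not Mimecast's actual CyberGraph implementation:

```python
# Illustrative sketch of an identity-graph check: flag senders with no
# prior relationship to a recipient. Names and logic are hypothetical,
# not Mimecast's actual CyberGraph implementation.
from collections import defaultdict

class IdentityGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # sender -> set of known recipients

    def observe(self, sender: str, recipient: str) -> None:
        """Record a legitimate sender -> recipient email."""
        self.edges[sender].add(recipient)

    def is_anomalous(self, sender: str, recipient: str) -> bool:
        """True if this sender has never emailed this recipient before."""
        return recipient not in self.edges[sender]

graph = IdentityGraph()
graph.observe("ceo@example.com", "cfo@example.com")

# A lookalike domain emailing the CFO has no history in the graph.
print(graph.is_anomalous("ceo@examp1e.com", "cfo@example.com"))  # True
print(graph.is_anomalous("ceo@example.com", "cfo@example.com"))  # False
```

In a real system the graph would be built continuously from observed traffic and combined with many other signals; the sketch only shows why relationship history makes lookalike-domain impersonation stand out.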
Setting clear expectations for the return on investment you seek from an AI deployment also makes positive outcomes more likely. Implementing an AI tool may require significant time and resources, which need to be factored in upfront.
Although it is no silver bullet, AI can be a powerful tool for helping organisations build greater resilience, and can lend welcome support to under-pressure security teams. However, it is essential that security leaders understand the role and limits of AI upfront, lest it become yet another solution cluttering up the security environment.