For years, enterprise security was designed around control. Systems were built to defend a perimeter, policies were written to enforce it, and compliance frameworks became the benchmark for whether organisations were doing enough.
With data moving constantly between platforms and workforces no longer tied to a single location, the perimeter has stretched beyond traditional boundaries. Furthermore, AI systems are now beginning to interact with enterprise environments in ways that don’t fit neatly into existing security models.
In that context, the assumptions that shaped cybersecurity architectures over the past decade are being reworked. “The security mentality has shifted from building a castle and hunkering in, to dealing with distributed systems in the cloud and elements not usually under the control of on-premise personnel,” says Mohammed Ashoor, Country Manager for Bahrain, Accelera Digital Group. “This mentality has to shift further toward always assuming risk in the cloud and dealing with data as it flows and is in transition.”
As infrastructure spreads across environments, the traditional markers of trust begin to weaken, pushing identity into a far more central role in how security is applied.

“Traditionally, enterprises anchored trust on network location—the IP address, or where someone was accessing systems from,” Ashoor says. “But with mobile devices, cloud platforms and SaaS, location no longer matters; what matters is who is accessing the data.”
That shift would be significant on its own, but it becomes more complex as non-human actors enter the environment. AI systems are no longer confined to analytics or automation in the background. They are starting to initiate actions, access information, and interact with enterprise platforms alongside employees and partners. In practice, that means security teams are no longer just managing users; they are managing a growing mix of identities, some of which are autonomous.
“The next evolution is gaining clear visibility into exactly who, or what, has access to what,” explains Ashoor.
In practical terms, that means treating AI systems less like tools and more like participants in the environment. They require identities, defined permissions, and clear boundaries around what they can and cannot do.
“AI agents and autonomous systems should be treated as full digital actors,” he adds. “Just as human users have identities, permissions and governance frameworks, AI actors need the same – if not stricter – structured controls, especially as their autonomy increases and human oversight decreases.”
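The idea of treating an AI agent as a governed digital actor can be sketched in a few lines. The sketch below is illustrative only, not a description of any particular product: the identity fields, scope strings, and owner field are all hypothetical, but they capture the pattern Ashoor describes, in which an agent gets an explicit identity, least-privilege permissions, and a human who is accountable for it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """An AI agent registered as a first-class digital actor."""
    agent_id: str          # unique identity, just like a human user account
    owner: str             # human accountable for the agent's behaviour
    scopes: frozenset      # explicit, least-privilege permissions

def is_allowed(agent: AgentIdentity, action: str) -> bool:
    """Deny by default: an agent may only perform actions in its scopes."""
    return action in agent.scopes

# Hypothetical agent that may read a sales database but nothing else.
reporter = AgentIdentity(
    agent_id="agent-reporting-01",
    owner="alice@example.com",
    scopes=frozenset({"read:sales_db"}),
)
```

The deny-by-default check is the key design choice: an agent's autonomy can grow, but its reach is still bounded by the scopes a human granted it.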
This introduces risks that existing security controls were not designed to handle, particularly where AI systems can be influenced or manipulated.
“Agentic identity management is about treating AI agents as first-class identities, governed with the same rigour as human users,” Ashoor says. “But the risks are significantly higher because these agents can be hijacked, manipulated or redirected through techniques like prompt injection.”
That changes the scope of identity governance. It is no longer just about provisioning access and enforcing policies. It is about maintaining confidence that every actor in the system, whether human or machine, is behaving as expected in real time.
This is where the gap between compliance and resilience becomes harder to ignore. Compliance still plays a role by providing structure and accountability, but it is built around defined controls and periodic validation. It does not reflect how systems behave when data is moving continuously across cloud environments, SaaS platforms, and distributed networks.
“Compliance provides guardrails, but it does not account for data that is constantly shifting across cloud, SaaS and distributed networks,” explains Ashoor. “CISOs need a real-time, operational view of security to ensure resilience as systems and data continuously change.”
That shift is already visible in how access is managed. Long-standing privileges are being replaced with time-bound access, granted when needed and removed when they are not, reducing exposure without slowing down the business.
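A minimal sketch of that time-bound, just-in-time pattern follows. The class and method names are hypothetical, and a production system would anchor this in an identity provider rather than an in-memory dictionary, but the core mechanic is the same: every grant carries an expiry, and expired grants simply stop working rather than lingering as standing privilege.

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    principal: str
    resource: str
    expires_at: float  # epoch seconds

class JITAccess:
    """Just-in-time access: grants expire and are swept on each check."""

    def __init__(self):
        self._grants = {}

    def grant(self, principal, resource, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._grants[(principal, resource)] = Grant(
            principal, resource, now + ttl_seconds
        )

    def has_access(self, principal, resource, now=None):
        now = time.time() if now is None else now
        g = self._grants.get((principal, resource))
        if g is None or now >= g.expires_at:
            # Expired or absent: remove any stale grant and deny.
            self._grants.pop((principal, resource), None)
            return False
        return True
```

The `now` parameter exists only to make the sketch testable with a fixed clock; the point is that access is the exception that must be re-requested, not the default that must be revoked.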
It is a small change on paper, but in practice it reflects a different way of thinking about security: one that assumes conditions are always changing.
Preparing for the next wave of risk
Security leaders are also being pushed to think further ahead, as some risks are not immediate but are inevitable.
One risk that is increasingly part of security conversations is quantum computing, particularly its potential to break current encryption standards once the technology becomes mainstream.
“Quantum computing is not an immediate threat, but once it becomes one, it will be too late to react,” Ashoor says. “The speed at which quantum systems could decrypt harvested data means attackers collecting sensitive information today will be able to exploit it the moment quantum capabilities mature.”
The concern is not just about what can be accessed now, but what is being stored and could be exposed later. That is pushing organisations to assess where encryption is vulnerable and how they will transition to post-quantum standards over time.
“That is why enterprises need to take post-quantum cryptography seriously now, understanding their exposure, mapping where vulnerable encryption is used and planning migration paths,” says Ashoor.
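Mapping where vulnerable encryption is used can start as a simple inventory exercise. The sketch below is a toy version of that mapping: the system names and inventory data are invented, and the set of flagged algorithms is a simplified stand-in for a real cryptographic bill of materials. The underlying fact it encodes is accurate, though: algorithms whose security rests on factoring or discrete logarithms (RSA, ECDSA, ECDH, classic Diffie-Hellman) are the ones a large-scale quantum computer could break, while symmetric ciphers such as AES-256 are not in the same danger.

```python
# Public-key algorithms vulnerable to Shor's algorithm on a
# large-scale quantum computer. Illustrative list, not exhaustive.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DH-2048"}

def map_exposure(inventory):
    """Return, per system, the quantum-vulnerable algorithms it relies on.

    `inventory` maps a system name to the set of algorithms it uses.
    """
    return {
        system: sorted(algos & QUANTUM_VULNERABLE)
        for system, algos in inventory.items()
        if algos & QUANTUM_VULNERABLE
    }

# Hypothetical environment: only the asymmetric algorithms get flagged.
inventory = {
    "vpn-gateway": {"RSA-2048", "AES-256-GCM"},
    "backup-archive": {"AES-256-GCM"},
    "api-signing": {"ECDSA-P256"},
}
```

The output of such a mapping is effectively the migration plan's starting point: each flagged system is a candidate for post-quantum replacements, prioritised by how long its data must stay confidential.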
Planning for this does not sit neatly within traditional security roadmaps. It requires anticipating scenarios that have not fully materialised, while still managing day-to-day risk. “This is not something to address when the threat arrives; preparation has to start immediately,” he adds.
As AI becomes more embedded in business processes and data moves more freely across environments, how organisations structure security is coming under greater strain. “Organisations stuck in old models of access and location-based security will be far more exposed,” Ashoor says, pointing to the combined impact of agentic systems and future decryption capabilities.
Organisations are better positioned when they move towards identity-led security, with stronger visibility and control that can adapt as conditions change. “They need to safeguard themselves against long-term threats, rather than reacting after the damage is done,” he says.
That shift also changes how security is measured in practice. It needs to be less about whether controls exist, and more about whether they hold up under pressure.
According to Ashoor, Zero Trust becomes more relevant here, moving beyond implicit trust based on network location and anchoring security in identity.
“If I am accessing data from five or six different devices, the system must continuously verify that it is still me throughout the entire session,” Ashoor says. “In the agentic era, this becomes even harder as AI agents must be authenticated, monitored and governed with strict guardrails to ensure they have not been taken over or altered during their access.”
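The continuous verification Ashoor describes can be reduced to a per-request check rather than a one-time login decision. The sketch below is a deliberately simplified model with invented field names; a real deployment would draw on device posture, behavioural signals, and short-lived tokens from an identity provider. The structure is the point: every request, human or agent, is re-evaluated against the session's identity signals.

```python
def verify_request(session: dict, request: dict) -> bool:
    """Continuous verification: re-check identity signals on every
    request throughout the session, not just at initial login."""
    if request["device_id"] not in session["enrolled_devices"]:
        return False  # unfamiliar device mid-session: deny, force step-up auth
    if request["token"] != session["token"]:
        return False  # token mismatch suggests a hijacked or altered session
    if request["time"] > session["token_expires"]:
        return False  # short-lived tokens bound the window of misuse
    return True

# Hypothetical session state for one user across several enrolled devices.
session = {
    "enrolled_devices": {"laptop-01", "phone-07"},
    "token": "tok-abc",
    "token_expires": 1000,
}
```

For an AI agent, the same gate applies, with the added question of whether the agent's behaviour still matches its granted scopes since the last check.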
Once AI systems are part of that equation, the challenge becomes harder to contain. Identities are no longer static, and neither is behaviour. The focus moves beyond defence to keeping systems aligned, access controlled, and decisions moving as conditions change.
For CISOs, that shifts the mandate: the goal is to keep operating securely without interruption, even as conditions change.
“As these agents become more autonomous, maintaining identity integrity across both humans and machines becomes a top priority for resilience,” says Ashoor.