AI is settling into the fabric of enterprise systems in a way that leaves little room for separation between experimentation and production. What began as isolated use of generative tools has moved into environments where models sit alongside applications, data pipelines, and operational workflows. That shift is particularly visible in cybersecurity, where the pace of change is shaped as much by adversaries as it is by internal ambition. The conversation has moved beyond adoption. It now centres on how AI is secured, governed, and kept under control as it becomes part of the infrastructure organisations rely on.
Kalle Björn, Senior Director, Systems Engineering, Middle East at Fortinet, describes this transition as one that changes the way AI is positioned inside the enterprise. “What began as experimentation with public generative AI (GenAI) tools has evolved into something foundational,” he says. “Many enterprises have begun building private large language model (LLM) environments, integrating AI into core applications, and deploying agentic systems that retrieve data, interact with APIs, and initiate workflows across business systems. As a result, AI is no longer peripheral. Instead, it is becoming part of the infrastructure.”
Once AI moves into that position, the idea of securing it as a separate stack begins to fall apart. The dependencies run too deep across identity, networks, and data flows to treat it as an isolated layer. Attempts to replicate existing security models around AI often miss how these systems behave once they are embedded into production environments.
“The most common mistake is treating AI as a standalone application stack with separate controls. In reality, AI workloads depend on and influence identity systems, network policies, data governance, API enforcement, and operational workflows. Securing AI effectively requires embedding governance, traffic inspection, and policy enforcement at every control point across the architecture,” explains Björn.
This integration is already further along in cybersecurity than in most other domains. AI has moved into the mechanics of detection and response, shaping how threats are identified and contained in real time rather than analysed after the fact.
“Machine learning and GenAI capabilities enable inline inspection, automated threat hunting, rapid response, and secure GenAI adoption, which is delivering proactive protection for AI and with AI,” he says. “Fortinet has a track record of more than 15 years of AI innovation, delivering AI-driven security to stop advanced threats while ensuring AI systems remain protected and trustworthy with more than 500 AI patents issued and pending.”
At the same time, the infrastructure supporting this activity is beginning to show strain. As AI expands across organisations, it drives more traffic, more interactions, and a broader set of entry points. Existing approaches to networking and security are struggling to keep pace as demand increases and boundaries become less defined. “Networking infrastructure, for example, has started to strain under AI traffic. But not only that, with AI use spreading to every corner of an organisation thereby extending the threat perimeter through both internal and external vectors, the traditional fragmented security solutions model is becoming too complex to manage,” says Björn.
Fragmentation and the limits of control
That question of fragmentation sits at the centre of how resilience is being tested. Organisations that rely on separate tools and siloed controls create conditions where AI can expose weaknesses rather than strengthen them. The more AI interacts with different systems, the more those gaps become visible.
“Organisations with fragmented networking and security stacks will struggle to manage AI securely. When policy enforcement, telemetry, identity controls, and API inspection are spread across disconnected systems, AI creates security gaps and new vulnerabilities,” says Björn.
The alternative is not simply consolidation, but coordination. Security and networking begin to converge around a shared framework that allows for consistent enforcement across edge, cloud, and data centre environments. This becomes less about simplifying architecture for its own sake and more about ensuring that activity can be seen and acted on as it moves across the environment.
The same tension appears in how AI systems are introduced and scaled. Some organisations have taken a more measured route, building governance structures alongside their AI initiatives, while others have moved quickly from experimentation to autonomous systems. The difference becomes visible when those systems begin to operate without sufficient architectural guardrails.
“It’s highly likely that these agent-first initiatives will end up being redesigned or abandoned. That won’t be because the underlying technology fails, but because governance was not integrated early. When autonomy outpaces architecture, organisations eventually face regulatory, security, or cost constraints that force redesign,” explains Björn.
Resilience as the outcome
As AI becomes embedded in business processes, the distinction between technical risk and operational risk begins to fade. Systems that influence supply chains, financial decisions, and customer interactions introduce a level of exposure that extends beyond traditional IT boundaries.
“The boundary between IT risk and business risk has collapsed, accelerated by AI’s deep integration into operations, decision-making, and customer engagement,” says Björn.
That shift is changing expectations for security leaders. The role is no longer confined to protecting systems, but extends to maintaining the integrity and availability of the processes those systems support.
“CISOs are no longer responsible only for securing systems. They are responsible for ensuring that AI-augmented business processes remain trustworthy, available, and controllable under stress,” he says.
It also reframes how value is assessed. Efficiency remains relevant, but it is not the primary lens through which AI is judged in security environments. As organisations operate across increasingly complex hybrid networks, the focus turns to maintaining visibility, enforcing consistent controls, and responding at speed when conditions shift.
“AI in relation to cybersecurity has never been just about efficiency, but about resilience,” he says. “Resilience will favour leaders who prepare for AI-driven disruption, test their assumptions, and ensure their organisations can continue operating when automated systems fail.”