Palo Alto Networks has advanced its AI security platform with the launch of Prisma AIRS 3.0. As organisations transition toward a future defined by autonomous agents, Prisma AIRS 3.0 secures the entire Agentic AI lifecycle – enabling enterprises to move from simply observing AI interactions to safely authorising autonomous execution.
The shift toward an AI-powered enterprise introduces systemic security challenges – ranging from unmanaged Shadow AI to the critical new frontiers of agentic identity, runtime security, and automated governance. While many enterprises monitor what AI says, they remain blind to what AI does. Prisma AIRS 3.0 closes this gap, providing visibility and securing agents from design to runtime as they execute complex tasks independently.
Tarek Abbas, Senior Director, Technical Solutions, EMEA South, Palo Alto Networks, said: “Organisations in the Middle East are keen to deploy AI to improve their operations and services, and support the ambitious targets of the regional leaders. However, many also recognise the potential security challenges that Agentic AI introduces, from unmanaged agentic identities to unpredictable runtime behaviours. Prisma AIRS 3.0 provides a comprehensive platform to discover, assess and protect agentic AI, giving our customers the unique ability to confidently, and securely, scale the AI-powered enterprise.”
Prisma AIRS replaces fragmented point solutions with a single platform to manage the primary threats and risks of AI apps and autonomous agents. The new capabilities allow teams to future-proof their operations as agent ecosystems evolve:
- Discover AI Agents wherever they live. Organisations can now instantly inventory AI agents, models, and connections across their entire environment. Prisma AIRS identifies agents running in cloud environments, SaaS platforms and locally on endpoints that traditional tools miss.
- Assess AI Agent risk continuously. Security teams can stop guessing whether an agent is safe. Agent Artifact Security maps out an agent’s architecture and scans for vulnerabilities. AI Red Teaming for agents simulates context-aware agentic attacks, discovers AI-related vulnerabilities, and recommends runtime security policies.
- Protect AI ecosystems in real time, at scale. The AI Agent Gateway, currently available in limited preview, provides a central control plane to enforce agent runtime and identity security, governance and observability.
Evolution of Prisma Browser built for Agentic AI
In tandem, Palo Alto Networks also unveiled a major evolution of Prisma Browser, introducing the industry’s most secure browser built for the Agentic AI era. As employees shift from merely using AI as a tool to relying on autonomous agents that act on their behalf, Prisma Browser converts the web into a secure AI-driven workspace. Users can now unlock new levels of productivity with Agentic AI, without compromising security.
A new class of sophisticated risks unique to autonomous AI has emerged, such as shadow AI agents, prompt injection attacks and agent hijacking. Prisma Browser paves the way for this new era of work by providing agentic capabilities in combination with a secure foundation to protect these autonomous workflows.
Prisma Browser introduces key innovations that bring secure agentic AI to end users by:
- Powering the agentic workspace: Enables organisations to use the LLM of their choice across models and platforms, allowing teams to apply the most effective AI tools to any specific task.
- Securing AI interactions: Automatically discovers user AI activity and enforces content-aware boundaries to keep agents within their intended scope.
- Preventing agent hijacking: Identifies and blocks prompt injection attacks—including malicious instructions hidden within websites designed to hijack AI agents—keeping automated workflows on track and preventing agents from being manipulated into unauthorised actions.
- Enabling global compliance: By assessing the intentions of both human and non-human identities, Prisma Browser enables total accountability and compliance with evolving global AI regulations.