Organisations in the United Arab Emirates (UAE) have recognised that agentic AI is no passing fad. One estimate puts the UAE market at US$34.1 million in 2024 and projects an almost 44% year-on-year surge to US$49.1 million in 2025. A five-year CAGR of 48.3% is then expected to produce a US$352.2-million market by 2030. We find ourselves in a growing ecosystem of autonomy, so we should be very clear about what agentic AI is, and what it is not.

Agentic AI is independent and persistent. It can act as a valet to employees or customers. It can be a compliance steward, a finance strategist, an engineer, or a diagnostician. However, agentic AI is not risk-free. The enterprises that take the plunge with AI agents must grant them far-reaching authority if they are to behave autonomously. That requires a level of trust that must come with caveats. To wind up an AI agent and let it go without due oversight is to invite catastrophe. We should never fall into the trap of assuming AI agents cannot make mistakes or be compromised by malicious parties. So, let’s consider a 10-point security plan for onboarding agentic AI so your organisation can reap the benefits while side-stepping the risks.
1. Treat each agent as an identity
In the simplest case, a threat actor needs only to hijack the security credentials of an AI agent to wreak havoc on an environment. AI agents should be considered machine identities with all the uniqueness and traceability that implies. Each should be assigned a human owner to act as a first-instance contact if an incident occurs.
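As a minimal sketch of this idea (the field names and registration flow are illustrative, not taken from any particular IAM product), each agent can be recorded as a distinct machine identity with a unique, traceable ID and a named human owner:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass(frozen=True)
class AgentIdentity:
    """A machine identity for one AI agent: unique, traceable, human-owned."""
    name: str
    owner_email: str  # first-instance contact if an incident occurs
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example: register an invoicing agent with a named human owner.
invoice_agent = AgentIdentity(name="invoice-triage-agent",
                              owner_email="finance-ops@example.com")
print(invoice_agent.agent_id, "owned by", invoice_agent.owner_email)
```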
2. Apply the principle of least privilege
Just like humans, AI agents have a job to do. But each agent should be granted only enough access to systems to do that job. Generic or catch-all permissions leave too many doors ajar for attackers, so adopters of agentic AI must take care not to over-provision access for the sake of expediency. Instead, use role-based access control (RBAC) or attribute-based access control (ABAC), review privileges regularly, and remove any that are no longer needed.
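A minimal sketch of default-deny RBAC, with hypothetical role names and permission strings, might look like this:

```python
# Hypothetical role definitions: each agent role maps to the minimum
# set of permissions it needs to do its job, and nothing more.
ROLE_PERMISSIONS = {
    "invoice-reader":  {"invoices:read"},
    "invoice-handler": {"invoices:read", "invoices:update"},
}

def is_allowed(agent_role: str, permission: str) -> bool:
    """RBAC check: deny by default, allow only what the role explicitly grants."""
    return permission in ROLE_PERMISSIONS.get(agent_role, set())

assert is_allowed("invoice-reader", "invoices:read")
assert not is_allowed("invoice-reader", "invoices:update")  # not over-provisioned
assert not is_allowed("unknown-role", "invoices:read")      # default deny
```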
3. Implement just-in-time (JIT) access
Agents, like humans, eventually complete their tasks and no longer need the systems access that was necessary to perform them. Persistent access is a common weak link in identity security. JIT access is an established best practice that calls for the revocation of credentials when tasks are complete. Secrets management solutions and privileged access management (PAM) platforms can help with orchestrating diligently logged, auditable JIT access.
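The pattern can be illustrated with a toy in-memory grant store; a real deployment would delegate this to a secrets management or PAM platform and log every issuance and revocation:

```python
import secrets
import time

# Hypothetical in-memory grant store mapping tokens to expiry times.
_grants: dict[str, float] = {}

def issue_jit_credential(ttl_seconds: int = 300) -> str:
    """Issue a short-lived credential that expires when the task window closes."""
    token = secrets.token_urlsafe(32)
    _grants[token] = time.monotonic() + ttl_seconds
    return token

def is_valid(token: str) -> bool:
    """A credential is valid only until its expiry; stale grants are purged."""
    expiry = _grants.get(token)
    if expiry is None or time.monotonic() > expiry:
        _grants.pop(token, None)  # revoke on expiry
        return False
    return True

tok = issue_jit_credential(ttl_seconds=1)
assert is_valid(tok)
time.sleep(1.1)
assert not is_valid(tok)  # access disappears once the task should be done
```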
4. Continuously authenticate
The authentication of any person or autonomous digital process cannot be treated as a one-door event. Many damaging incidents occur mid-session. Sessions should be monitored for any change in agent behaviour, and each escalation of privileges should be accompanied by additional authentication layers. Organisations should pay special attention to sensitive milestones like financial transfers or reconfigurations. Any unwarranted privilege escalation or exfiltration of sensitive data should raise a red flag.
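A sketch of such a step-up gate, with hypothetical action names, could look like this: routine calls pass on a valid session alone, while sensitive milestones demand an additional, freshly verified factor.

```python
# Hypothetical list of sensitive milestones that require step-up authentication.
SENSITIVE_ACTIONS = {"financial_transfer", "reconfigure_system", "export_data"}

def authorise(action: str, session_valid: bool, step_up_passed: bool) -> bool:
    """Continuous authentication: sensitive actions need more than one door."""
    if not session_valid:
        return False
    if action in SENSITIVE_ACTIONS:
        return step_up_passed  # e.g. a re-verified credential or human approval
    return True

assert authorise("read_report", session_valid=True, step_up_passed=False)
assert not authorise("financial_transfer", session_valid=True, step_up_passed=False)
assert authorise("financial_transfer", session_valid=True, step_up_passed=True)
```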
5. Secure the supply chain
Agentic AI often consumes other AI models and data sources through APIs or SaaS integrations. A best-practice audit of all third-party systems and resources is advisable. Any AI model or data pipeline used without validation of cryptographic signatures constitutes a risk, and the reputation of the provider is also relevant. Organisations should embrace CVE monitoring and scan dependencies for vulnerabilities. Secrets management solutions can store and administer third-party API keys rather than embedding them in code or in unencrypted text files.
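These two habits can be sketched together as follows; an HMAC check stands in here for full cryptographic signature validation, and `VENDOR_API_KEY` is a hypothetical environment variable populated by a secrets manager rather than hardcoded:

```python
import hashlib
import hmac
import os

def verify_artifact(data: bytes, signature_hex: str, key: bytes) -> bool:
    """Reject any model or data artifact whose signature does not check out."""
    expected = hmac.new(key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Demo with a locally generated signature: a tampered artifact fails the check.
key = b"publisher-signing-key"
artifact = b"model-weights-v1"
good_sig = hmac.new(key, artifact, hashlib.sha256).hexdigest()
assert verify_artifact(artifact, good_sig, key)
assert not verify_artifact(b"model-weights-v1-TAMPERED", good_sig, key)

# Third-party API keys come from the environment (populated by a secrets
# manager at deploy time), never from source code or plain-text files.
api_key = os.environ.get("VENDOR_API_KEY")  # None if not provisioned
```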
6. Establish network and identity boundaries
Agentic AI must be restricted in the resources to which it can connect. Access control lists and brokered communication through MCP (Model Context Protocol) servers help ensure that, in the event of a compromise, an agent cannot be used for lateral movement. Enterprises should make use of sandboxed development environments and separate control and data planes for production.
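One way to picture the boundary (the per-agent host allowlist below is hypothetical) is a check that blocks any destination not explicitly granted:

```python
from urllib.parse import urlparse

# Hypothetical per-agent allowlist: the agent may reach only these hosts.
AGENT_ALLOWED_HOSTS = {
    "invoice-triage-agent": {"erp.internal.example.com",
                             "mcp.internal.example.com"},
}

def may_connect(agent_name: str, url: str) -> bool:
    """Boundary check: deny any destination outside the agent's allowlist,
    so a compromised agent cannot be used for lateral movement."""
    host = urlparse(url).hostname
    return host in AGENT_ALLOWED_HOSTS.get(agent_name, set())

assert may_connect("invoice-triage-agent", "https://erp.internal.example.com/api")
assert not may_connect("invoice-triage-agent", "https://hr.internal.example.com/api")
```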
7. Maintain audit trails
Log everything for the day of accountability. If a cyber incident occurs, investigators must be able to see what each agent has been doing. This can only be done using encrypted, restricted, tamperproof logs.
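A classic way to make logs tamper-evident is hash chaining, where each entry commits to the hash of the entry before it; the toy version below illustrates the idea (a production system would also encrypt the log and restrict access to it):

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash so that
    any later tampering breaks the chain and is detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify(log: list[dict]) -> bool:
    """Recompute the chain; returns False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

audit: list[dict] = []
append_entry(audit, {"agent": "invoice-triage-agent", "action": "invoices:read"})
append_entry(audit, {"agent": "invoice-triage-agent", "action": "invoices:update"})
assert verify(audit)
audit[0]["event"]["action"] = "invoices:delete"  # tampering...
assert not verify(audit)                         # ...is caught
```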
8. Integrate agentic AI into ITDR
Unlike humans, AI agents can act on a threat actor's behalf in real time, becoming part of the attacker's arsenal. By streaming agent logs into identity threat detection and response (ITDR) systems (or SIEM, XDR, or SOAR platforms), UEBA (user and entity behaviour analytics) can analyse telemetry as if it originated with a human, albeit against different baselines. For example, suspicious behaviour for an AI agent might be behaving too much like a human.
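A crude illustration of the baseline idea, using a simple z-score over a hypothetical per-agent metric (API calls per minute), might look like this:

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float,
                 z_threshold: float = 3.0) -> bool:
    """Crude UEBA-style check: flag behaviour far from the agent's own baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# An agent's call rate is normally machine-steady; a sudden human-like
# slowdown (or a burst) deviates sharply from its learned baseline.
baseline_calls_per_min = [120.0, 118.0, 122.0, 119.0, 121.0]
assert not is_anomalous(baseline_calls_per_min, 120.0)
assert is_anomalous(baseline_calls_per_min, 2.0)  # suspiciously human pace
```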
9. Mandate human oversight
Human-in-the-loop controls are crucial at high-risk milestones. Agents should not, for example, be allowed to grant their own privileges. Identify the most sensitive workflows in your organisation, such as financial or security workflows, and make them subject to human assessment. Back this up with policies and escalation procedures, and allow session shadowing so human operators can step in to prevent unfolding disasters.
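A sketch of such an approval gate, with hypothetical ticket fields, shows the two key properties: sensitive actions pause for a human decision, and an agent can never approve its own request.

```python
# Hypothetical approval queue: sensitive workflows pause here until a
# named human signs off.
PENDING: list[dict] = []

def request_approval(agent: str, action: str, detail: str) -> dict:
    """The agent files a ticket and waits; nothing executes until a decision."""
    ticket = {"agent": agent, "action": action, "detail": detail, "approved": None}
    PENDING.append(ticket)
    return ticket

def human_decide(ticket: dict, approver: str, approve: bool) -> None:
    """Only a human approver may decide; self-approval is rejected outright."""
    if approver == ticket["agent"]:
        raise PermissionError("agents cannot approve their own requests")
    ticket["approved"] = approve
    ticket["approver"] = approver

t = request_approval("invoice-triage-agent", "financial_transfer",
                     "AED 50,000 to vendor 119")
human_decide(t, approver="finance-ops@example.com", approve=True)
assert t["approved"] is True
```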
10. Train users to be risk-aware
Humans are involved not only in the use of agentic AI but in its governance. That workforce must understand how to construct policy, how to escalate incidents, and how to safely design, implement and use AI. Make sure staff can identify AI risks. Run threat simulations and red-team exercises to show the consequences of errors.
A secure AI ecosystem
Identity security has become the foundation of a mature threat posture in 2025. You will note that many of the recommendations here are repurposed from controls that already govern humans and other machine identities. What sets agentic AI apart is scale and speed, and those same qualities amplify the harm a compromised AI agent could cause. AI agents can achieve more in a second than a human can in a year. And cyber compromise at the speed of light is something about which every organisation should be concerned.