For much of the past decade, internet safety sat largely within the remit of IT teams, platform providers, and regulators. Risks were framed in technical terms, controls were infrastructure-led, and responsibility was often delegated down the organisation. That model no longer reflects how the digital world operates.
By 2026, the internet is inseparable from everyday business activity. Identity has replaced the perimeter. AI systems sit inside workflows rather than at the edge. Employees, customers, and partners interact continuously with intelligent systems that influence decisions, behaviour, and trust—often without clear visibility into how those outcomes are shaped. In this environment, digital safety is no longer a purely technical concern; it has become an operational and leadership issue.
This is the context in which Safer Internet Day 2026 is being marked. This year’s theme, ‘Smart tech, safe choices’ – exploring the safe and responsible use of AI – reflects a growing consensus across the security and technology industry: the challenge is no longer whether AI is being adopted, but how responsibly it is being used.
From infrastructure risk to behavioural risk
Enterprise security strategies were traditionally built around infrastructure—networks, endpoints, and applications. Today, risk is increasingly behavioural. It emerges from what users trust, what they authorise, and what systems are permitted to act autonomously.
AI has accelerated this transition, as generative models influence how people search, learn, communicate, and make decisions. Inside organisations, AI tools are embedded across productivity platforms, analytics workflows, and customer engagement systems, often faster than policies can keep pace. The gap between formal controls and real-world use has become a material source of exposure.
This behavioural layer is now where digital safety begins—or breaks down.
Meriam ElOuazzani, Regional Senior Director for the Middle East, Turkey, and Africa at SentinelOne, frames the issue in terms of awareness rather than restriction. “Making the proper judgments is important, as the topic ‘Smart tech, safe choices’ reminds us. Since kids and teenagers commonly utilise AI tools, their safety is more determined by the decisions they make than by filters or settings.”
While her focus is youth digital safety, the parallel for enterprises is clear. Policies and safeguards matter, but they are effective only when people understand how technology shapes behaviour. “AI can quietly affect behaviours, views, and trust while supporting innovation and learning,” says ElOuazzani. She notes that this dynamic applies just as strongly to employees using AI-assisted tools as it does to younger users navigating online platforms.
Identity has become the primary attack surface
As behaviour defines exposure, identity defines access. Across sectors, security leaders are seeing the same pattern: attackers are no longer primarily exploiting technical vulnerabilities; they are gaining entry by compromising credentials.
Sophos’ upcoming Active Adversary Report highlights the scale of the issue. Compromised credentials accounted for 42.06 per cent of attacks in 2025, making them the most common initial access vector. This reflects how automation and AI have reshaped social engineering, allowing attackers to operate with speed, precision, and volume.
“The way attackers are using automation and generative AI to massively increase the speed and volume of their attacks suggests that attacks will become faster and more sophisticated,” says John Shier, Field CISO for Threat Intelligence at Sophos. “The best approach to protecting our identities and digital data is to take a proactive stance on defence.”
What makes this challenge particularly difficult is where the attacks now land. “Criminals are increasingly targeting people rather than devices,” Shier explains, pointing to AI-generated phishing emails, messages, and voice-based scams designed to bypass instinctive scepticism.
For enterprises, this has forced a reassessment of priorities. Patch management and endpoint security remain necessary, but they are no longer sufficient on their own. Credential hygiene, phishing-resistant authentication, and user awareness have become frontline controls, inseparable from broader security strategy.
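To make the idea of credential hygiene as a frontline control more concrete, the sketch below shows one way such checks might be expressed in code. It is a minimal illustration only: the account fields, thresholds, and the set of methods treated as phishing-resistant are assumptions for the example, not drawn from any particular identity provider or from the sources quoted here.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record shape for an identity inventory export; the field
# names are illustrative, not tied to any specific directory or IdP product.
@dataclass
class Account:
    name: str
    last_password_rotation: datetime
    mfa_method: str          # e.g. "none", "sms", "totp", "fido2"
    is_service_account: bool

MAX_PASSWORD_AGE = timedelta(days=90)                 # assumed policy threshold
PHISHING_RESISTANT = {"fido2", "passkey"}             # assumed to qualify as phishing-resistant

def credential_hygiene_findings(accounts: list[Account]) -> list[str]:
    """Flag accounts that fall short of basic credential-hygiene expectations."""
    findings = []
    now = datetime.utcnow()
    for acct in accounts:
        if now - acct.last_password_rotation > MAX_PASSWORD_AGE:
            findings.append(f"{acct.name}: password older than {MAX_PASSWORD_AGE.days} days")
        if acct.mfa_method not in PHISHING_RESISTANT:
            findings.append(f"{acct.name}: second factor is not phishing-resistant ({acct.mfa_method})")
        if acct.is_service_account and acct.mfa_method == "none":
            findings.append(f"{acct.name}: service account with no second factor")
    return findings
```

The point is less the specific rules than the posture: credentials are treated as something to be continuously audited, not configured once and forgotten.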
Enterprise AI and the visibility gap
Alongside identity risk sits a different but related challenge: visibility. Many organisations are adopting AI faster than they can account for it.
AI now appears across approved platforms, shadow IT tools, third-party services, and embedded agents operating with varying degrees of autonomy. For security and risk teams, the immediate concern is not model performance, but knowing where AI exists and what it can access.
Chris Cochran, Field CISO and Vice President of AI Security at SANS Institute, describes visibility as the starting point. “The first step is visibility. Organisations need to understand where AI is being used across the business—not just the tools leadership approved, but the AI showing up in workflows, plugins, agents, and third-party platforms.”
That visibility must extend beyond surface-level deployment. “Maintaining an AI inventory is quickly becoming table stakes,” Cochran adds, highlighting the importance of understanding models, data sources, and external dependencies through mechanisms such as AI Bills of Materials.
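What an AI inventory captures will vary by organisation, but a minimal sketch can show the shape of the idea. The structure below is an assumption for illustration, not a formal AI Bill of Materials standard or anything prescribed by SANS: it simply records, for each AI system, the models, data sources, external dependencies, and degree of autonomy involved.

```python
from dataclasses import dataclass, field

# Illustrative AI-BOM entry; field names are assumptions, not a formal standard.
@dataclass
class AIBOMEntry:
    system_name: str                    # e.g. "customer-support-copilot" (hypothetical)
    owner: str                          # accountable team or individual
    models: list[str]                   # models in use, hosted or fine-tuned
    data_sources: list[str]             # data the system can read
    external_dependencies: list[str]    # third-party APIs, plugins, agents
    autonomy_level: str                 # "suggest", "act-with-approval", or "autonomous"

@dataclass
class AIInventory:
    entries: list[AIBOMEntry] = field(default_factory=list)

    def systems_touching(self, data_source: str) -> list[str]:
        """The basic visibility question: which AI systems can access this data?"""
        return [e.system_name for e in self.entries if data_source in e.data_sources]
```

Even a simple register like this lets a security team answer the questions that matter first: where AI is running, who owns it, and what it can reach.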
As AI agents take on greater autonomy, traditional service-account assumptions begin to fail. “Agents should be treated like operators on the network, not traditional service accounts,” Cochran explains. He highlights the need for explicit identity, short-lived authentication, and continuous monitoring. In practice, this shifts security from static trust models to controlled, observable access.
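One way to picture the difference between a traditional service account and an agent treated as an operator is sketched below. The mechanism is hypothetical and deliberately simplified: each agent carries its own identity, receives a short-lived credential rather than a standing secret, and every action it attempts is recorded for monitoring. Token lifetimes and function names are assumptions for the example.

```python
import secrets
import time

# Hypothetical sketch: per-agent identity, short-lived credentials, and an
# audit trail of actions. Values and names are illustrative assumptions.
TOKEN_TTL_SECONDS = 900   # 15-minute credential instead of a long-lived secret

_issued: dict[str, tuple[str, float]] = {}   # agent_id -> (token, expiry time)
_audit_log: list[dict] = []

def issue_agent_token(agent_id: str) -> str:
    """Issue a short-lived credential bound to a specific agent identity."""
    token = secrets.token_urlsafe(32)
    _issued[agent_id] = (token, time.time() + TOKEN_TTL_SECONDS)
    return token

def authorise_action(agent_id: str, token: str, action: str, resource: str) -> bool:
    """Check the credential is current, then record the attempt for monitoring."""
    stored = _issued.get(agent_id)
    allowed = stored is not None and stored[0] == token and time.time() < stored[1]
    _audit_log.append({
        "agent": agent_id, "action": action, "resource": resource,
        "allowed": allowed, "at": time.time(),
    })
    return allowed
```

The design choice this reflects is the one Cochran describes: trust is never assumed from the account type, it is granted briefly, tied to a named identity, and observable after the fact.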
From awareness to accountability
What connects youth digital safety, credential abuse, and enterprise AI governance is a shared reality: digital safety now depends on accountability.
Technology remains essential, but it no longer determines outcomes on its own. Decisions made by users, employees, leaders, and autonomous systems shape exposure every day. Controls work only when they are understood, applied consistently, and aligned with how technology is actually used.
ElOuazzani emphasises this shift succinctly: “When we give children knowledge and cyber awareness, they use technology the right way. The only way to be safe in an AI-driven future is to make wise judgments for both people and machines.”
The digital world is now woven into how we learn, work, and operate as organisations. Making the internet safer is no longer about limiting access or reacting to threats after the fact. It is about equipping people to make better decisions, ensuring systems operate within clear boundaries, and recognising that responsibility does not sit with technology alone. As Safer Internet Day 2026 reminds us, safety in an AI-influenced world is built through judgment, governance, and everyday choices that shape trust at scale.