Running AI that transforms the bottom line requires more than deployment. Joe Dunleavy, Regional CTO and Global Head of the Dava.X AI group at Endava, outlines what it takes to manage AI systems at scale
AI has moved into day-to-day operations in a way that leaves little room for loose execution. It is shaping decisions, feeding into customer journeys, and sitting inside systems that businesses rely on. The question is no longer whether it works, but whether it can be managed with enough control and consistency to deliver real outcomes.
That is where organisations begin to diverge. Some continue to treat AI as something that can be layered onto existing systems. Others have started to rethink how it is built, measured, and run. The difference becomes visible once systems move beyond controlled environments.
In conversations around GenAI, the focus still tends to drift toward models, which could be a distraction from what actually determines success, according to Joe Dunleavy, Regional CTO and Global Head of the Dava.X AI group at Endava.
“Specific to Gen AI, what we’re seeing very clearly is that successful AI scale starts with strong data and is less about models,” he says. “Organisations that get this right invest early in acquiring, integrating, and preparing high-quality ‘gold standard’ data, and crucially, in putting systems in place to continuously measure AI outputs. Without that foundation, scale actually becomes a risk. Poor data propagates at machine speed, and fixing it often means additional costs and lost opportunity.”
Weak data does not stay contained. Instead, it feeds into workflows and decisions, affecting outcomes in ways that are difficult to trace or correct. Organisations making progress treat data quality as an ongoing discipline rather than a one-time exercise.
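The "continuously measure AI outputs" point above can be sketched in code. The following is a hypothetical illustration, not Endava's tooling: model outputs are scored against a curated gold-standard set, and a release is blocked when quality falls below an agreed threshold, so poor data is caught before it propagates at machine speed.

```python
# Hypothetical sketch: gating releases on output quality measured
# against a curated "gold standard" evaluation set.

from dataclasses import dataclass


@dataclass
class GoldExample:
    prompt: str
    expected: str


def exact_match_score(outputs: dict, gold: list) -> float:
    """Fraction of gold prompts whose output matches the expected answer."""
    hits = sum(1 for ex in gold if outputs.get(ex.prompt) == ex.expected)
    return hits / len(gold)


def gate_release(outputs: dict, gold: list, threshold: float = 0.95) -> bool:
    """Allow deployment only when measured quality meets the threshold."""
    return exact_match_score(outputs, gold) >= threshold
```

Exact-match scoring is the simplest possible metric; in practice teams would swap in task-appropriate measures, but the gating pattern stays the same.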
However, data alone does not resolve the issue. How teams are organised and how work moves across functions becomes just as important. AI tends to expose silos that previously slowed delivery but did not necessarily break it.
“Structurally, the organisations making progress are breaking down silos. They’re bringing together business, IT, and data teams into unified delivery models, supported by standardised tools and governance frameworks. That reduces friction and improves visibility into AI performance,” explains Dunleavy.
Progress also tends to come from narrowing the focus. High-impact use cases are prioritised first, then expanded iteratively, allowing systems to stabilise before scaling further.
“When innovation accelerates at breakneck speed, thoughtful regulation becomes a stabilising force”
Joe Dunleavy, Regional CTO and Global Head of the Dava.X AI group, Endava
Beyond deployment
Getting a system into production signals the start of a more demanding phase. Systems have to be monitored, adjusted, and kept aligned with changing business needs. At scale, the challenge is keeping them useful, safe, and cost-effective over time, and that requires a deliberate operating model and methodology.
“We typically think about this in four parts,” explains Dunleavy. “The first is performance and user satisfaction. You need to track not just technical metrics like latency and accuracy, but also how users experience the system, and continuously refine both.”
Cost tends to follow quickly. Left unchecked, agentic systems can become expensive, particularly as usage scales and model choices expand.
“Second is cost optimisation. Agentic systems can become expensive quickly, so leaders need to actively manage model choice, infrastructure, and usage patterns,” he adds.
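Actively managing model choice and usage patterns can be sketched as a simple router. Model names and per-token prices below are illustrative placeholders, not real pricing: cheap requests go to a smaller model, and spend is tracked against a budget.

```python
# Hypothetical sketch: routing by task complexity to control cost.
# Model names and prices are illustrative, not real vendor pricing.

PRICE_PER_1K_TOKENS = {"small-model": 0.0005, "large-model": 0.01}


class CostRouter:
    def __init__(self, monthly_budget: float):
        self.budget = monthly_budget
        self.spend = 0.0

    def choose_model(self, complexity: float) -> str:
        """Route low-complexity requests (0..1 score) to the cheaper model."""
        return "large-model" if complexity > 0.7 else "small-model"

    def record_usage(self, model: str, tokens: int) -> float:
        """Accumulate spend for one call and return its cost."""
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS[model]
        self.spend += cost
        return cost

    def over_budget(self) -> bool:
        return self.spend > self.budget
```

The complexity score itself would come from a classifier or heuristics; the design point is that model choice is an explicit, monitored decision rather than a default.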
The third aspect to consider is lifecycle management, as these systems need to be treated as something that evolves.
“These systems can’t be static. They need structured retraining, versioning, and, where necessary, controlled rollback,” says Dunleavy. “And finally, continuous improvement. The organisations doing this well are embedding feedback loops that allow these systems to evolve alongside business needs, not drift away from them.”
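The versioning-and-rollback part of that lifecycle can be illustrated with a minimal registry. This is a generic sketch under assumed names, not any particular MLOps tool: versions are registered in order, one is active, and rollback reverts to the previous one in a controlled way.

```python
# Hypothetical sketch: structured model versioning with controlled rollback.

class ModelRegistry:
    def __init__(self):
        self.versions = []   # registration order, oldest first
        self.active = None

    def register(self, version: str):
        self.versions.append(version)

    def promote(self, version: str):
        """Make a registered version the live one."""
        if version not in self.versions:
            raise ValueError(f"unknown version: {version}")
        self.active = version

    def rollback(self) -> str:
        """Revert to the version registered immediately before the active one."""
        idx = self.versions.index(self.active)
        if idx == 0:
            raise RuntimeError("no earlier version to roll back to")
        self.active = self.versions[idx - 1]
        return self.active
```

Real registries add artefact storage, metadata, and approvals, but the invariant is the same: rollback is a first-class, auditable operation, not an emergency improvisation.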
Running these systems over time starts to change how their value is judged. Efficiency has been the easiest way to justify early AI investment, but this falls short as expectations rise.
“This really gets to the heart of a shift many organisations are now grappling with,” he explains. “Until relatively recently, the focus of AI endeavours has almost entirely been on productivity, automation, and efficiency gains. And while those are real, they’re ultimately incremental and offer only a fleeting competitive edge. They optimise business, but don’t necessarily transform it.” Instead of asking “how do we do this faster?”, the organisations moving ahead are asking “what can we now do that wasn’t possible before, with high quality built in?”
AI needs to be measured against business outcomes rather than technical milestones. Revenue growth, customer acquisition, and speed of innovation become the real indicators of impact. In a cost-conscious environment, marginal gains don’t justify investment; AI has to prove its ability to drive transformation and ultimately increase revenue.
Control and accountability at scale
As AI systems take on more responsibility, accountability becomes harder to defer. It needs to be addressed before systems are widely deployed.
“Accountability has to be designed in from the outset. It can’t be something you retrofit once systems are already operating at scale. At Endava, we advocate for putting in layered controls. That includes human-in-the-loop oversight for critical decisions, formal risk management frameworks, and safeguards implemented in what we call ‘policy as code’ as part of Dava.Flow,” says Dunleavy.
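The “policy as code” idea can be sketched generically. The code below is a hypothetical illustration of the pattern only; Endava’s Dava.Flow implementation is not public and is not shown here. Safeguards are expressed as executable rules evaluated before an action is taken, and anything that fails a rule, or is flagged as high-impact, escalates to a human.

```python
# Hypothetical sketch of "policy as code": safeguards as executable rules
# checked before an AI-initiated action proceeds. Generic illustration only.

from typing import Callable

Policy = Callable[[dict], bool]  # returns True if the action is permitted


def max_loan_amount(limit: float) -> Policy:
    """Rule factory: loans above the limit are never auto-approved."""
    return lambda action: action.get("amount", 0) <= limit


def requires_human_review(action: dict) -> bool:
    """High-impact decisions always route to a human reviewer."""
    return not action.get("high_impact", False)


def evaluate(action: dict, policies: list) -> str:
    """Allow only if every policy passes; otherwise escalate."""
    if all(policy(action) for policy in policies):
        return "allow"
    return "escalate_to_human"
```

Because the rules are code, they can be versioned, tested, and audited like any other artefact, which is what makes this a design-time control rather than a retrofit.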
Internal controls are only part of the story. Accountability also needs to extend to the customer. “Consider an AI system that declines a loan or flags a transaction,” he says. “For such high-impact use cases, there must be a clear and accessible path for that decision to be challenged and reviewed by a human, potentially even the end customer. That transparency and detailed testing is critical.”
Rather than slowing adoption, these controls create the conditions for systems to operate with confidence.
“When innovation accelerates at breakneck speed, thoughtful regulation becomes a stabilising force. This isn’t about slowing progress. It’s about sustaining it.”
AI does not sit neatly within a single function, yet responsibility often remains shared across teams. That creates gaps when decisions need to be made or when issues arise. What’s becoming non-negotiable is clear ownership: leaders need to move beyond shared responsibility models and define who is ultimately accountable for AI outcomes, especially as roles begin to overlap in new ways.
“CIOs and CTOs need to understand risk and governance. CISOs need to engage with data and AI. And CDOs need to connect data strategy directly to business value. In short, AI is forcing a convergence of roles, and the leaders who succeed will be the ones who can operate across these boundaries, not within them.”