At AWS re:Invent 2025, the conversation across the show floor feels markedly different from even a year ago. The era of exploratory pilots and isolated proofs of concept is giving way to a more grounded reality: enterprises are now focused on how to scale AI, govern it and tie it directly to business value. Leaders are no longer wondering whether AI will matter — they are wrestling with the practicalities of operationalising it across complex, distributed organisations.
Few people see this transition unfold as closely as Shaown Nandi, Director of Technology at AWS, who leads field technology teams and works directly with customers as they modernise and reimagine their businesses. It gives him a clear vantage point into what enterprises are struggling with and what they are getting right. “I spend a lot of time talking to CXOs,” he says. “Over the last three years, it’s been all AI discussions.”
From hype to measurable value
Nandi has watched the industry move from enthusiasm to discipline. “Boards or executives were saying, ‘I have to do something with AI, and I want it to be cool.’ That’s not how business works,” he says. The shift he sees now is driven by a more rigorous mindset: enterprises are asking what problem they are solving and what outcome they expect. “You need returns,” he adds, emphasising that the days of open-ended experimentation are over.
This clarity is equally important when it comes to people. Many early AI programmes landed poorly because organisations underestimated the human impact. “You weren’t thinking about how to take your employees, your teams, your customers on the journey,” he says. That oversight created fear and resistance, but Nandi notes that the companies accelerating fastest today are those who frame AI as a capability for employees, not a replacement for them. “Your employees are builders. Everyone’s building things now to help you go on this journey.”
Build vs. buy: the new rules of differentiation
The enduring debate — should enterprises build AI systems or buy them? — now has sharper edges.
Nandi’s view is simple: build where it differentiates you, buy where it doesn’t.
“You should build the things that are special to your business, that give you an advantage or a differentiator,” he says. But horizontal capabilities, such as HR systems or workflow tooling, rarely justify proprietary development.
What’s interesting is how the definition of “who can build” has changed. Conventional wisdom once held that only large enterprises had the resources to develop AI systems. Nandi no longer sees that as true. Democratised tooling has lowered the barrier dramatically.
“You can build much more quickly than you could ever build in the past,” he says. Startups and mid-sized firms can now assemble AI workflows with a blend of development tools, reasoning agents, and infrastructure patterns that previously required specialist teams. The calculus has shifted: capability now matters more than company size.
For all the noise around models and agents, Nandi is clear on one thing: the enduring advantage remains data.
“What makes you special is your data,” he says. Years of cloud storage strategy, data modernisation, and governance investments are finally yielding a competitive edge. AI is making it possible to monetise data in ways many enterprises once considered unrealistic — from granular personalisation to automated analysis to market expansion.
He also highlights how AI is widening international opportunity. Localisation that once demanded major investment — language support, regulatory understanding, customer handling — is now far more accessible. “AI makes those things much more possible,” he says, and that opens doors for enterprises that once hesitated to enter new markets.
Agents: ambition is good, but sequencing matters
Agentic systems have dominated re:Invent conversations this year, seen as a pathway to automating complex workflows end-to-end. Nandi believes in their potential but warns enterprises against over-extending too early.
“I encourage companies to think big… but start small,” he says. Rather than multi-year transformation plans, he urges teams to take a large process, carve off a small slice, and deploy an agent against it. “You’ll learn from that, get immediate returns, and then you can scale it.”
What many leaders still struggle with is letting go. “It’s incredibly common,” he says of companies clinging to investments that no longer make sense. “When I build something, my first inclination is I want to love it, keep it — it’s my thing that I built. Humans feel that way.” But AI demands a different rhythm. “If something doesn’t return value, stop it. Go build something else,” he says. “The credit is not the thing you built — it’s the impact you had on that problem.”
As enterprises embrace this faster, more iterative approach to AI, another shift is taking place beneath the surface: the economics of running these systems are changing just as rapidly.
The next phase: cost, scale, and quiet ubiquity
Asked about what comes next, Nandi highlights the economic shift underway. Targeted agents, specialised models, and purpose-built infrastructure are converging to dramatically reduce the cost of running AI systems. He points to recent remarks from AWS leadership predicting inference costs could fall by 90 percent in the coming years.
The impact of that shift is enormous: projects that once seemed marginal suddenly become viable at scale. Entire categories of workflows — from analysis to customer operations — could be automated affordably.
Looking ahead, Nandi doesn’t expect companies to brand themselves as “AI companies.” Instead, AI will become embedded, expected, and pervasive. The competitive advantage will lie not in claiming AI leadership, but in deploying systems that are secure, private, efficient, and tied tightly to business outcomes.