AWS re:Invent has returned to Las Vegas, bringing together tens of thousands of cloud leaders, engineers, and enterprise executives for a week of technical sessions and strategy updates. It’s a familiar rhythm for the industry—one of the most watched checkpoints in the enterprise technology calendar.
AWS enters this year’s event with an expanded footprint and a sharpened focus. The company has reached $132 billion in annual recurring revenue, and its infrastructure now supports the majority of the world’s largest organisations. More than 90 percent of Fortune 100 companies—and most of the Fortune 500—use AWS for cloud, security, analytics, and AI workloads, a reminder of how deeply embedded the platform has become in global enterprise operations.
Against this backdrop, Matt Garman delivered his much-anticipated re:Invent keynote. His opening remarks cut directly to the moment the industry finds itself in.
“Every single customer experience, every single company, frankly every single industry is in the process right now of being reinvented,” he said. “We’re still in the early days of what AI is going to deliver.”
He acknowledged the gap many enterprises feel between the promise of AI and the returns they’re seeing. “When I speak to customers, many of you haven’t yet seen the returns that match up to the promise of AI,” Garman said. “The true value of AI has not yet been unlocked.”
The inflection point, he argued, is the shift away from assistants and toward autonomous AI agents. “AI assistants are starting to give way to AI agents that can perform tasks and automate on your behalf,” he said. “This is where we’re starting to see material business returns.”
He also reiterated AWS’s belief that the coming years will see an explosion in agentic workloads: “I believe that in the future, there’s going to be billions of agents inside of every company and across every imaginable field.”
Before diving into the launches, Garman grounded the conversation in the infrastructure that underpins AWS’s AI ambitions. “All of this starts with the foundation of secure, available and resilient planet-scale infrastructure that is frankly unmatched anywhere,” he said. He cited AWS’s 38 Regions, 120 Availability Zones, annual capacity additions of 3.8GW, and more than nine million kilometres of fibre optic cable. “That’s enough optical cabling to reach from the Earth to the Moon and back over 11 times.”
The keynote then moved into a series of launches that outline AWS’s 2025 direction across agents, silicon, developer tooling, hybrid cloud, and modernisation.

Key announcements from Matt Garman’s re:Invent 2025 keynote
1. Frontier Agents: AWS’s new class of autonomous AI for enterprise work
The centrepiece of the keynote was Frontier Agents, a new category of AI agents designed to autonomously execute complex tasks, scale across workflows, and run for hours or days.
“These agents are autonomous,” Garman said. “You direct them towards a goal, and they figure out how to achieve it.”
AWS introduced three Frontier Agents:
• Kiro Autonomous Agent — full-stack development automation
The agent maintains context across repositories, plans tasks, updates code, tests changes, and generates merge-ready pull requests.
In one demonstration, it updated a shared library across 15 microservices, ran full test suites, and produced 15 merge-ready PRs—completely in the background.
• AWS Security Agent — continuous, upstream security
The agent reviews design documents, scans code, checks pull requests, and performs on-demand penetration tests.
“It embeds security expertise upstream,” Garman said, so teams can address security earlier in the development process and review it more frequently.
• AWS DevOps Agent — autonomous operations and incident response
It monitors systems, detects issues, traces root causes, and recommends remediations.
The example shown: elevated authentication failures traced to a misconfigured IAM policy introduced via CDK—diagnosed and patched automatically.
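The keynote did not show the underlying code, but a minimal CDK (TypeScript) sketch can illustrate the kind of misconfiguration described. Everything here, including the stack, role, and table names, is hypothetical; the point is that a small infrastructure-as-code change can quietly break access for a dependent service.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

// Hypothetical stack, not from the keynote: a service role whose CDK-defined
// policy no longer matches the resource it needs, so requests start failing
// after deployment, the kind of drift the DevOps Agent demo traced.
export class AuthServiceStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const authServiceRole = new iam.Role(this, 'AuthServiceRole', {
      assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
    });

    authServiceRole.addToPolicy(new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['dynamodb:GetItem', 'dynamodb:Query'],
      // Bug: the table was renamed to 'user-sessions-v2' elsewhere, so this
      // ARN no longer matches anything and every lookup is denied.
      resources: [
        `arn:aws:dynamodb:${this.region}:${this.account}:table/user-sessions`,
      ],
    }));
  }
}
```

An agent with access to deployment history and request logs could, in principle, correlate the spike in failures with a change like this and propose correcting the ARN, which is the behaviour the demonstration showed.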
Together, the Frontier Agents reflect AWS’s view that enterprises need AI systems that do work, not simply generate text.
2. Amazon Q expands for developers, workers, and enterprise automation
AWS announced major updates to Amazon Q, positioning it as the enterprise-wide copilot suite for developers, business users, and internal teams.
• Amazon Q Developer
New reasoning capabilities, tighter integration with Kiro, and deeper understanding of application architectures. Q can now diagnose issues, propose patches, summarise logs, and explain complex code paths in natural language.
• Amazon Q Apps
Business users can now build applications by describing workflows conversationally. Q generates:
- the application logic
- the UI
- the connectors
- and the data interactions
AWS is positioning Q Apps as one of its simplest pathways to internal automation.
These updates align directly with the keynote’s theme of AI moving from experimentation to operational output.
3. Kiro becomes Amazon’s official AI development environment
AWS revealed that Amazon has standardised internally on Kiro for all AI-enabled development work.
Garman shared a telling example: a re-architecture project originally estimated at 30 developers over 18 months was delivered by a 6-person team in 76 days using Kiro’s agentic workflow.
“This isn’t a 10 to 20 percent efficiency gain,” he said. “This is orders of magnitude more.”
AWS also announced a one-year Kiro offer for startups, covering up to 100 seats.
4. AWS Transform expands into full-stack AI-powered modernisation
Technical debt remains one of the costliest burdens for enterprises. Garman cited Accenture’s estimate that tech debt costs US organisations $2.4 trillion annually.
AWS has expanded Transform, its AI modernisation engine, to support:
- proprietary internal languages
- large ERP and supply chain systems
- runtime upgrades
- monolith-to-microservices transitions
- Angular-to-React rewrites
- containerisation and OS migrations
Transform Custom now allows customers to define their own patterns and apply them at scale.
Transform customers have already analysed more than one billion lines of mainframe code and are rewriting five million lines of Windows code per month as they migrate to Linux.
5. Amazon Connect gets new AI-powered customer experience features
AWS introduced new AI enhancements for Amazon Connect, its cloud contact centre platform. These include:
- real-time agent assistance
- improved sentiment analysis
- AI-driven call summarisation
- advanced routing based on intent
- retrieval of context from enterprise systems
These capabilities integrate with Bedrock models and Frontier Agents, allowing organisations to build end-to-end AI-enhanced customer experience workflows—from intake to resolution.
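Connect's new features are managed by AWS, so no custom code is required, but a rough sketch shows what a comparable Bedrock-backed summarisation step could look like if a team wired one up themselves. The model ID, prompt, and region below are assumptions for illustration, not details from the announcement.

```typescript
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from '@aws-sdk/client-bedrock-runtime';

// Illustrative only: a standalone call-summarisation step using the Bedrock
// Converse API, roughly analogous to what Connect's built-in summarisation
// provides as a managed feature.
const client = new BedrockRuntimeClient({ region: 'us-east-1' });

export async function summariseCall(transcript: string): Promise<string> {
  const response = await client.send(new ConverseCommand({
    // Assumed model choice; any Bedrock text model could be substituted.
    modelId: 'amazon.nova-lite-v1:0',
    messages: [{
      role: 'user',
      content: [{
        text:
          'Summarise this contact-centre call in three bullet points, ' +
          "noting the customer's intent and whether the issue was resolved:\n\n" +
          transcript,
      }],
    }],
  }));

  return response.output?.message?.content?.[0]?.text ?? '';
}
```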
6. AI Factories bring AWS AI into customer data centres
AWS announced AI Factories, dedicated environments that allow customers to run AWS AI infrastructure directly in their own facilities.
AI Factories operate like a private AWS Region with access to:
- Trainium UltraServers
- NVIDIA GB300 UltraServers
- Bedrock and SageMaker
- AWS networking and control planes
“These environments operate exclusively for each customer,” Garman said. That isolation makes them especially relevant for regulated industries and sovereign cloud requirements.
7. New AI silicon: Trainium 3 UltraServer and next-generation GPUs
AWS unveiled Trainium 3 UltraServer, a major upgrade designed for the next wave of large-scale training and inference workloads. Trainium adoption has accelerated rapidly, with more than one million chips now deployed.
AWS also announced new NVIDIA GB300-based P6 instance families, paired with continued improvements in GPU reliability.
“We sweat the details—minor things like debugging BIOS to prevent GPU reboots,” Garman said. “Nothing is too small.”
8. Amazon Nova family and the launch of Nova Forge
AWS expanded the Nova model family for reasoning, multimodal intelligence, and speech-to-speech generation.
With Nova Forge, enterprises can build tailored models by combining:
- proprietary datasets
- AWS-curated data
- pre-trained checkpoints
The approach gives enterprises flexibility without requiring full custom training infrastructure.
9. Bedrock models integrate directly into Writer
Bedrock’s reach expanded with support for Writer’s agentic platform. Models from Bedrock—including Palmyra and the new Nova family—can now be used natively inside Writer for enterprise workflows, content automation, and AI agent development.
10. Quantum advances with Ocelot
AWS highlighted progress on Ocelot, its quantum computing prototype chip.
“Ocelot is a breakthrough in making quantum a reality,” Garman said, emphasising its 90 percent reduction in the cost of quantum error correction—one of the biggest bottlenecks in practical quantum computing.
11. Reinforced commitment to startups
Garman reiterated AWS’s centrality to the global startup ecosystem.
“More unicorn startups have been built on AWS than anywhere else,” he said.
He highlighted that:
- 85 percent of Forbes 2025 AI 50
- 85 percent of CNBC Disruptor 50
- and over half of all VC-funded startups worldwide
run on AWS infrastructure.
“Thank you to all of you, the innovators out there.”
AWS’s enterprise AI roadmap
Garman’s 2025 keynote was a structured articulation of AWS’s priorities: AI agents that execute work end-to-end, infrastructure optimised for scale and flexibility, modernisation powered by AI, and sovereign architectures that match regional regulations.
Enterprises spent 2024 experimenting with AI. AWS is positioning 2025 as the year they operationalise it—across development, security, operations, customer experience, and hybrid cloud.
The message was clear, and the product launches backed it: AWS wants to be the platform where enterprise AI moves from pilots to productivity.





