December 3, 2025

Amazon to Integrate Nvidia’s NVLink Fusion Into AI Chips

Amazon Web Services (AWS) is making a major move to strengthen its role in the fast-growing world of artificial intelligence. At its annual cloud computing conference in Las Vegas, the company announced two important developments: future AWS chips will use Nvidia’s advanced NVLink Fusion technology, and new AI servers powered by Trainium3 chips are launching immediately. These updates highlight how AWS aims to stay competitive in a market where companies are racing to build bigger, faster, and more energy-efficient AI systems.

Nvidia’s NVLink Fusion Will Power AWS’s Next Trainium4 Chips

AWS shared that it will integrate Nvidia’s NVLink Fusion into its upcoming Trainium4 chip. While the company did not reveal a release date, this marks a major step forward. NVLink Fusion allows extremely fast communication between different chips, helping them work together more efficiently. This is especially important for training large AI models, where thousands of interconnected machines must share data in real time.

By adopting NVLink Fusion, AWS joins other major tech companies, such as Intel and Qualcomm, that have also begun using Nvidia’s interconnect technology. Nvidia, already a leader in AI computing hardware, has been pushing for wider adoption of NVLink Fusion to make AI systems more scalable and more powerful.

For AWS, this partnership will help build larger and more capable AI clusters, enabling customers to train advanced models with greater speed and reliability. The upgrade is expected to attract more enterprise customers who need high-performance AI infrastructure.

AWS Introduces “AI Factories” for Faster, Private AI Workloads

Along with the NVLink Fusion announcement, AWS unveiled a new offering called “AI Factories.” These AI Factories will give companies access to dedicated infrastructure located inside their own data centers. This allows businesses to run their most advanced AI workloads privately, with high speed and low latency.

Nvidia CEO Jensen Huang described the collaboration as a key moment in the global AI revolution. According to him, AWS and Nvidia are working together to build the “compute fabric” that will power advanced AI for companies around the world.

AI Factories are expected to help organizations that need custom setups or want to keep sensitive data on-premises while still benefiting from the computing power provided by AWS.

New Trainium3 Servers Deliver More Power With Less Energy

AWS is also bringing immediate upgrades to its hardware lineup with new servers powered by Trainium3 chips. The servers are available now, and each contains 144 chips. According to AWS, they offer more than four times the performance of the previous generation while using 40% less power.
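Taken together, those two figures imply an even larger efficiency gain. A minimal back-of-the-envelope sketch, assuming "four times the performance" and "40% less power" are both measured against the previous generation on the same workload:

```python
# Rough performance-per-watt math from the stated figures.
# Assumption: both claims are relative to the prior generation
# on an equivalent workload (normalized to 1.0).
prev_perf, prev_power = 1.0, 1.0

new_perf = 4.0 * prev_perf            # "more than four times the performance"
new_power = (1 - 0.40) * prev_power   # "40% less power"

perf_per_watt_gain = (new_perf / new_power) / (prev_perf / prev_power)
print(f"~{perf_per_watt_gain:.2f}x performance per watt")  # ~6.67x
```

Under these assumptions, the generational jump works out to roughly 6.7× more work per watt, which is why the combination matters for large-scale AI clusters where power is often the binding constraint.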

Dave Brown, AWS vice president of compute and machine learning, said the company aims to compete not just on performance but also on price. He explained that customers want the best value for their investment — powerful chips that fit within their budget — and AWS intends to deliver that balance.

The combination of higher performance and improved energy efficiency makes Trainium3 servers especially attractive for large-scale AI tasks, where computing demands continue to rise.

Amazon Updates Its Nova AI Models With New Capabilities

Beyond hardware, Amazon also revealed major upgrades to its Nova family of AI models. The new version, Nova 2, is designed to be faster and more responsive. It also adds multimodal capabilities, allowing it to respond to text, image, video, and even speech prompts.

AWS also introduced a new speech-focused model called Sonic. Sonic can understand voice commands and generate natural, human-like speech responses. During the keynote presentation, AWS CEO Matt Garman highlighted its smooth and realistic audio output.

Although Amazon has struggled to compete with popular models like ChatGPT, Claude, and Gemini, the company has seen growing demand for its cloud and AI services. AWS reported a 20% increase in revenue in the latest quarter, driven largely by organizations building AI solutions on its infrastructure.

Nova Forge: Helping Companies Build Their Own AI Models

Another key announcement was Nova Forge, a new tool that allows businesses to create customized AI models using their own data. Instead of relying solely on general-purpose AI models, companies can use Nova Forge to tailor models to their industries, customers, and internal systems.

According to Matt Garman, Nova Forge helps create models that “deeply understand your information” while still maintaining the broad training they received initially. This means companies can get highly personalized AI behavior without losing the strengths of the base model.

Nova Forge is especially useful for sectors like finance, healthcare, retail, and manufacturing, where businesses often rely on unique datasets and specialized workflows.

AWS Strengthens Its Position in the AI Infrastructure Race

The announcements made at the Las Vegas conference show AWS’s clear strategy: to remain a top provider of AI hardware, software, and cloud services. With new chips, server upgrades, enhanced AI models, and tools for customizing enterprise models, AWS is positioning itself for the next phase of AI growth.

The addition of Nvidia’s NVLink Fusion to future Trainium4 chips sets AWS up to support the next generation of extremely large AI models. Meanwhile, the newly launched Trainium3 servers give customers immediate access to more powerful and efficient computing.

As companies around the world continue adopting AI at a rapid pace, AWS is building a full ecosystem to meet those needs — from chips and servers to AI models and development tools. These moves put Amazon in a strong position to compete with other tech giants and support the expanding global demand for AI infrastructure.