Google Unveils 2 New AI Chips to Take on Nvidia

By Ronald Tech

Key Points

  • Google just unveiled its latest homegrown chips, the TPU 8t and TPU 8i.

  • These purpose-built AI processors provide performance and efficiency upgrades for cloud-based AI operations.

  • While the chips will likely cushion Alphabet’s bottom line, they probably won’t dent Nvidia’s data center dominance.

Nvidia (NASDAQ: NVDA) made a name for itself by pioneering the graphics processing units (GPUs) that became the gold standard for rendering images in video games. The company adapted these chips to handle the rigors of artificial intelligence (AI), a field it now dominates. Unfortunately, being the leader means there’s always someone trying to take you down.

Alphabet's (NASDAQ: GOOGL) (NASDAQ: GOOG) Google just unveiled two powerful new AI chips, the latest move in the company's efforts to become a greater force in AI.

A split image with the Alphabet and Nvidia logos superimposed over pictures of their respective headquarters buildings.

Image source: The Motley Fool.

A chip off the old block

In a blog post released on Wednesday, Amin Vahdat, senior VP and chief technologist for AI and infrastructure, unveiled the eighth generation of Google’s tensor processing unit (TPU), the company’s custom-built silicon for AI. In a move that surprised market watchers, he announced two distinct architectures — one for AI training and the other for AI inference.

“With the rise of AI agents, we determined the community would benefit from chips individually specialized to the needs of training and serving,” Vahdat wrote. To that end, Google introduced the TPU 8t and the TPU 8i. Not surprisingly, the TPU 8t was designed for “compute-intensive training workloads.” At the same time, the TPU 8i comes equipped with more memory to reduce the inherent latency (lag) in interactions between AI agents. Perhaps most importantly, Google notes that “specialization unlocks significant efficiencies and gains.”

Specifically, the TPU 8t was built to reduce the time it takes to develop frontier models from months to weeks. It accomplishes this herculean task by delivering 3 times the compute performance of its predecessor, 10 times faster storage access, and double the chip-to-chip data transfer rate.
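To see how those headline multipliers could translate into a months-to-weeks reduction, here is a back-of-the-envelope sketch. The 3x, 10x, and 2x figures come from the article; the 12-week baseline and the workload mix (how much time is compute-bound versus storage-bound versus transfer-bound) are hypothetical assumptions, not Google's methodology.

```python
def training_time_weeks(baseline_weeks, compute_frac, compute_speedup,
                        io_frac, io_speedup, other_frac, other_speedup):
    """Amdahl's-law-style estimate: each phase of the run shrinks by its own speedup."""
    assert abs(compute_frac + io_frac + other_frac - 1.0) < 1e-9
    return baseline_weeks * (compute_frac / compute_speedup
                             + io_frac / io_speedup
                             + other_frac / other_speedup)

# Hypothetical mix: 70% compute-bound, 20% storage-bound, 10% chip-to-chip transfer,
# with the article's 3x compute, 10x storage, and 2x transfer speedups applied.
est = training_time_weeks(12, 0.70, 3.0, 0.20, 10.0, 0.10, 2.0)
print(f"~{est:.1f} weeks")  # ~3.6 weeks, i.e. months-to-weeks territory
```

The exact answer depends entirely on the assumed bottleneck mix; the point is simply that multiplying out the article's figures against a plausible workload lands a three-month run in the low single-digit weeks.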

On the other hand, the TPU 8i was built with inference in mind — the job an AI model does after it’s been trained. The age of AI agents will involve complex multi-step tasks, at times requiring multiple agents to collaborate to complete them more quickly.

The processor combines high-bandwidth memory (HBM) with 3 times the amount of static random-access memory (SRAM), reducing the lag caused by data transfers between chips. The TPU 8i also pairs with Google's custom Arm-based Axion CPU, tailored specifically for inference serving.

Google says these innovations "deliver 80% better performance-per-dollar" compared to the previous generation, giving customers nearly double the compute for the same cost.
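The "performance-per-dollar" framing is easy to unpack with plain arithmetic. The 80% figure is the article's quote; the budget below is an arbitrary number chosen purely for illustration.

```python
# Normalize the previous TPU generation's performance-per-dollar to 1.0.
prev_perf_per_dollar = 1.0
new_perf_per_dollar = prev_perf_per_dollar * 1.80  # "80% better", per the article

budget = 100.0  # arbitrary spend, same for both generations
prev_compute = budget * prev_perf_per_dollar  # compute units bought previously
new_compute = budget * new_perf_per_dollar    # compute units bought now

print(new_compute / prev_compute)  # 1.8 -> close to, but not exactly, double
```

In other words, an 80% improvement means 1.8x the compute for the same dollars, which rounds in marketing copy to "nearly twice the volume."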

Will TPUs “chip” away at Nvidia’s lead?

It's important to note that Alphabet remains heavily reliant on Nvidia's GPUs and is among the chipmaker's largest customers. Google introduced the first version of its TPU in 2016, and while these custom chips have since become a cornerstone of the company's cloud and AI strategy, Nvidia's processors still handle much of the heavy lifting.

These TPUs mark the latest in a long line of processors developed by competitors to decrease their reliance on Nvidia's GPUs. Despite rising competition, Nvidia is estimated to control 92% of the data center GPU market, according to IoT Analytics. While Google's TPUs may offer cost benefits for AI workloads, Nvidia's GPUs still provide the greatest computational horsepower, powering both AI training and inference.

From a practical standpoint, these TPUs give Alphabet flexibility. The company can offer more affordable AI processing options for price-sensitive cloud customers, while also reducing its own energy consumption, thereby lowering costs and boosting profits. Moreover, at 31 times earnings, Alphabet stock is attractively priced compared to a multiple of 41 for Nvidia.

To be clear, I am a firm believer in Nvidia and Alphabet, seeing both as leaders in the AI boom — which is why I own both stocks.

Danny Vena, CPA has positions in Alphabet and Nvidia. The Motley Fool has positions in and recommends Alphabet and Nvidia. The Motley Fool has a disclosure policy.