Tech: AI birthdays, ‘code red’ and the Google empire strikes back – The Edge Malaysia

This article first appeared in The Edge Malaysia Weekly on December 8, 2025 – December 14, 2025
ON Nov 30, artificial intelligence (AI) chatbot ChatGPT officially turned three. There were no celebrations or uncorking of champagne bottles at its creator OpenAI's headquarters in Mission Bay, San Francisco, this past week. Indeed, the mood was also sombre 72km away in Santa Clara, home of the world's most valuable firm, chip giant Nvidia, which supplies the accelerator chips that have helped power the ongoing AI boom. Blame it all on party poopers search giant Google and start-up Anthropic, once considered laggards in the AI marathon, which now suddenly find themselves in the driver's seat with momentum on their side.
Instead of ordering cake and Dom Pérignon or announcing bonus share options for its staff, OpenAI CEO Sam Altman in an internal memo declared a "code red", a general state of crisis, to marshal more resources towards improving ChatGPT as competitive pressure from Google and other AI rivals begins to intensify.
Ironically, three years ago, it was Google that was forced to sound the “code red” alert after OpenAI unveiled ChatGPT. CEO Sundar Pichai went on to warn Google employees that the AI chatbot could threaten the future of search, which accounted for 56% of its revenue. Now, Altman is sounding alarm bells over Google’s latest Gemini 3 model and an increasingly fierce AI model race with Google, Elon Musk’s xAI, Meta Platforms and Anthropic. Altman urged OpenAI staffers “to stay focused through short-term competitive pressure … expect the vibes out there to be rough for a bit”, as a result of Google’s latest counterpunch.
Make no mistake, Gemini 3 is the real deal and is hurting everyone from OpenAI to xAI to Meta. Even before it unveiled Gemini 3, Google had gained significant traction following its release of an image-generation tool called Nano Banana in August, with over five billion images created. The Gemini 3 model builds on that as it is significantly more skilled at coding, developing applications and generating images than the other AI models out there. Analysts say Gemini 3’s strong benchmark results on multimodal reasoning, math and code have given it credibility as well as momentum. 
Gemini 3 outperforms GPT-5.1, the model behind the latest version of OpenAI's chatbot, which has real-time access to the web, enhanced reasoning, coding skills and image- and video-generation capabilities. Three months ago, OpenAI released its Sora video-generation app, which racked up a million users in five days. "Gemini 3's biggest advantage is its sheer size and the vast amount of compute that went into creating it," notes veteran tech strategist Ben Thompson, who writes the Stratechery blog. "OpenAI has had difficulty creating the next generation of models beyond the GPT-4 level of size and complexity," he says. "What has carried the company is a genuine breakthrough in reasoning that produces better results."
For now, however, ChatGPT remains ahead, though with the recent launch of Gemini 3, Google is closing the gap. As at end-November, ChatGPT had 800 million monthly active users compared with Gemini's 650 million.
ChatGPT also has the more loyal users, many of whom don't use any other AI chatbot. For its part, Gemini has been leveraging its integration with Google services, including search, Gmail and YouTube, to attract users who value multimodal features and an agentic smart assistant that can perform multi-step tasks across different tools. Google is betting that Gemini's superior technical benchmarks will help it triumph over ChatGPT. In his "code red" memo, Altman mentioned that OpenAI was on track to release a new reasoning model soon that will likely be better than Google's Gemini 3, while conceding that the company needs to make major improvements to the ChatGPT experience.
Before OpenAI unleashed its chatbot three years ago, Google was seen as the natural leader in AI research. Back in 2014, it bought control of the prestigious London-based AI research lab DeepMind, co-founded by Demis Hassabis and Syrian-British researcher Mustafa Suleyman, who went on to start Inflection AI and is now CEO of the Microsoft consumer AI unit. Among other things, DeepMind’s AlphaGo program beat the world champion at the ancient game of Go, taught itself chess and other complex games in hours, and cracked a 50-year-old scientific puzzle about how proteins fold. In 2023, DeepMind merged with Google Brain, the firm’s other AI unit, to form a single team that could take on the OpenAI challenge.
ChatGPT had the distinction of having the fastest mass adoption for a consumer product in history. Within five days of its release, one million users had already tried OpenAI’s chatbot. Within two months, ChatGPT had over 100 million users. It took Apple’s iPhone 2½ months to reach one million users and nearly 3½ years to reach 100 million global users. It took Instagram 2½ months to reach a million users and 2½ years to reach 100 million. It took Netflix over three years to reach a million users and 10 years to reach 100 million. By the way, it took more than nine years for the internet to reach 100 million users. And TV took literally decades to reach 100 million households around the world. 
Three years after the release of ChatGPT and the constant cries of "wolf" over a bubble about to pop, AI is still alive, kicking and growing at warp speed. What has changed, however, is the narrative around AI in recent months. Gone are the days when Nvidia was recognised as far and away the leader of the AI pack, supplying 90% of the chips used by companies like OpenAI, Microsoft, Meta, Amazon.com, Google, Oracle and Anthropic to train their large language models, and increasingly for inferencing and reasoning as well. Now there is a real race between AI chip firms Nvidia and Advanced Micro Devices, as well as between them and their customers Google, Amazon and Meta, which are getting customised AI chips designed by application-specific integrated circuit (ASIC) firms such as Broadcom and Marvell Technology. None of the chip design firms has any factories, or "fabs". These "fabless" firms mostly use Taiwan Semiconductor Manufacturing Co's (TSMC) plants to make AI chips for them, though several are increasingly looking at Samsung Electronics and Intel to do some of the manufacturing.
One way to look at the changing narrative around AI is to compare the valuation of various AI players the day before ChatGPT was released and their current market value. Nvidia had a market value of around US$390 billion in November 2022. That has since ballooned more than 11 times to US$4.4 trillion (RM18.01 trillion) over the past three years. Broadcom's market value at end-November 2022 was US$230 billion. Last week, its market capitalisation touched US$1.91 trillion, or up 8.3 times in three years. Marvell's market cap before the advent of ChatGPT was just US$30 billion; it is now valued at US$86.4 billion. Another huge gainer: Microsoft, which invested US$14 billion in OpenAI after ChatGPT was released. Back then, Microsoft had a market value of US$1.86 trillion. That peaked at US$4.1 trillion in October, though the stock has since fallen nearly 14%, to a market value of US$3.5 trillion. Oh, and by the way, OpenAI, which was then a non-profit, was valued at US$14 billion before it released ChatGPT. Last month, the for-profit, unlisted OpenAI raised funds at a valuation of US$500 billion, making it the most valuable private firm on earth.
Clearly, Wall Street now sees Google and its parent Alphabet as the de facto leader in the AI race along with its chip partner Broadcom. Alphabet shares are up 125% from their April lows while Broadcom has surged 164% in the same period. Nvidia is up 94% from its own April lows, while Microsoft, which owns 24% of OpenAI, is up 35% from its early April lows. 
ChatGPT and the ongoing AI boom arguably saved the US economy from the brink of a recession. In November 2022, the US Federal Reserve and central banks around the world were raising rates in what was the fastest rate-hiking cycle in history, and global markets were in free fall. The arrival of ChatGPT changed the narrative. The US benchmark S&P 500 index is up 70% while the tech-heavy Nasdaq is up 124% over the past three years, before counting reinvested dividends. Despite all the babble about a looming AI bubble, the global AI market, which is expected to top US$400 billion this year, is projected to grow to US$5 trillion by 2033. America's AI boom has unleashed an unprecedented surge in capital investment that is transforming data centres, construction industries, electric utilities and chip manufacturing across the US, as well as AI start-ups. A recent Goldman Sachs research report forecasts that AI will boost global productivity and potentially raise global gross domestic product (GDP) by 7% over the next decade. According to some estimates, AI-related infrastructure spending could account for nearly half of US GDP growth this year. And the wealth effect from soaring AI and other tech stocks is estimated to have boosted US consumer spending by about US$180 billion this past year.
Right now, all eyes are on the chip designers and the race to build an AI chip as good as Nvidia's Blackwell Ultra or its next big chip, Rubin. A single Nvidia GB200 Superchip that includes both GPU (graphics processing unit) and CPU (central processing unit) can cost up to US$70,000. Of course, large customers buy chips as part of a package that includes Nvidia's CUDA (Compute Unified Device Architecture) software as well as an array of services. Nvidia has gross margins of 73% on its Blackwell chips. Amazon founder Jeff Bezos once said, "Your margin is my opportunity". Large customers of Nvidia, like Google, Amazon, Meta, xAI or OpenAI, see a huge opportunity.
Last week, Amazon's cloud unit released its latest in-house AI chip, Trainium3, in an attempt to sell hardware that can take Nvidia and Google-designed chips head-on. Amazon's chip push is an important part of its strategy to regain its leadership role in tech. Despite being the dominant seller of rented computing power and data storage, Amazon Web Services (AWS) has struggled to replicate that lead among top developers of AI tools, as some firms opt instead to rely on Google or Microsoft, which has close ties to OpenAI.
Amazon has said Trainium3 can power the intensive calculations behind AI models more cheaply and efficiently than Nvidia’s GPUs. While it hopes to entice companies with the Trainium3’s performance, the tech still lacks the deep software libraries that help customers get Nvidia’s GPUs up and running fast. On Dec 2, AWS also unveiled a new Amazon AI model that can process text, speech, images and video: Amazon Nova 2 Omni.
No wonder Google is designing its own in-house tensor processing units, or TPUs, with the help of Broadcom. TPUs are closely integrated with Google Cloud, the third-largest cloud provider behind AWS and Microsoft's Azure. They are also cost-effective, thanks to vertical integration and proprietary hardware. Essentially, Google is tailor-making the chips for its own use. So, how do the new TPUs compare with Nvidia's top-of-the-line chips? While the new TPUs will lead Blackwell chips on efficiency, Nvidia's Rubin will catch up next year, notes Pierre Ferragu, tech analyst at New Street Research. Buying an ASIC chip is like getting a shirt made by a tailor: at the department store, you might get a good designer-branded shirt, but only a custom tailor can make one that fits you perfectly.
So where does that leave Nvidia? The chip giant has US$60 billion in net cash, a hoard that is likely to balloon to US$166 billion in the next fiscal year. While ASIC chips will slash its market share from the current 90% to more like 75%, its revenue and profits will continue to grow, since the market for AI chips is still expanding and Nvidia sells software and services with its chips. Cheaper ASIC chips will grow the overall market and add fuel to the AI boom.
 
Assif Shameen is a technology and business writer based in North America 

Copyright © 1999-2025 The Edge Communications Sdn. Bhd. 199301012242 (266980-X). All rights reserved
