
How OpenAI Could Turn the Tables: 9 Questions to Answer
Google is back on top. Anthropic is charging ahead. And OpenAI is facing the toughest narrative moment it’s had in years.
Some skeptics argue that OpenAI’s moat is disappearing. Models are becoming commoditized. ChatGPT lacks true network effects. Google has the edge in traffic and compute. And in high-value enterprise tasks, Anthropic appears to be pulling ahead.
To be fair, these concerns aren’t unfounded. We’re only one month into 2026, and instead of stabilizing, the model landscape has grown even more competitive. For the first time since launching ChatGPT, OpenAI finds itself playing from behind.
Still, we remain optimistic. 2026 could be a turning point for OpenAI, but nine critical questions will determine how the story unfolds.
OpenAI is feeling the impact of Gemini across three fronts: narrative, model performance, and traffic.
Narrative is where the impact is most visible.
Google’s resurgence has knocked OpenAI off the SOTA pedestal. More importantly, it’s shifted public perception. After 4o, OpenAI didn’t release a model with a dramatic leap in performance. The takeaway for many wasn’t that scaling had hit a wall; it was that OpenAI’s scaling had.
The market reaction was immediate. Since the release of Gemini 3, Google is up 20%, while SoftBank, often seen as a proxy for OpenAI exposure in public markets, is down 17%.
On the model side, OpenAI’s own missteps matter more than Gemini’s gains.
Rather than launching a new pre-training generation after 4o, OpenAI iterated on top of it, leaning heavily on RL and post-training improvements. Gemini 3 appears to have executed pre-training better than OpenAI this cycle, but it hasn’t delivered a true step-change. Meanwhile, OpenAI still leads in post-training and RL, and is actively addressing its pre-training gaps.
The more likely scenario from here: accelerated releases—Gemini 4, GPT-5.3, Opus 5—pushing Q1 into an intense benchmark race, with leadership alternating from model to model.
Until the next paradigm shift arrives, these back-and-forth wins may not mean much strategically. But the pressure is heavier on OpenAI. It lacks Google’s compute and infrastructure advantages, yet it’s simultaneously funding next-paradigm research, building the next generation of models, and serving one billion users.
However, Gemini 3 has had little commercial impact so far: it has barely moved OpenAI’s API revenue or ChatGPT subscription revenue.
On the traffic front, OpenAI has already rebounded from its recent lows.
According to third-party tracking data, ChatGPT’s web traffic in January returned to pre-holiday levels, while mobile traffic surpassed them. On a longer time horizon, the post-holiday acceleration is even clearer.



2026 is shaping up to be a year of intensified competition and an upgraded battlefield. The test is no longer just technological strength; it’s about strategically deciding where to allocate resources.
OpenAI and Google will compete head-to-head in consumer and advertising markets, while Anthropic, leveraging its consistent strategic focus, is gaining a first-mover advantage in high-value tasks like coding, agentic AI, and Excel automation.
That doesn’t mean OpenAI or Google will ignore high-value tasks. For example, in coding, they are certain to strike at some point. It’s just that the window for consumer competition is shorter, while high-value tasks allow for long-term positioning.
Anthropic has already demonstrated the ability to continuously innovate in high-value tasks. Whether this innovation will become a durable moat or merely pave the way for OpenAI and Google will become clearer over the course of this year.
In the short term, Google can leverage a fully free strategy and its super-platforms, like its browser and search, to drive traffic to Gemini, which could slow ChatGPT’s growth. This is a luxury only a giant can afford. After all, Google has nine products with over one billion users each, while the second-place Meta has only three.

In the long term, we should be optimistic about ChatGPT’s growth. Chat and search are inevitably heading toward deep integration, and chat can handle far longer and more complex queries than search. The total volume of chat queries and usage frequency will eventually surpass search engines, meaning the user base could reach at least the same scale as search, around 5 billion monthly active users.
Currently, ChatGPT has roughly 1.2 billion MAU, and Gemini about 400 million, still far from the 5 billion mark. Even if their market share shifts from 4:1 to 1:1, ChatGPT still has room to double.
Assuming the 4:1 ratio holds and ChatGPT reaches 4 billion MAU, the implications are:
Optimistically, visible ARR for ChatGPT could reach $200 billion, with even larger upside potential beyond that. Conservatively, if ChatGPT and Gemini achieve a 1:1 ratio and reach 2.5 billion MAU, applying a 60% factor to the visible revenue estimate still leaves a huge upside potential.
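The arithmetic behind these scenarios can be made explicit. A minimal sketch follows; note that the $50 per-user annual revenue figure is an assumption implied by the optimistic case ($200B visible ARR at 4B MAU), not a number stated above:

```python
# Back-of-envelope check of the ChatGPT ARR scenarios.
# ASSUMPTION: ~$50/user/year of visible revenue, implied by
# $200B visible ARR at 4B MAU in the optimistic case.

def visible_arr(mau_billions, arpu_per_year=50.0, discount=1.0):
    """Visible ARR in billions of dollars."""
    return mau_billions * arpu_per_year * discount

optimistic = visible_arr(4.0)                  # 4:1 share, 4B MAU -> $200B
conservative = visible_arr(2.5, discount=0.6)  # 1:1 share, 60% factor -> $75B

print(f"Optimistic:   ${optimistic:.0f}B visible ARR")
print(f"Conservative: ${conservative:.0f}B visible ARR")
```

The conservative case simply combines the smaller 2.5B MAU base with the 60% haircut on the revenue estimate.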
Every decade brings a wave of fundamental shifts in user behavior, often bigger than technological improvements themselves.
The move from search to chat mirrors the transition from image-and-text browsing to short-form video. The old format doesn’t disappear, but the new one delivers a tenfold stronger experience, reaching users along a completely different dimension.
There are several parallels between AI chat and short video:
Key differences include:
Currently, Google Search handles around 14 billion queries per day, while ChatGPT processes roughly 2.5 billion prompts per day (according to OpenAI data shared with Axios as of July 2025), already about 18% of Google’s volume. In terms of consumer intent, chat shows a clear advantage: many brands were already lining up even before OpenAI officially launched ads.
With shifts in both user behavior and ad models, Google faces a significant threat. The 2C battle between Gemini and ChatGPT in 2026 will be intense.
Although OpenAI emphasizes consumer products in its messaging and Anthropic leans toward Enterprise Business, OpenAI’s Enterprise Business has consistently been underestimated.
In 2025, OpenAI’s ARR was $20 billion (revenue $13 billion), with API revenue accounting for roughly 30%, about $6 billion. In the same year, Anthropic’s ARR was around $9 billion (revenue ~$4.5 billion), with 85% coming from coding and other Enterprise Business offerings; Claude Chat subscriptions made up only 15%.
At first glance, Anthropic’s Enterprise Business revenue seems larger. In reality, OpenAI’s Enterprise Business scale is at least comparable, if not bigger:
Combined, OpenAI’s API and ChatGPT Enterprise revenues account for about 40% of total revenue, roughly $5.2 billion, still larger than Anthropic’s total revenue of $4.5 billion, according to The Information.

Recently, Sam Altman promoted the API on X, noting that it added $1 billion in ARR in the past month. OpenAI is also increasing its focus on the enterprise side. With 2C facing intense competition, emphasizing the enterprise business is a natural strategic choice.

One more point: APIs are closely tied to the cloud and may even be reshaping the cloud landscape. Anthropic was previously the only company offering a SOTA model across Azure, AWS, and GCP, capturing advantages on the enterprise business and developer side. With OpenAI’s new funding round, Amazon is highly likely to participate, which could open up new opportunities for its enterprise business.
The three keywords for ChatGPT in 2026 are likely memory, proactivity, and personalization; they’re both product and research challenges.
With model pre-training and RL already in the industrialized era, Google has advantages in engineering infrastructure and TPU compute. For OpenAI to break through, it must excel in memory and proactive agents.
Memory and proactive features were pioneered by OpenAI, but they’re not yet fully realized:
Personalization is closely tied to memory and continual learning. Language models can’t yet personalize or learn user preferences in real time the way recommendation algorithms do, but they have the potential to go orders of magnitude deeper. Bay Area neolabs such as Thinking Machines Lab, as well as the recently heavily funded startup Humans&, are exploring different technical paths to personalization and continual learning: creating AI that improves through interaction with its users.
This isn’t just OpenAI’s potential game-changer; it’s a high ground that any AI product must capture. Only by realizing memory and personalization can AI achieve a true “data flywheel.”
In the past two paradigm shifts, model scaling and reasoning models, OpenAI led the way. It still has a strong chance of pioneering continual learning, widely recognized by top researchers in China and the U.S. as the next major paradigm.
If OpenAI hadn’t undergone organizational changes, its probability of leading the next paradigm would be the highest.
Historically, OpenAI struck an effective organizational balance, combining top-down coordination with bottom-up innovation. This allowed large-scale deployment of personnel for model training while encouraging grassroots innovation. Breakthroughs like reasoning models and o3-mini emerged from the bottom up, and OpenAI consistently allocated ample compute to frontier research.
By comparison, Anthropic has limited resources and remains highly focused on coding and agentic AI. xAI started later and has been chasing SOTA models, leaving little bandwidth for exploratory research; Meta is similar.
Now, OpenAI has experienced multiple researcher departures, and its focus has been split by commercialization and product demands. As a result, we estimate that OpenAI, Google, and Bay Area neolabs each have roughly a one-third chance of pioneering the next paradigm.
Google’s advantage lies in its dense talent pool and abundant resources; there’s always someone internally experimenting with something different.
Neolabs, including SSI, TML, Isara, Humans&, and Core Automation, have sprung up in the Bay Area, founded specifically to create the next paradigm. They are highly focused and include exceptional talents like Ilya Sutskever.
Even if OpenAI isn’t the first to create the next paradigm, once it emerges, the company has the ability to catch up quickly, and holds the strongest advantage in product integration.
OpenAI’s paid subscription rate is currently around 5%, so advertising remains the most effective monetization method for consumer scenarios. Even Netflix focused heavily on ad monetization last year.
Current ads are priced on a CPM basis at roughly $60 per thousand impressions, comparable to top-tier video ads like those during the NFL. This likely reflects OpenAI’s confidence in ad targeting. Users can also interact directly with brands after seeing an ad, representing a new form of advertising innovation in the AI era.
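To put the $60 CPM in concrete terms, here is a hedged sketch; the impression volume below is purely hypothetical, chosen only to illustrate how CPM pricing translates into revenue:

```python
# CPM arithmetic: $60 per thousand impressions = $0.06 per impression.
CPM = 60.0  # dollars per 1,000 impressions (figure cited above)

def ad_revenue(impressions, cpm=CPM):
    """Revenue in dollars for a given impression count."""
    return impressions / 1000 * cpm

# HYPOTHETICAL volume, for illustration only:
daily = ad_revenue(100_000_000)   # 100M impressions/day -> $6M/day
annual = daily * 365              # -> roughly $2.2B/year
print(f"${daily / 1e6:.1f}M per day, about ${annual / 1e9:.1f}B per year")
```

Even at premium pricing, the revenue scales linearly with impression volume, which is why building the surrounding ad infrastructure matters so much.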

Advertisers’ demand for ChatGPT is bound to be huge. In long conversations, users reveal far more intent, and LLMs excel at recognizing it. Combined, this creates a “gold mine” far richer and easier to tap than anything accumulated before.
The challenge is that mining this gold requires building a full advertising system, infrastructure, and ecosystem: an extremely complex task, and one likely quite different from traditional ad models. Within the next year, ad revenue may not scale significantly; early results will likely come from case studies and marketing.
Beyond advertising, ChatGPT’s bigger potential lies in e-commerce.
TikTok has already proven the power of an “ad platform + e-commerce loop” in China. In 2024, TikTok’s e-commerce GMV exceeded ¥3 trillion, creating a loop that makes its per-user value far higher than a pure ad platform.
By contrast, Google and Meta have both struggled to close the e-commerce loop. ChatGPT is pursuing a different path, and its progress is faster than commonly perceived.
Instant Checkout has already integrated with Shopify, with a 4% take rate, connecting over 1 million Shopify merchants. Etsy is live, and major retailers like Walmart are following. More importantly, OpenAI partnered with Stripe to launch the Agentic Commerce Protocol and chose to open-source it, signaling an attempt to set platform standards.
OpenAI’s goal by the end of 2027 is to generate $11 billion in annual revenue from non-paying users, primarily through ads and e-commerce.
Over a 3–5 year horizon, ChatGPT could become the first non-Amazon player to establish a fully internalized e-commerce ecosystem in the U.S. market. This potential far exceeds ad revenue alone—the ceiling for advertising is Google’s ~$300 billion, whereas global e-commerce GMV exceeds $6 trillion. A 4% take rate means every $100 billion GMV generates $4 billion in revenue.
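The take-rate arithmetic above can be written out directly in a simple sketch:

```python
# A 4% take rate turns every $100B of GMV into $4B of revenue.

def commerce_revenue(gmv_billions, take_rate=0.04):
    """Revenue in billions of dollars from GMV given in billions."""
    return gmv_billions * take_rate

print(commerce_revenue(100))    # $4B per $100B of GMV
print(commerce_revenue(6_000))  # illustrative upper bound if all ~$6T of
                                # global GMV flowed through at a 4% take
```

The second call is only an illustrative ceiling; no one expects all global GMV to flow through a single agentic-commerce channel.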
One concern about OpenAI is that it has pioneered a new entry point for the LLM era, but does that mean it’s the final one? AI is still in its very early stages. If chatbots are disrupted by a completely new interaction mode, or if the next major entry point isn’t chat/information but agents, tasks, or entirely new hardware, could OpenAI fade like Yahoo?
The possibility exists, but it’s very low. Yahoo made two mistakes OpenAI is unlikely to repeat:
Even so, Yahoo remained a top-tier internet company for a decade.
Today, information and talent flow is extremely transparent. No lab would underestimate a key technology or be foolish enough to feed a competitor. Perhaps a single product, like ChatGPT, could fade like Yahoo, but OpenAI itself will not. Nor will Google.
In fact, this may be the first time in Silicon Valley history that a startup challenges a giant and the giant isn’t an elderly, rusty competitor scrambling to catch up; it’s a master swordsman at the peak of his skill, who has spent the last decade forging a legendary blade. OpenAI is engaged in a hard-fought battle worthy of respect.
<<< View All
Back To Top ↑

How OpenAI Could Turn the Tables: 9 Questions to Answer
Back to All
Google is back on top. Anthropic is charging ahead. And OpenAI is facing the toughest narrative moment it’s had in years.
Some skeptics argue that OpenAI’s moat is disappearing. Models are becoming commoditized. ChatGPT lacks true network effects. Google has the edge in traffic and compute. And in high-value enterprise tasks, Anthropic appears to be pulling ahead.
To be fair, these concerns aren’t unfounded. We’re only one month into 2026, and instead of stabilizing, the model landscape has grown even more competitive. For the first time since launching ChatGPT, OpenAI finds itself playing from behind.
Still, we remain optimistic. 2026 could be a turning point for OpenAI, but nine critical questions will determine how the story unfolds.
OpenAI is feeling the impact of Gemini across three fronts: narrative, model performance, and traffic.
Narrative is where the impact is most visible.
Google’s resurgence has knocked OpenAI off the SOTA pedestal. More importantly, it’s shifted public perception. After 4o, OpenAI didn’t release a model with a dramatic leap in performance. The takeaway for many wasn’t that scaling had hit a wall, it was that OpenAI’s scaling had.
The market reaction was immediate. Since the release of Gemini 3, Google is up 20%, while SoftBank, often seen as a proxy for OpenAI exposure in public markets, is down 17%.
On the model side, OpenAI’s own missteps matter more than Gemini’s gains.
Rather than launching a new pre-training generation after 4o, OpenAI iterated on top of it, leaning heavily on RL and post-training improvements. Gemini 3 appears to have executed pre-training better than OpenAI this cycle, but it hasn’t delivered a true step-change. Meanwhile, OpenAI still leads in post-training and RL, and is actively addressing its pre-training gaps.
The more likely scenario from here: accelerated releases—Gemini 4, GPT-5.3, Opus 5—pushing Q1 into an intense benchmark race, with leadership alternating from model to model.
Until the next paradigm shift arrives, these back-and-forth wins may not mean much strategically. But the pressure is heavier on OpenAI. It lacks Google’s compute and infrastructure advantages, yet it’s simultaneously funding next-paradigm research, building the next generation of models, and serving one billion users.
However, Gemini 3 has had little impact so far. It has barely moved OpenAI’s API revenue or ChatGPT subscription revenue.
On the traffic front, OpenAI has already rebounded from its recent lows.
According to third-party tracking data, ChatGPT’s web traffic in January returned to pre-holiday levels, while mobile traffic surpassed them. On a longer time horizon, the post-holiday acceleration is even clearer.



2026 is shaping up to be a year of intensified competition and an upgraded battlefield. The test is no longer just technological strength, it’s about strategically deciding where to allocate resources.
OpenAI and Google will compete head-to-head in consumer and advertising markets, while Anthropic, leveraging its consistent strategic focus, is gaining a first-mover advantage in high-value tasks like coding, agentic AI, and Excel automation.
That doesn’t mean OpenAI or Google will ignore high-value tasks. For example, in coding, they are certain to strike at some point. It’s just that the window for consumer competition is shorter, while high-value tasks allow for long-term positioning.
Anthropic has already demonstrated the ability to continuously innovate in high-value tasks. Whether this innovation will become a durable moat or merely pave the way for OpenAI and Google will become clearer over the course of this year.
In the short term, Google can leverage a fully free strategy and its super-platforms, like its browser and search, to drive traffic to Gemini, which could slow ChatGPT’s growth. This is a luxury only a giant can afford. After all, Google has nine products with over one billion users each, while the second-place Meta has only three.

In the long term, we should be optimistic about ChatGPT’s growth. Chat and search are inevitably heading toward deep integration, and chat can handle far longer and more complex queries than search. The total volume of chat queries and usage frequency will eventually surpass search engines, meaning the user base could reach at least the same scale as search, around 5 billion monthly active users.
Currently, ChatGPT has roughly 1.2 billion MAU, and Gemini about 400 million, still far from the 5 billion mark. Even if their market share shifts from 4:1 to 1:1, ChatGPT still has room to double.
Assuming a 4:1 ratio and ChatGPT reaches 4 billion MAU, the implications are:
Optimistically, visible ARR for ChatGPT could reach $200 billion, with even larger upside potential beyond that. Conservatively, if ChatGPT and Gemini achieve a 1:1 ratio and reach 2.5 billion MAU, applying a 60% factor to the visible revenue estimate still leaves a huge upside potential.
Every decade brings a wave of fundamental shifts in user behavior, often bigger than technological improvements themselves.
The move from search to chat mirrors the transition from image-and-text browsing to short-form video. The old format doesn’t disappear, but the new one delivers a tenfold stronger experience, hitting users in a completely different dimension.
There are several parallels between AI chat and short video:
Key differences include:
Currently, Google Search handles around 14 billion queries per day, while ChatGPT processes roughly 2.5 billion prompts per day (according to OpenAI data shared with Axios as of July 2025), already about 18% of Google’s volume. In terms of consumer intent, chat shows a clear advantage, many brands were already lining up even before OpenAI officially launched ads.
With shifts in both user behavior and ad models, Google faces a significant threat. The 2C battle between Gemini and ChatGPT in 2026 will be intense.
Although OpenAI emphasizes consumer products in its messaging and Anthropic leans toward Enterprise Business, OpenAI’s Enterprise Business has consistently been underestimated.
In 2025, OpenAI’s ARR was $20 billion (revenue $13 billion), with API revenue accounting for roughly 30%, about $6 billion. In the same year, Anthropic’s ARR was around $9 billion (revenue ~$4.5 billion), with 85% coming from coding and other Enterprise Business offerings; Claude Chat subscriptions made up only 15%.
At first glance, Anthropic’s Enterprise Business revenue seems larger. In reality, OpenAI’s Enterprise Business scale is at least comparable, if not bigger:
Combined, OpenAI’s API and ChatGPT Enterprise revenues account for about 40% of total revenue, roughly $5.2 billion, still larger than Anthropic’s total revenue of $4.5 billion, according to Information.

Recently, Sam promoted the API on X, noting that it added $1 billion in ARR in the past month. OpenAI is also increasing its focus on the enterprise side. With 2C facing intense competition, emphasizing the enterprise business is a natural strategic choice.

One more point: APIs are closely tied to the cloud and may even be reshaping the cloud landscape. Anthropic was previously the only company offering a SOTA model across Azure, AWS, and GCP, capturing advantages on the enterprise business and developer side. With OpenAI’s new funding round, Amazon is highly likely to participate, which could open up new opportunities for its enterprise business.
The three keywords for ChatGPT in 2026 are likely memory, proactive, and personalization, they’re both product and research challenges.
With model pre-training and RL already in the industrialized era, Google has advantages in engineering infrastructure and TPU compute. For OpenAI to break through, it must excel in memory and proactive agents.
Memory and proactive features were pioneered by OpenAI, but they’re not yet fully realized:
Personalization is closely tied to memory and continual learning. Language models can’t yet personalize or learn user preferences in real time like recommendation algorithms, but they have the potential to go orders of magnitude deeper. Neolabs and Thinking Machines Lab in the Bay Area, as well as the recently heavily funded startup Humans&, are exploring different technical paths to achieve personalization and continual learning—creating AI that improves through interaction with users.
This isn’t just OpenAI’s potential game-changer, it’s a high ground that any AI product must capture. Only by realizing memory and personalization can AI achieve a true “data flywheel.”
In the past two paradigm shifts, model scaling and reasoning models, OpenAI led the way. It still has a strong chance of pioneering continual learning, widely recognized by top researchers in China and the U.S. as the next major paradigm.
If OpenAI hadn’t undergone organizational changes, its probability of leading the next paradigm would be the highest.
Historically, OpenAI struck an effective organizational balance, combining top-down coordination with bottom-up innovation. This allowed large-scale deployment of personnel for model training while encouraging grassroots innovation. Innovations like reasoning models and mini O3 emerged from the bottom up, and OpenAI consistently allocated ample compute to frontier research.
By comparison, Anthropic has limited resources and remains highly focused on coding and agentic AI. xAI started later and has been chasing SOTA models, leaving little bandwidth for exploratory research; Meta is similar.
Now, OpenAI has experienced multiple researcher departures and its focus has been split by commercialization and product demands. As a result, I estimate that OpenAI, Google, and Bay Area neolabs each have roughly a one-third chance of pioneering the next paradigm.
Google’s advantage lies in its dense talent pool and abundant resources, there’s always someone internally experimenting with something different.
Neolabs, including SSI, TML, Isara, Humans&, and Core Automation, have sprung up in the Bay Area, founded specifically to create the next paradigm. They are highly focused and include exceptional talents like Ilya Sutskever.
Even if OpenAI isn’t the first to create the next paradigm, once it emerges, the company has the ability to catch up quickly, and holds the strongest advantage in product integration.
OpenAI’s paid subscription rate is currently around 5%, so advertising remains the most effective monetization method for consumer scenarios. Even Netflix focused heavily on ad monetization last year.
Current ads are priced on a CPM basis at roughly $60 per thousand impressions, comparable to top-tier video ads like those during the NFL. This likely reflects OpenAI’s confidence in ad targeting. Users can also interact directly with brands after seeing an ad, representing a new form of advertising innovation in the AI era.

Advertisers’ demand for ChatGPT is bound to be huge. In long conversations, users reveal far more intent, and LLMs excel at recognizing it. Combined, this creates a “gold mine” far richer and easier to tap than anything accumulated before.
The challenge is that mining this gold requires building a full advertising system, infrastructure, and ecosystem, an extremely complex task, likely different from traditional ad models. Within the next year, ad revenue may not scale significantly; early results will likely come from case studies and marketing.
Beyond advertising, ChatGPT’s bigger potential lies in e-commerce.
TikTok has already proven the power of an “ad platform + e-commerce loop” in China. In 2024, TikTok’s e-commerce GMV exceeded ¥3 trillion, creating a loop that makes its per-user value far higher than a pure ad platform.
By contrast, Google and Meta have both struggled to close the e-commerce loop. ChatGPT is pursuing a different path, and its progress is faster than commonly perceived.
Instant Checkout has already integrated with Shopify, with a 4% take rate, connecting over 1 million Shopify merchants. Etsy is live, and major retailers like Walmart are following. More importantly, OpenAI partnered with Stripe to launch the Agentic Commerce Protocol and chose to open-source it, signaling an attempt to set platform standards.
OpenAI’s goal by the end of 2027 is to generate $11 billion in annual revenue from non-paying users, primarily through ads and e-commerce.
Over a 3–5 year horizon, ChatGPT could become the first non-Amazon player to establish a fully internalized e-commerce ecosystem in the U.S. market. This potential far exceeds ad revenue alone—the ceiling for advertising is Google’s ~$300 billion, whereas global e-commerce GMV exceeds $6 trillion. A 4% take rate means every $100 billion GMV generates $4 billion in revenue.
One concern about OpenAI is that it has pioneered a new entry point for the LLM era, but does that mean it’s the final one? AI is still in its very early stages. If chatbots are disrupted by a completely new interaction mode, or if the next major entry point isn’t chat/information but agents, tasks, or entirely new hardware, could OpenAI fade like Yahoo?
The possibility exists, but it’s very low. Yahoo made two mistakes OpenAI is unlikely to repeat:
Even so, Yahoo remained a top-tier internet company for a decade.
Today, information and talent flow is extremely transparent. No lab would underestimate a key technology or be foolish enough to feed a competitor. Perhaps a single product, like ChatGPT, could fade like Yahoo, but OpenAI itself will not. Nor will Google.
In fact, this may be the first time in Silicon Valley history that a startup challenges a giant, and the giant isn’t an elderly, rusty competitor scrambling to catch up, it’s a master swordsman, at the peak of his skill, who has spent the last decade forging a legendary blade. OpenAI is engaged in a hard-fought battle worthy of respect.
<<< View All
Back To Top ↑

How OpenAI Could Turn the Tables: 9 Questions to Answer
Beck to All
Google is back on top. Anthropic is charging ahead. And OpenAI is facing the toughest narrative moment it’s had in years.
Some skeptics argue that OpenAI’s moat is disappearing. Models are becoming commoditized. ChatGPT lacks true network effects. Google has the edge in traffic and compute. And in high-value enterprise tasks, Anthropic appears to be pulling ahead.
To be fair, these concerns aren’t unfounded. We’re only one month into 2026, and instead of stabilizing, the model landscape has grown even more competitive. For the first time since launching ChatGPT, OpenAI finds itself playing from behind.
Still, we remain optimistic. 2026 could be a turning point for OpenAI, but nine critical questions will determine how the story unfolds.
OpenAI is feeling the impact of Gemini across three fronts: narrative, model performance, and traffic.
Narrative is where the impact is most visible.
Google’s resurgence has knocked OpenAI off the SOTA pedestal. More importantly, it’s shifted public perception. After 4o, OpenAI didn’t release a model with a dramatic leap in performance. The takeaway for many wasn’t that scaling had hit a wall, it was that OpenAI’s scaling had.
The market reaction was immediate. Since the release of Gemini 3, Google is up 20%, while SoftBank, often seen as a proxy for OpenAI exposure in public markets, is down 17%.
On the model side, OpenAI’s own missteps matter more than Gemini’s gains.
Rather than launching a new pre-training generation after 4o, OpenAI iterated on top of it, leaning heavily on RL and post-training improvements. Gemini 3 appears to have executed pre-training better than OpenAI this cycle, but it hasn’t delivered a true step-change. Meanwhile, OpenAI still leads in post-training and RL, and is actively addressing its pre-training gaps.
The more likely scenario from here: accelerated releases—Gemini 4, GPT-5.3, Opus 5—pushing Q1 into an intense benchmark race, with leadership alternating from model to model.
Until the next paradigm shift arrives, these back-and-forth wins may not mean much strategically. But the pressure is heavier on OpenAI. It lacks Google’s compute and infrastructure advantages, yet it’s simultaneously funding next-paradigm research, building the next generation of models, and serving one billion users.
However, Gemini 3 has had little impact so far. It has barely moved OpenAI’s API revenue or ChatGPT subscription revenue.
On the traffic front, OpenAI has already rebounded from its recent lows.
According to third-party tracking data, ChatGPT’s web traffic in January returned to pre-holiday levels, while mobile traffic surpassed them. On a longer time horizon, the post-holiday acceleration is even clearer.



2026 is shaping up to be a year of intensified competition and an upgraded battlefield. The test is no longer just technological strength, it’s about strategically deciding where to allocate resources.
OpenAI and Google will compete head-to-head in consumer and advertising markets, while Anthropic, leveraging its consistent strategic focus, is gaining a first-mover advantage in high-value tasks like coding, agentic AI, and Excel automation.
That doesn’t mean OpenAI or Google will ignore high-value tasks. For example, in coding, they are certain to strike at some point. It’s just that the window for consumer competition is shorter, while high-value tasks allow for long-term positioning.
Anthropic has already demonstrated the ability to continuously innovate in high-value tasks. Whether this innovation will become a durable moat or merely pave the way for OpenAI and Google will become clearer over the course of this year.
In the short term, Google can leverage a fully free strategy and its super-platforms, like its browser and search, to drive traffic to Gemini, which could slow ChatGPT’s growth. This is a luxury only a giant can afford. After all, Google has nine products with over one billion users each, while the second-place Meta has only three.

In the long term, we should be optimistic about ChatGPT’s growth. Chat and search are inevitably heading toward deep integration, and chat can handle far longer and more complex queries than search. The total volume of chat queries and usage frequency will eventually surpass search engines, meaning the user base could reach at least the same scale as search, around 5 billion monthly active users.
Currently, ChatGPT has roughly 1.2 billion MAU, and Gemini about 400 million, still far from the 5 billion mark. Even if their market share shifts from 4:1 to 1:1, ChatGPT still has room to double.
Assuming a 4:1 ratio and ChatGPT reaches 4 billion MAU, the implications are:
Optimistically, visible ARR for ChatGPT could reach $200 billion, with even larger upside potential beyond that. Conservatively, if ChatGPT and Gemini achieve a 1:1 ratio and reach 2.5 billion MAU, applying a 60% factor to the visible revenue estimate still leaves a huge upside potential.
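The scenario arithmetic above can be sketched as follows. All inputs are figures quoted in this article; the per-user revenue is derived from them for illustration and is not an official OpenAI number:

```python
# Back-of-envelope check of the MAU scenarios in the text.
# Inputs are the article's own figures; "implied_arpu" is derived.

SEARCH_SCALE_MAU = 5.0e9          # assumed ceiling: search-engine scale

optimistic_mau = 4.0e9            # ChatGPT at a 4:1 share of ~5B users
optimistic_arr = 200e9            # the article's "visible ARR" estimate

implied_arpu = optimistic_arr / optimistic_mau   # revenue per MAU per year

conservative_mau = 2.5e9          # 1:1 split with Gemini
conservative_arr = optimistic_arr * 0.60         # the article's 60% factor

print(f"implied ARR per MAU:  ${implied_arpu:.0f}/year")
print(f"conservative scenario: ${conservative_arr / 1e9:.0f}B ARR "
      f"at {conservative_mau / 1e9:.1f}B MAU")
```

Note that the 60% factor roughly matches the MAU ratio between the two scenarios (2.5B / 4B = 0.625), so the conservative case is close to a straight per-user scaling of the optimistic one.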
Every decade brings a wave of fundamental shifts in user behavior, often bigger than technological improvements themselves.
The move from search to chat mirrors the transition from image-and-text browsing to short-form video. The old format doesn’t disappear, but the new one delivers a tenfold stronger experience, hitting users in a completely different dimension.
AI chat and short video share several structural parallels, along with some key differences.
Currently, Google Search handles around 14 billion queries per day, while ChatGPT processes roughly 2.5 billion prompts per day (according to OpenAI data shared with Axios as of July 2025), already about 18% of Google’s volume. In terms of consumer intent, chat shows a clear advantage: many brands were already lining up even before OpenAI officially launched ads.
With shifts in both user behavior and ad models, Google faces a significant threat. The 2C battle between Gemini and ChatGPT in 2026 will be intense.
Although OpenAI emphasizes consumer products in its messaging and Anthropic leans toward enterprise, OpenAI’s enterprise business has consistently been underestimated.
In 2025, OpenAI’s ARR was $20 billion (revenue $13 billion), with API revenue accounting for roughly 30%, about $6 billion. In the same year, Anthropic’s ARR was around $9 billion (revenue ~$4.5 billion), with 85% coming from coding and other enterprise offerings; Claude Chat subscriptions made up only 15%.
At first glance, Anthropic’s enterprise revenue seems larger. In reality, OpenAI’s enterprise scale is at least comparable, if not bigger:
Combined, OpenAI’s API and ChatGPT Enterprise revenues account for about 40% of total revenue, roughly $5.2 billion, still larger than Anthropic’s total revenue of $4.5 billion, according to The Information.
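A rough reconstruction of the comparison, using only the figures quoted above. The article mixes ARR and recognized revenue, so this sketch keeps the two bases explicitly separate:

```python
# Reconstructing the enterprise-revenue comparison from the article's
# figures. ARR and recognized revenue are different bases, so both
# are labeled explicitly.

openai_arr, openai_rev = 20e9, 13e9
anthropic_arr, anthropic_rev = 9e9, 4.5e9

openai_api_arr = 0.30 * openai_arr          # ~$6B API (ARR basis)
openai_enterprise_rev = 0.40 * openai_rev   # API + ChatGPT Enterprise,
                                            # ~$5.2B (revenue basis)
anthropic_enterprise_arr = 0.85 * anthropic_arr  # coding + enterprise

print(f"OpenAI API ARR:            ${openai_api_arr / 1e9:.1f}B")
print(f"OpenAI enterprise revenue: ${openai_enterprise_rev / 1e9:.2f}B")
print(f"Anthropic enterprise ARR:  ${anthropic_enterprise_arr / 1e9:.2f}B")
print("OpenAI enterprise revenue exceeds Anthropic total revenue:",
      openai_enterprise_rev > anthropic_rev)
```

The numbers show why the comparison is base-sensitive: on an ARR basis Anthropic’s enterprise figure (~$7.65B) is larger, while on a recognized-revenue basis OpenAI’s ~$5.2B still exceeds Anthropic’s $4.5B total.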

Recently, Sam Altman promoted the API on X, noting that it added $1 billion in ARR in the past month. OpenAI is also increasing its focus on the enterprise side. With consumer competition intensifying, emphasizing the enterprise business is a natural strategic choice.

One more point: APIs are closely tied to the cloud and may even be reshaping the cloud landscape. Anthropic was previously the only company offering a SOTA model across Azure, AWS, and GCP, an advantage with both enterprises and developers. Amazon is highly likely to participate in OpenAI’s new funding round, which could open up new enterprise opportunities for OpenAI.
The three keywords for ChatGPT in 2026 are likely memory, proactivity, and personalization; they are both product and research challenges.
With model pre-training and RL already in the industrialized era, Google has advantages in engineering infrastructure and TPU compute. For OpenAI to break through, it must excel in memory and proactive agents.
Memory and proactive features were pioneered by OpenAI, but they are not yet fully realized.
Personalization is closely tied to memory and continual learning. Language models can’t yet personalize or learn user preferences in real time the way recommendation algorithms do, but they have the potential to go orders of magnitude deeper. Bay Area neolabs such as Thinking Machines Lab, as well as the recently heavily funded startup Humans&, are exploring different technical paths to personalization and continual learning: creating AI that improves through interaction with users.
This isn’t just OpenAI’s potential game-changer; it’s a high ground that any AI product must capture. Only by realizing memory and personalization can AI achieve a true “data flywheel.”
In the past two paradigm shifts, model scaling and reasoning models, OpenAI led the way. It still has a strong chance of pioneering continual learning, widely recognized by top researchers in China and the U.S. as the next major paradigm.
If OpenAI hadn’t undergone organizational changes, its probability of leading the next paradigm would be the highest.
Historically, OpenAI struck an effective organizational balance, combining top-down coordination with bottom-up innovation. This allowed large-scale deployment of personnel for model training while encouraging grassroots innovation. Innovations like reasoning models and o3-mini emerged from the bottom up, and OpenAI consistently allocated ample compute to frontier research.
By comparison, Anthropic has limited resources and remains highly focused on coding and agentic AI. xAI started later and has been chasing SOTA models, leaving little bandwidth for exploratory research; Meta is similar.
Now, OpenAI has experienced multiple researcher departures and its focus has been split by commercialization and product demands. As a result, I estimate that OpenAI, Google, and Bay Area neolabs each have roughly a one-third chance of pioneering the next paradigm.
Google’s advantage lies in its dense talent pool and abundant resources: there’s always someone internally experimenting with something different.
Neolabs, including SSI, TML, Isara, Humans&, and Core Automation, have sprung up in the Bay Area, founded specifically to create the next paradigm. They are highly focused and include exceptional talents like Ilya Sutskever.
Even if OpenAI isn’t the first to create the next paradigm, once it emerges, the company has the ability to catch up quickly, and holds the strongest advantage in product integration.
OpenAI’s paid subscription rate is currently around 5%, so advertising remains the most effective monetization method for consumer scenarios. Even Netflix focused heavily on ad monetization last year.
Current ads are priced at a roughly $60 CPM (cost per thousand impressions), comparable to top-tier video inventory such as NFL broadcasts. This likely reflects OpenAI’s confidence in ad targeting. Users can also interact directly with brands after seeing an ad, a new form of advertising innovation in the AI era.

Advertisers’ demand for ChatGPT is bound to be huge. In long conversations, users reveal far more intent, and LLMs excel at recognizing it. Combined, this creates a “gold mine” far richer and easier to tap than anything accumulated before.
The challenge is that mining this gold requires building a full advertising system, infrastructure, and ecosystem, an extremely complex task, likely different from traditional ad models. Within the next year, ad revenue may not scale significantly; early results will likely come from case studies and marketing.
Beyond advertising, ChatGPT’s bigger potential lies in e-commerce.
TikTok has already proven the power of an “ad platform + e-commerce loop” in China. In 2024, TikTok’s e-commerce GMV exceeded ¥3 trillion, creating a loop that makes its per-user value far higher than a pure ad platform.
By contrast, Google and Meta have both struggled to close the e-commerce loop. ChatGPT is pursuing a different path, and its progress is faster than commonly perceived.
Instant Checkout has already integrated with Shopify, with a 4% take rate, connecting over 1 million Shopify merchants. Etsy is live, and major retailers like Walmart are following. More importantly, OpenAI partnered with Stripe to launch the Agentic Commerce Protocol and chose to open-source it, signaling an attempt to set platform standards.
OpenAI’s goal by the end of 2027 is to generate $11 billion in annual revenue from non-paying users, primarily through ads and e-commerce.
Over a 3–5 year horizon, ChatGPT could become the first non-Amazon player to establish a fully internalized e-commerce ecosystem in the U.S. market. This potential far exceeds ad revenue alone—the ceiling for advertising is Google’s ~$300 billion, whereas global e-commerce GMV exceeds $6 trillion. A 4% take rate means every $100 billion GMV generates $4 billion in revenue.
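The take-rate arithmetic is straightforward; a minimal sketch using the article’s 4% figure:

```python
# Take-rate arithmetic from the text: platform revenue implied by a
# 4% take rate on e-commerce GMV.

TAKE_RATE = 0.04

def revenue_from_gmv(gmv_usd: float) -> float:
    """Revenue implied by the article's 4% take rate on a given GMV."""
    return gmv_usd * TAKE_RATE

# Every $100B of GMV implies ~$4B of revenue.
print(revenue_from_gmv(100e9))

# Upper bound only: global e-commerce GMV of ~$6T.
print(revenue_from_gmv(6e12))
```

The second figure is a theoretical ceiling, not a forecast; it simply shows why the e-commerce opportunity can exceed the ~$300 billion advertising ceiling the article cites.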
One concern about OpenAI is that it has pioneered a new entry point for the LLM era, but does that mean it’s the final one? AI is still in its very early stages. If chatbots are disrupted by a completely new interaction mode, or if the next major entry point isn’t chat/information but agents, tasks, or entirely new hardware, could OpenAI fade like Yahoo?
The possibility exists, but it’s very low. Yahoo made two mistakes OpenAI is unlikely to repeat: it underestimated search as the era’s key technology, and it handed that technology to a competitor by outsourcing its search results to Google.
Even so, Yahoo remained a top-tier internet company for a decade.
Today, information and talent flow is extremely transparent. No lab would underestimate a key technology or be foolish enough to feed a competitor. Perhaps a single product, like ChatGPT, could fade like Yahoo, but OpenAI itself will not. Nor will Google.
In fact, this may be the first time in Silicon Valley history that a startup challenges a giant and the giant isn’t an elderly, rusty incumbent scrambling to catch up; it’s a master swordsman at the peak of his skill, who has spent the last decade forging a legendary blade. OpenAI is engaged in a hard-fought battle worthy of respect.