How OpenAI Could Turn the Tables: 9 Questions to Answer


Google is back on top. Anthropic is charging ahead. And OpenAI is facing the toughest narrative moment it’s had in years.


Some skeptics argue that OpenAI’s moat is disappearing. Models are becoming commoditized. ChatGPT lacks true network effects. Google has the edge in traffic and compute. And in high-value enterprise tasks, Anthropic appears to be pulling ahead.


To be fair, these concerns aren’t unfounded. We’re only one month into 2026, and instead of stabilizing, the model landscape has grown even more competitive. For the first time since launching ChatGPT, OpenAI finds itself playing from behind.


Still, we remain optimistic. 2026 could be a turning point for OpenAI, but nine critical questions will determine how the story unfolds.



Question 1: Is Gemini Moving OpenAI’s Cheese?


OpenAI is feeling the impact of Gemini across three fronts: narrative, model performance, and traffic.


Narrative is where the impact is most visible.


Google’s resurgence has knocked OpenAI off the SOTA pedestal. More importantly, it’s shifted public perception. After 4o, OpenAI didn’t release a model with a dramatic leap in performance. The takeaway for many wasn’t that scaling had hit a wall; it was that OpenAI’s scaling had.


The market reaction was immediate. Since the release of Gemini 3, Google is up 20%, while SoftBank, often seen as a proxy for OpenAI exposure in public markets, is down 17%.


On the model side, OpenAI’s own missteps matter more than Gemini’s gains.


Rather than launching a new pre-training generation after 4o, OpenAI iterated on top of it, leaning heavily on RL and post-training improvements. Gemini 3 appears to have executed pre-training better than OpenAI this cycle, but it hasn’t delivered a true step-change. Meanwhile, OpenAI still leads in post-training and RL, and is actively addressing its pre-training gaps.


The more likely scenario from here: accelerated releases—Gemini 4, GPT-5.3, Opus 5—pushing Q1 into an intense benchmark race, with leadership alternating from model to model.


Until the next paradigm shift arrives, these back-and-forth wins may not mean much strategically. But the pressure is heavier on OpenAI. It lacks Google’s compute and infrastructure advantages, yet it’s simultaneously funding next-paradigm research, building the next generation of models, and serving one billion users.


On revenue, however, Gemini 3 has had little impact so far: it has barely moved OpenAI’s API revenue or ChatGPT subscription revenue.


On the traffic front, OpenAI has already rebounded from its recent lows.


According to third-party tracking data, ChatGPT’s web traffic in January returned to pre-holiday levels, while mobile traffic surpassed them. On a longer time horizon, the post-holiday acceleration is even clearer.



  • ChatGPT maintains a stronger position on mobile. Gemini’s web traffic is about 27% of ChatGPT’s; on mobile, it’s roughly 16%.


  • Retention remains a key advantage. A meaningful share of Gemini users appear to be experimenting rather than sticking around. On mobile, ChatGPT’s DAU/MAU ratio is close to 45%, while Gemini’s is under 20%. ChatGPT’s memory feature has been critical to retention—though Gemini will almost certainly close that gap by rolling out its own memory capabilities.

  • Geographically, ChatGPT performs better in developed markets. Gemini’s share is rising faster in developing countries, suggesting it’s gaining traction through free tiers and Android distribution, capturing more price-sensitive users. Notably, ChatGPT trailed Gemini in India at one point, but overtook it in the second half of last year.



Question 2: Who Will Own Consumer and High-Value Tasks?


2026 is shaping up to be a year of intensified competition on an upgraded battlefield. The test is no longer just technological strength; it’s also about deciding strategically where to allocate resources.


OpenAI and Google will compete head-to-head in consumer and advertising markets, while Anthropic, leveraging its consistent strategic focus, is gaining a first-mover advantage in high-value tasks like coding, agentic AI, and Excel automation.


That doesn’t mean OpenAI or Google will ignore high-value tasks. In coding, for example, both are certain to make a serious push at some point. It’s just that the window for consumer competition is shorter, while high-value tasks allow for long-term positioning.


Anthropic has already demonstrated the ability to continuously innovate in high-value tasks. Whether this innovation will become a durable moat or merely pave the way for OpenAI and Google will become clearer over the course of this year.


Question 3: Is ChatGPT’s Growth Story Still Bright?


In the short term, Google can leverage a fully free strategy and its super-platforms, like its browser and search, to drive traffic to Gemini, which could slow ChatGPT’s growth. This is a luxury only a giant can afford. After all, Google has nine products with over one billion users each, while the second-place Meta has only three.


In the long term, we should be optimistic about ChatGPT’s growth. Chat and search are inevitably heading toward deep integration, and chat can handle far longer and more complex queries than search. The total volume of chat queries and usage frequency will eventually surpass search engines, meaning the user base could reach at least the same scale as search, around 5 billion monthly active users.


Currently, ChatGPT has roughly 1.2 billion MAU, and Gemini about 400 million, still far from the 5 billion mark. Even if their market share shifts from 4:1 to 1:1, ChatGPT still has room to double.


Assuming the 4:1 ratio holds and ChatGPT reaches 4 billion MAU, the implications are:


  • 10% high-value paying users – comparable in scale to Microsoft Office’s 400 million paying users. In high-value scenarios like coding, finance, and data analysis, if each user spends $200 per year, that’s $80 billion in ARR.


  • 90% free or low-cost users – monetized mainly through ads, e-commerce, etc. Assuming an ARPU of $25/year (roughly half of Meta’s global ARPU), this would generate $90 billion in revenue.


  • Healthcare opportunities – OpenAI is already focusing on this, unlocking new revenue streams. Unlike well-established office and advertising scenarios, healthcare is a supply-constrained, high-demand market. About 230 million users ask health-related questions on ChatGPT weekly. The U.S. healthcare market is roughly $6.5 trillion; capturing just 1% would yield about $65 billion in revenue.


  • Other high-value scenarios – could bring even higher ARPU, potentially exceeding $200 per user per year. Areas like coding, drug discovery, AI for science, education, insurance, real estate transactions—any single scenario could generate massive value capture.


Optimistically, visible ARR for ChatGPT could reach $200 billion, with even larger upside beyond that. Conservatively, if ChatGPT and Gemini settle at a 1:1 ratio and ChatGPT reaches 2.5 billion MAU, applying a 60% factor to the visible revenue estimate still leaves substantial upside.
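The scenario arithmetic above can be reproduced in a few lines as a sanity check. Every input is one of the article's stated assumptions, not a forecast:

```python
# Back-of-the-envelope check of the Question 3 scenario.
# All inputs are the article's assumptions; integer math to stay exact.

mau = 4_000_000_000                    # assumed ChatGPT MAU at maturity (4:1 share)

paying = mau // 10                     # 10% high-value paying users
paying_arr = paying * 200              # at $200 per user per year
assert paying_arr == 80_000_000_000    # $80B ARR

free = mau - paying                    # 90% free or low-cost users
ad_revenue = free * 25                 # at $25/year ARPU (~half Meta's global ARPU)
assert ad_revenue == 90_000_000_000    # $90B

healthcare = 6_500_000_000_000 // 100  # 1% of the ~$6.5T U.S. healthcare market
assert healthcare == 65_000_000_000    # $65B

total = paying_arr + ad_revenue + healthcare
print(f"Visible revenue: ${total / 1e9:.0f}B")  # prints "Visible revenue: $235B"
```

The three streams together land above the $200 billion headline figure, so the optimistic case does not depend on any single bullet hitting its number.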


Question 4: Chat Is Overtaking Search, Like Shorts Overtook Images


Every decade brings a wave of fundamental shifts in user behavior, often bigger than technological improvements themselves.

The move from search to chat mirrors the transition from image-and-text browsing to short-form video. The old format doesn’t disappear, but the new one delivers a tenfold stronger experience, hitting users in a completely different dimension.


There are several parallels between AI chat and short video:


  • Short videos significantly increased time spent. TikTok users average over 90 minutes per day. AI chat similarly boosts query volumes, with token usage growing exponentially. Both improve the product’s understanding of user interest, intent, and context.


  • Instagram launched Reels three years after TikTok, and Google introduced AI mode in search three years after ChatGPT. The giants’ defensive responses are roughly aligned.


  • Both Meta and Google have strong ad bases and first-mover advantages. TikTok’s North American ad rates are still only half of Instagram’s. OpenAI has just announced ads, so there will be a ramp-up period.


Key differences include:


  • Search is a much larger market. Google Search earned roughly $300 billion in ad revenue last year, 1.5× Meta’s total ad revenue. Search ads are intent-driven and more efficient than Meta’s attention-based ads, making them both more lucrative and more threatened by AI.


  • Google faces an innovator’s dilemma. Adding short video to Instagram doesn’t threaten Meta’s core business. Adding AI mode to Google Search, however, reduces clicks on traditional search ads—where the top-ranked page usually gets a 40% CTR, while AI mode drops below 5%.


Currently, Google Search handles around 14 billion queries per day, while ChatGPT processes roughly 2.5 billion prompts per day (according to OpenAI data shared with Axios as of July 2025), already about 18% of Google’s volume. In terms of consumer intent, chat shows a clear advantage; many brands were already lining up even before OpenAI officially launched ads.


With shifts in both user behavior and ad models, Google faces a significant threat. The 2C battle between Gemini and ChatGPT in 2026 will be intense.


Question 5: OpenAI Has an Anthropic-Sized Enterprise Business


Although OpenAI emphasizes consumer products in its messaging and Anthropic leans toward the enterprise, OpenAI’s enterprise business has consistently been underestimated.


In 2025, OpenAI’s ARR was $20 billion (revenue $13 billion), with API revenue accounting for roughly 30%, about $6 billion. In the same year, Anthropic’s ARR was around $9 billion (revenue ~$4.5 billion), with 85% coming from coding and other enterprise offerings; Claude chat subscriptions made up only 15%.


At first glance, Anthropic’s enterprise revenue seems larger. In reality, OpenAI’s enterprise business is at least comparable in scale, if not bigger:


  • API: The two companies calculate API revenue differently. For APIs sold through third-party channels like Azure, OpenAI only counts 20% of the revenue, meaning the total sales value should be multiplied by five. Anthropic counts third-party sales directly as revenue, then deducts partner shares and inference costs. Assuming one-third of OpenAI’s API sales come through Azure, the total API sales would be roughly $8 billion.


  • ChatGPT Enterprise: Enterprise adoption is growing quickly. According to the Ramp Index, among the 50,000 companies they track, 36.8% pay for ChatGPT versus 16.7% for Anthropic, giving OpenAI a clear edge. At the large-enterprise level, clients like Accenture and ServiceNow work with both OpenAI and Anthropic, demonstrating OpenAI’s strong presence.


Combined, OpenAI’s API and ChatGPT Enterprise revenues account for about 40% of total revenue, roughly $5.2 billion, still larger than Anthropic’s total revenue of $4.5 billion, according to The Information.
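The Azure gross-up in the API bullet can be made explicit. Using the article's own assumptions (roughly $6B in counted API revenue, one-third of gross sales flowing through Azure, and only 20% of Azure sales counted as revenue), the implied gross sales fall out directly:

```python
# Solve for OpenAI's gross API sales from its counted (net) API revenue.
# Assumptions (all from the article): $6B counted revenue; one-third of
# gross sales via Azure, counted at 20%; direct sales counted in full.

counted = 6.0e9  # API revenue as OpenAI counts it

# If T is total gross sales: direct = (2/3)T counted fully,
# Azure = (1/3)T counted at 20%, so counted = (2/3)T + 0.2*(1/3)T = (2.2/3)T
gross = counted * 3 / 2.2

print(f"Implied gross API sales: ${gross / 1e9:.1f}B")
# prints "Implied gross API sales: $8.2B"
```

This recovers the article's "roughly $8 billion" figure, and shows how sensitive it is to the assumed one-third Azure mix.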



Sam Altman recently promoted the API on X, noting that it added $1 billion in ARR in the past month. OpenAI is also increasing its focus on the enterprise side. With intense competition on the consumer front, emphasizing the enterprise business is a natural strategic choice.



One more point: APIs are closely tied to the cloud and may even be reshaping the cloud landscape. Anthropic was previously the only company offering a SOTA model across Azure, AWS, and GCP, giving it an edge with enterprises and developers. With Amazon highly likely to participate in OpenAI’s new funding round, new opportunities could open up for OpenAI’s enterprise business.


Question 6: OpenAI’s Next Edge: Memory and Proactive Agents


The three keywords for ChatGPT in 2026 are likely memory, proactivity, and personalization; all three are both product and research challenges.

With model pre-training and RL already in the industrialized era, Google has advantages in engineering infrastructure and TPU compute. For OpenAI to break through, it must excel in memory and proactive agents.


Memory and proactive features were pioneered by OpenAI, but they’re not yet fully realized:


  • Memory: Current implementations are more engineering-driven. Key information is extracted from a user’s chat history and stored in a database for retrieval when needed. But this memory is mechanical: the model doesn’t deepen its understanding of the user over time, nor can it judge what’s important or worth remembering. OpenAI is both updating memory (recently improving recall) and pursuing research breakthroughs to make memory smarter.


  • Proactive Agents: Still in an early stage, with only Pulse. The 2026 product roadmap Fidji Simo presented clarifies that Pulse won’t just push information; it will act on behalf of the user.


  • Next-gen vision: In a December interview, Mark Chen said that the next ChatGPT shouldn’t rely on one-off Q&A interactions where each answer doesn’t improve. Instead, it should continue thinking in the background after a user asks a question, deepen its understanding, get smarter over time, and proactively provide better responses.


Personalization is closely tied to memory and continual learning. Language models can’t yet personalize or learn user preferences in real time the way recommendation algorithms do, but they have the potential to go orders of magnitude deeper. Bay Area neolabs such as Thinking Machines Lab, along with the recently and heavily funded startup Humans&, are exploring different technical paths to personalization and continual learning—creating AI that improves through interaction with users.


This isn’t just OpenAI’s potential game-changer; it’s a high ground that any AI product must capture. Only by realizing memory and personalization can AI achieve a true “data flywheel.”


Question 7: Will OpenAI Lead the Next AI Paradigm?


In the past two paradigm shifts, model scaling and reasoning models, OpenAI led the way. It still has a strong chance of pioneering continual learning, which top researchers in both China and the U.S. widely regard as the next major paradigm.


If OpenAI hadn’t undergone organizational changes, its probability of leading the next paradigm would be the highest.


Historically, OpenAI struck an effective organizational balance, combining top-down coordination with bottom-up innovation. This allowed large-scale deployment of personnel for model training while encouraging grassroots innovation. Breakthroughs like reasoning models and o3-mini emerged from the bottom up, and OpenAI consistently allocated ample compute to frontier research.

By comparison, Anthropic has limited resources and remains highly focused on coding and agentic AI. xAI started later and has been chasing SOTA models, leaving little bandwidth for exploratory research; Meta is similar.

Now, OpenAI has experienced multiple researcher departures, and its focus has been split by commercialization and product demands. As a result, we estimate that OpenAI, Google, and Bay Area neolabs each have roughly a one-third chance of pioneering the next paradigm.

Google’s advantage lies in its dense talent pool and abundant resources; there’s always someone internally experimenting with something different.


Neolabs, including SSI, TML, Isara, Humans&, and Core Automation, have sprung up in the Bay Area, founded specifically to create the next paradigm. They are highly focused and include exceptional talents like Ilya Sutskever.

Even if OpenAI isn’t the first to create the next paradigm, once it emerges, the company has the ability to catch up quickly, and holds the strongest advantage in product integration.


Question 8: Can OpenAI’s Ads Drive Its Next Growth Spurt?


OpenAI’s paid subscription rate is currently around 5%, so advertising remains the most effective monetization method for consumer scenarios. Even Netflix focused heavily on ad monetization last year.


Current ads are priced at a CPM of roughly $60, comparable to top-tier video ads such as NFL broadcasts. This likely reflects OpenAI’s confidence in ad targeting. Users can also interact directly with brands after seeing an ad, a new form of advertising innovation in the AI era.


Advertisers’ demand for ChatGPT is bound to be huge. In long conversations, users reveal far more intent, and LLMs excel at recognizing it. Combined, this creates a “gold mine” far richer and easier to tap than anything accumulated before.

The challenge is that mining this gold requires building a full advertising system, infrastructure, and ecosystem: an extremely complex task that will likely differ from traditional ad models. Within the next year, ad revenue may not scale significantly; early results will likely come from case studies and marketing.


Beyond advertising, ChatGPT’s bigger potential lies in e-commerce.


TikTok has already proven the power of an “ad platform + e-commerce loop” in China. In 2024, TikTok’s e-commerce GMV exceeded ¥3 trillion, creating a loop that makes its per-user value far higher than a pure ad platform.

By contrast, Google and Meta have both struggled to close the e-commerce loop. ChatGPT is pursuing a different path, and its progress is faster than commonly perceived.


Instant Checkout has already integrated with Shopify, with a 4% take rate, connecting over 1 million Shopify merchants. Etsy is live, and major retailers like Walmart are following. More importantly, OpenAI partnered with Stripe to launch the Agentic Commerce Protocol and chose to open-source it, signaling an attempt to set platform standards.


OpenAI’s goal by the end of 2027 is to generate $11 billion in annual revenue from non-paying users, primarily through ads and e-commerce.

Over a 3–5 year horizon, ChatGPT could become the first non-Amazon player to establish a fully internalized e-commerce ecosystem in the U.S. market. This potential far exceeds ad revenue alone—the ceiling for advertising is Google’s ~$300 billion, whereas global e-commerce GMV exceeds $6 trillion. A 4% take rate means every $100 billion GMV generates $4 billion in revenue.
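The take-rate arithmetic above scales linearly, so a short sketch makes the sensitivity obvious. The 4% rate is the article's Shopify figure; the GMV tiers are illustrative, not projections:

```python
# E-commerce revenue at the article's 4% take rate, across
# illustrative GMV tiers (in billions of dollars).

take_rate = 0.04
for gmv_billions in (100, 500, 1_000):
    revenue = gmv_billions * take_rate
    print(f"${gmv_billions}B GMV -> ${revenue:.0f}B revenue")
```

Even a small slice of the $6 trillion global GMV pool at this take rate rivals a mature advertising line.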


Question 9: The Bear Case for OpenAI: The Next Yahoo?


One concern about OpenAI is that it has pioneered a new entry point for the LLM era, but does that mean it’s the final one? AI is still in its very early stages. If chatbots are disrupted by a completely new interaction mode, or if the next major entry point isn’t chat/information but agents, tasks, or entirely new hardware, could OpenAI fade like Yahoo?


The possibility exists, but it’s very low. Yahoo made two mistakes OpenAI is unlikely to repeat:


  • Underestimating search – Yahoo drove traffic to Google, essentially raising its future competitor.


  • Lacking in-house technical capability – By the time Yahoo realized search was critical, it was too late; acquiring a team couldn’t catch up.


Even so, Yahoo remained a top-tier internet company for a decade.


Today, flows of information and talent are extremely transparent. No lab would underestimate a key technology or be foolish enough to feed a competitor. Perhaps a single product, like ChatGPT, could fade like Yahoo, but OpenAI itself will not. Nor will Google.


In fact, this may be the first time in Silicon Valley history that a startup has challenged a giant where the giant isn’t an elderly, rusting competitor scrambling to catch up; it’s a master swordsman at the peak of his skill, who has spent the last decade forging a legendary blade. OpenAI is engaged in a hard-fought battle worthy of respect.


Appendix: Key Personnel Who Have Left OpenAI



Etna Capital Management

© 2025 All Rights Reserved

How OpenAI Could Turn the Tables: 9 Questions to Answer

Back to All

Google is back on top. Anthropic is charging ahead. And OpenAI is facing the toughest narrative moment it’s had in years.


Some skeptics argue that OpenAI’s moat is disappearing. Models are becoming commoditized. ChatGPT lacks true network effects. Google has the edge in traffic and compute. And in high-value enterprise tasks, Anthropic appears to be pulling ahead.


To be fair, these concerns aren’t unfounded. We’re only one month into 2026, and instead of stabilizing, the model landscape has grown even more competitive. For the first time since launching ChatGPT, OpenAI finds itself playing from behind.


Still, we remain optimistic. 2026 could be a turning point for OpenAI, but nine critical questions will determine how the story unfolds.



Question 1: Is Gemini Moving OpenAI’s Cheese?


OpenAI is feeling the impact of Gemini across three fronts: narrative, model performance, and traffic.


Narrative is where the impact is most visible.


Google’s resurgence has knocked OpenAI off the SOTA pedestal. More importantly, it’s shifted public perception. After 4o, OpenAI didn’t release a model with a dramatic leap in performance. The takeaway for many wasn’t that scaling had hit a wall, it was that OpenAI’s scaling had.


The market reaction was immediate. Since the release of Gemini 3, Google is up 20%, while SoftBank, often seen as a proxy for OpenAI exposure in public markets, is down 17%.


On the model side, OpenAI’s own missteps matter more than Gemini’s gains.


Rather than launching a new pre-training generation after 4o, OpenAI iterated on top of it, leaning heavily on RL and post-training improvements. Gemini 3 appears to have executed pre-training better than OpenAI this cycle, but it hasn’t delivered a true step-change. Meanwhile, OpenAI still leads in post-training and RL, and is actively addressing its pre-training gaps.


The more likely scenario from here: accelerated releases—Gemini 4, GPT-5.3, Opus 5—pushing Q1 into an intense benchmark race, with leadership alternating from model to model.


Until the next paradigm shift arrives, these back-and-forth wins may not mean much strategically. But the pressure is heavier on OpenAI. It lacks Google’s compute and infrastructure advantages, yet it’s simultaneously funding next-paradigm research, building the next generation of models, and serving one billion users.


However, Gemini 3 has had little impact so far. It has barely moved OpenAI’s API revenue or ChatGPT subscription revenue.


On the traffic front, OpenAI has already rebounded from its recent lows.


According to third-party tracking data, ChatGPT’s web traffic in January returned to pre-holiday levels, while mobile traffic surpassed them. On a longer time horizon, the post-holiday acceleration is even clearer.



  • ChatGPT maintains a stronger position on mobile. Gemini’s web traffic is about 27% of ChatGPT’s; on mobile, it’s roughly 16%.


  • Retention remains a key advantage. A meaningful share of Gemini users appear to be experimenting rather than sticking around. On mobile, ChatGPT’s DAU/MAU ratio is close to 45%, while Gemini’s is under 20%. ChatGPT’s memory feature has been critical to retention—though Gemini will almost certainly close that gap by rolling out its own memory capabilities.

  • Geographically, ChatGPT performs better in developed markets. Gemini’s share is rising faster in developing countries, suggesting it’s gaining traction through free tiers and Android distribution, capturing more price-sensitive users. Notably, ChatGPT trailed Gemini in India at one point, but overtook it in the second half of last year.



Question 2: Who Will Own Consumer and High-Value Tasks?


2026 is shaping up to be a year of intensified competition and an upgraded battlefield. The test is no longer just technological strength, it’s about strategically deciding where to allocate resources.


OpenAI and Google will compete head-to-head in consumer and advertising markets, while Anthropic, leveraging its consistent strategic focus, is gaining a first-mover advantage in high-value tasks like coding, agentic AI, and Excel automation.


That doesn’t mean OpenAI or Google will ignore high-value tasks. For example, in coding, they are certain to strike at some point. It’s just that the window for consumer competition is shorter, while high-value tasks allow for long-term positioning.


Anthropic has already demonstrated the ability to continuously innovate in high-value tasks. Whether this innovation will become a durable moat or merely pave the way for OpenAI and Google will become clearer over the course of this year.


Question 3: Is ChatGPT’s Growth Story Still Bright?


In the short term, Google can leverage a fully free strategy and its super-platforms, like its browser and search, to drive traffic to Gemini, which could slow ChatGPT’s growth. This is a luxury only a giant can afford. After all, Google has nine products with over one billion users each, while the second-place Meta has only three.


In the long term, we should be optimistic about ChatGPT’s growth. Chat and search are inevitably heading toward deep integration, and chat can handle far longer and more complex queries than search. The total volume of chat queries and usage frequency will eventually surpass search engines, meaning the user base could reach at least the same scale as search, around 5 billion monthly active users.


Currently, ChatGPT has roughly 1.2 billion MAU, and Gemini about 400 million, still far from the 5 billion mark. Even if their market share shifts from 4:1 to 1:1, ChatGPT still has room to double.


Assuming a 4:1 ratio and ChatGPT reaches 4 billion MAU, the implications are:


  • 10% high-value paying users – comparable in scale to Microsoft Office’s 400 million paying users. In high-value scenarios like coding, finance, and data analysis, if each user spends $200 per year, that’s $80 billion in ARR.


  • 90% free or low-cost users – monetized mainly through ads, e-commerce, etc. Assuming an ARPU of $25/year (roughly half of Meta’s global ARPU), this would generate $90 billion in revenue.


  • Healthcare opportunities – OpenAI is already focusing on this, unlocking new revenue streams. Unlike well-established office and advertising scenarios, healthcare is a supply-constrained, high-demand market. About 230 million users ask health-related questions on ChatGPT weekly. The U.S. healthcare market is roughly $6.5 trillion; capturing just 1% would yield $60 billion in revenue.


  • Other high-value scenarios – could bring even higher ARPU, potentially exceeding $200 per user per year. Areas like coding, drug discovery, AI for science, education, insurance, real estate transactions—any single scenario could generate massive value capture.


Optimistically, visible ARR for ChatGPT could reach $200 billion, with even larger upside potential beyond that. Conservatively, if ChatGPT and Gemini achieve a 1:1 ratio and reach 2.5 billion MAU, applying a 60% factor to the visible revenue estimate still leaves a huge upside potential.


Question 4: Chat Is Overtaking Search, Like Shorts Overtook Images


Every decade brings a wave of fundamental shifts in user behavior, often bigger than technological improvements themselves.

The move from search to chat mirrors the transition from image-and-text browsing to short-form video. The old format doesn’t disappear, but the new one delivers a tenfold stronger experience, hitting users in a completely different dimension.


There are several parallels between AI chat and short video:


  • Short videos significantly increased the time spent. TikTok users average over 90 minutes per day. AI chat similarly boosts query volumes, with token usage growing exponentially. Both improve the product’s understanding of user interest, intent, and context.


  • Instagram launched Reels three years after TikTok, and Google introduced AI mode in search three years after ChatGPT. The giants’ defensive responses are roughly aligned.


  • Both Meta and Google have strong ad bases and first-mover advantages. TikTok’s North American ad rates are still only half of Instagram’s. OpenAI has just announced ads, so there will be a ramp-up period.


Key differences include:


  • Search is a much larger market. Google Search earned roughly $300 billion in ad revenue last year, 1.5× Meta’s total ad revenue. Search ads are intent-driven and more efficient than Meta’s attention-based ads, making them both more lucrative and more threatened by AI.


  • Google faces an innovator’s dilemma. Adding short video to Instagram doesn’t threaten Meta’s core business. Adding AI mode to Google Search, however, reduces clicks on traditional search ads—where the top-ranked page usually gets a 40% CTR, while AI mode drops below 5%.


Currently, Google Search handles around 14 billion queries per day, while ChatGPT processes roughly 2.5 billion prompts per day (according to OpenAI data shared with Axios as of July 2025), already about 18% of Google’s volume. In terms of consumer intent, chat shows a clear advantage, many brands were already lining up even before OpenAI officially launched ads.


With shifts in both user behavior and ad models, Google faces a significant threat. The 2C battle between Gemini and ChatGPT in 2026 will be intense.


Question 5: OpenAI Has an Anthropic-Sized Enterprise Business


Although OpenAI emphasizes consumer products in its messaging and Anthropic leans toward Enterprise Business, OpenAI’s Enterprise Business has consistently been underestimated.


In 2025, OpenAI’s ARR was $20 billion (revenue $13 billion), with API revenue accounting for roughly 30%, about $6 billion. In the same year, Anthropic’s ARR was around $9 billion (revenue ~$4.5 billion), with 85% coming from coding and other Enterprise Business offerings; Claude Chat subscriptions made up only 15%.


At first glance, Anthropic’s Enterprise Business revenue seems larger. In reality, OpenAI’s Enterprise Business scale is at least comparable, if not bigger:


  • API: The two companies calculate API revenue differently. For APIs sold through third-party channels like Azure, OpenAI only counts 20% of the revenue, meaning the total sales value should be multiplied by five. Anthropic counts third-party sales directly as revenue, then deducts partner shares and inference costs. Assuming one-third of OpenAI’s API sales come through Azure, the total API sales would be roughly $8 billion.


  • ChatGPT Enterprise: Enterprise adoption is growing quickly. According to the Ramp Index, among the 50,000 companies they track, 36.8% pay for ChatGPT versus 16.7% for Anthropic, giving OpenAI a clear edge. At the large-enterprise level, clients like Accenture and ServiceNow work with both OpenAI and Anthropic, demonstrating OpenAI’s strong presence.


Combined, OpenAI’s API and ChatGPT Enterprise revenues account for about 40% of total revenue, roughly $5.2 billion, still larger than Anthropic’s total revenue of $4.5 billion, according to Information.



Recently, Sam promoted the API on X, noting that it added $1 billion in ARR in the past month. OpenAI is also increasing its focus on the enterprise side. With 2C facing intense competition, emphasizing the enterprise business is a natural strategic choice.



One more point: APIs are closely tied to the cloud and may even be reshaping the cloud landscape. Anthropic was previously the only company offering a SOTA model across Azure, AWS, and GCP, capturing advantages on the enterprise business and developer side. With OpenAI’s new funding round, Amazon is highly likely to participate, which could open up new opportunities for its enterprise business.


Question 6: OpenAI’s Next Edge: Memory and Proactive Agents


The three keywords for ChatGPT in 2026 are likely memory, proactive, and personalization, they’re both product and research challenges.

With model pre-training and RL already in the industrialized era, Google has advantages in engineering infrastructure and TPU compute. For OpenAI to break through, it must excel in memory and proactive agents.


OpenAI pioneered both memory and proactive features, but neither is fully realized:


  • Memory: Current implementations are mostly engineering-driven: key information is extracted from a user’s chat history and stored in a database for retrieval when needed. But this memory is mechanical; the model neither deepens its understanding of the user over time nor judges what is important or worth remembering. OpenAI is both iterating on memory as a product (recently improving recall) and pursuing research breakthroughs to make memory smarter.


  • Proactive Agents: Still at an early stage, with Pulse as the only offering. The 2026 product roadmap Fidji Simo shared clarifies that Pulse won’t just push information; it will act on behalf of the user.


  • Next-gen vision: In a December interview, Mark Chen said the next ChatGPT shouldn’t rely on one-off Q&A interactions in which answers never improve. Instead, it should keep thinking in the background after a user asks a question, deepen its understanding, get smarter over time, and proactively deliver better responses.
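The "mechanical" memory pattern in the first bullet, extract facts from chats, store them, retrieve by matching, can be sketched in a few lines. This is illustrative logic only, not OpenAI's implementation; all names here are invented:

```python
# Naive extraction-and-retrieval memory: no deepening understanding,
# no judgment of importance, exactly the limitation described above.

class NaiveMemory:
    def __init__(self):
        self.facts: list[str] = []

    def extract(self, chat_turn: str) -> None:
        # Stand-in for an LLM extraction step: keep only self-disclosures.
        if chat_turn.lower().startswith(("i am", "i like", "my ")):
            self.facts.append(chat_turn)

    def recall(self, query: str) -> list[str]:
        # Mechanical retrieval via keyword overlap with stored facts.
        terms = set(query.lower().split())
        return [f for f in self.facts if terms & set(f.lower().split())]

mem = NaiveMemory()
mem.extract("I like hiking in Marin on weekends")
mem.extract("The weather is nice today")   # dropped: not a user fact
print(mem.recall("suggest a hiking trip"))  # recalls the hiking fact
```

The model never revisits or reinterprets what it stored, which is why the article calls for research breakthroughs rather than more engineering.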


Personalization is closely tied to memory and continual learning. Language models can’t yet personalize or learn user preferences in real time the way recommendation algorithms do, but they have the potential to go orders of magnitude deeper. Bay Area neolabs such as Thinking Machines Lab, along with the recently well-funded startup Humans&, are exploring different technical paths to personalization and continual learning: AI that improves through interaction with its users.


This isn’t just OpenAI’s potential game-changer; it’s high ground that any AI product must capture. Only by realizing memory and personalization can AI achieve a true “data flywheel.”


Question 7: Will OpenAI Lead the Next AI Paradigm?


OpenAI led the past two paradigm shifts, model scaling and reasoning models. It still has a strong chance of pioneering continual learning, widely recognized by top researchers in both China and the U.S. as the next major paradigm.


If OpenAI hadn’t undergone organizational changes, its probability of leading the next paradigm would be the highest.


Historically, OpenAI struck an effective organizational balance, combining top-down coordination with bottom-up innovation. This allowed it to deploy personnel at scale for model training while encouraging grassroots experimentation. Innovations like reasoning models and o3-mini emerged from the bottom up, and OpenAI consistently allocated ample compute to frontier research.

By comparison, Anthropic has limited resources and remains highly focused on coding and agentic AI. xAI started later and has been chasing SOTA models, leaving little bandwidth for exploratory research; Meta is similar.

Now, OpenAI has experienced multiple researcher departures, and its focus has been split by commercialization and product demands. As a result, we estimate that OpenAI, Google, and the Bay Area neolabs each have roughly a one-third chance of pioneering the next paradigm.

Google’s advantage lies in its dense talent pool and abundant resources; there’s always someone internally experimenting with something different.


Neolabs, including SSI, TML, Isara, Humans&, and Core Automation, have sprung up in the Bay Area, founded specifically to create the next paradigm. They are highly focused and include exceptional talents like Ilya Sutskever.

Even if OpenAI isn’t first to the next paradigm, it has the ability to catch up quickly once one emerges, and it holds the strongest advantage in product integration.


Question 8: Can OpenAI’s Ads Drive Its Next Growth Spurt?


OpenAI’s paid subscription rate is currently around 5%, so advertising remains the most effective monetization method for consumer scenarios. Even Netflix focused heavily on ad monetization last year.


Current ads are priced at roughly a $60 CPM (cost per thousand impressions), comparable to top-tier video inventory such as NFL broadcasts. This likely reflects OpenAI’s confidence in its ad targeting. Users can also interact directly with brands after seeing an ad, a new form of advertising for the AI era.
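To put that rate in context, a back-of-envelope calculation; the impression volume below is a hypothetical input chosen for illustration, not an OpenAI figure:

```python
# CPM arithmetic for the ~$60 rate quoted above. Only the CPM comes
# from the article; the impression volume is an assumed placeholder.

cpm = 60.0                  # $ per 1,000 impressions
daily_impressions = 500e6   # hypothetical: 0.5B ad impressions per day

daily_rev = daily_impressions / 1000 * cpm   # $30M per day
annual_rev = daily_rev * 365                 # ~$10.95B per year
```

Even modest impression volumes at this CPM imply billions in annual revenue, which is why the rate itself signals confidence in targeting.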


Advertisers’ demand for ChatGPT is bound to be huge. In long conversations, users reveal far more intent, and LLMs excel at recognizing it. Together, these create a “gold mine” far richer and easier to tap than anything accumulated before.

The challenge is that mining this gold requires building a full advertising system, infrastructure, and ecosystem: an extremely complex undertaking, and one likely quite different from traditional ad models. Within the next year, ad revenue may not scale significantly; early results will likely come from case studies and marketing.


Beyond advertising, ChatGPT’s bigger potential lies in e-commerce.


TikTok has already proven the power of an “ad platform + e-commerce loop” in China, where it operates as Douyin. In 2024, its Chinese e-commerce GMV exceeded ¥3 trillion, a loop that makes its per-user value far higher than a pure ad platform’s.

By contrast, Google and Meta have both struggled to close the e-commerce loop. ChatGPT is pursuing a different path, and its progress is faster than commonly perceived.


Instant Checkout has already integrated with Shopify, with a 4% take rate, connecting over 1 million Shopify merchants. Etsy is live, and major retailers like Walmart are following. More importantly, OpenAI partnered with Stripe to launch the Agentic Commerce Protocol and chose to open-source it, signaling an attempt to set platform standards.


OpenAI’s goal by the end of 2027 is to generate $11 billion in annual revenue from non-paying users, primarily through ads and e-commerce.

Over a 3–5 year horizon, ChatGPT could become the first non-Amazon player to establish a fully internalized e-commerce ecosystem in the U.S. market. This potential far exceeds ad revenue alone—the ceiling for advertising is Google’s ~$300 billion, whereas global e-commerce GMV exceeds $6 trillion. A 4% take rate means every $100 billion GMV generates $4 billion in revenue.
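The scale comparison above can be sketched directly from the article's own figures; the take rate is the Shopify arrangement already cited, and the GMV inputs are the article's round numbers:

```python
# Take-rate math behind the e-commerce upside argument above.

ad_ceiling = 300    # $B, Google's approximate annual search-ad revenue
global_gmv = 6000   # $B, global e-commerce GMV
take_rate = 0.04    # the 4% take rate cited for Instant Checkout

def revenue_at(gmv_b: float) -> float:
    """Platform revenue ($B) captured from a given GMV ($B)."""
    return gmv_b * take_rate

print(revenue_at(100))         # 4.0  -> "$100B GMV generates $4B"
print(revenue_at(global_gmv))  # 240.0, approaching the ad ceiling
```

Capturing even a fraction of global GMV at this take rate rivals the entire search-advertising market, which is the core of the bull case.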


Question 9: The Bear Case for OpenAI: The Next Yahoo?


One concern about OpenAI is that it has pioneered a new entry point for the LLM era, but does that mean it’s the final one? AI is still in its very early stages. If chatbots are disrupted by a completely new interaction mode, or if the next major entry point isn’t chat/information but agents, tasks, or entirely new hardware, could OpenAI fade like Yahoo?


The possibility exists, but it’s very low. Yahoo made two mistakes OpenAI is unlikely to repeat:


  • Underestimating search – Yahoo drove traffic to Google, essentially raising its future competitor.


  • Lacking in-house technical capability – By the time Yahoo realized search was critical, it was too late; acquiring a team couldn’t catch up.


Even so, Yahoo remained a top-tier internet company for a decade.


Today, flows of information and talent are extremely transparent. No lab would underestimate a key technology or be foolish enough to feed a competitor. Perhaps a single product like ChatGPT could fade the way Yahoo did, but OpenAI itself will not. Nor will Google.


In fact, this may be the first time in Silicon Valley history that a startup has challenged a giant and the giant isn’t an aging, rusty incumbent scrambling to catch up; it’s a master swordsman at the peak of his skill who has spent the last decade forging a legendary blade. OpenAI is engaged in a hard-fought battle worthy of respect.


Appendix: Key Personnel Who Have Left OpenAI


Labs

Fueling the Future, Navigator for Innovators

investment@etnalabs.co

(650) 668-4045

600 California St

San Francisco, CA 94108

Etna Capital Management

© 2025 All Rights Reserved
