
Your Daily Best of AI™ News

🚨 Tesla’s Autopilot/FSD marketing just took a real regulatory hit: a judge ruled Tesla engaged in deceptive marketing and opened the door to a potential 30-day suspension in California. If this sticks, it raises the bar for how “self-driving” can be sold, even when the product is really driver assistance.

The AI Insights Every Decision Maker Needs

You control budgets, manage pipelines, and make decisions, but you still have trouble keeping up with everything going on in AI. If that sounds like you, don’t worry, you’re not alone – and The Deep View is here to help.

This free, 5-minute-long daily newsletter covers everything you need to know about AI. The biggest developments, the most pressing issues, and how companies from Google and Meta to the hottest startups are using it to reshape their businesses… it’s all broken down for you each and every morning into easy-to-digest snippets.

If you want to up your AI knowledge and stay at the forefront of the industry, you can subscribe to The Deep View right here (it’s free!).

The Big Idea

OpenAI's moat is cracking. Google just proved it.

For two years, OpenAI has been the default answer to "which AI should I use?"

That assumption is breaking.

Google's Gemini 2.5 just matched, and in some cases exceeded, GPT-5 on the metrics that matter: speed, context length, multimodal capability, and cost. The gap that once made OpenAI untouchable is now measured in percentage points, not orders of magnitude.

And the market is starting to notice.

What's actually happening

OpenAI still holds 38% of AI application market share. Google sits at 35%. That's not a landslide. That's a knife fight.

The technical gap has collapsed. Gemini 2.5 processes up to 1 million tokens of context — entire codebases, full books, massive multimodal datasets. GPT-5 can't match that. Google's inference speed is faster. Their pricing is more aggressive. Their multimodal capabilities are now rated best-in-class.

OpenAI released GPT-5 with 80% fewer hallucinations and better reasoning. It's a strong model. But it's no longer the obvious choice. For the first time since ChatGPT launched, developers are asking "which model should I use?" instead of defaulting to OpenAI.

The shift isn't about one model being better. It's about the end of OpenAI's monopoly on "best AI."

The three cracks nobody's talking about

[1] Distribution is eating mindshare

OpenAI built the best product. Google owns the distribution.

Gemini is embedded in Search, Gmail, Docs, Android, Chrome, and Google Cloud. Billions of users will interact with Gemini without ever choosing it. They'll just use Google products, and Gemini will be there.

OpenAI has to convince every user to download ChatGPT, create an account, and change their workflow. Google just has to flip a switch and Gemini is already in the tools people use every day.

The best product doesn't always win. The most accessible product does.

[2] The moat was speed, not technology

OpenAI's real advantage was never the models. It was shipping speed.

They launched ChatGPT before anyone else. They iterated faster than big companies could move. They built an ecosystem while competitors were still in meetings.

But Google is no longer slow. Gemini 2.5 shipped with features that took OpenAI months to build. The speed gap — the only sustainable moat OpenAI had — is closing.

When everyone has access to the same talent, the same compute, and the same research, speed is the only differentiator. And OpenAI is losing it.

[3] The market is fragmenting

OpenAI used to be the only game in town. Now it's one option among many.

Claude is better for reasoning and long-form content. Gemini is better for multimodal and cost-efficiency. Llama is better for open-source and customization. DeepSeek is better for specific research tasks.

The question isn't "should I use OpenAI?" It's "which AI is best for this specific task?"

That fragmentation is death for a platform company. OpenAI's value was being the universal default. Once users start choosing different models for different tasks, the lock-in disappears.

Why this matters more than you think

OpenAI's business model depends on being the default. They charge premium pricing because they're the best. But "best" is subjective when the gap is 2% on a benchmark.

Google can afford to lose money on Gemini for years. It's a strategic play to own AI distribution. OpenAI can't. They need revenue to justify their valuation and fund the next model generation.

The moment OpenAI stops being the obvious choice, their pricing power collapses. And their pricing power is the only thing keeping them ahead of open-source alternatives.

The uncomfortable question

What happens when "good enough" AI is free?

Google can give Gemini away for free because they monetize through ads, cloud services, and ecosystem lock-in. OpenAI can't. Their entire business is selling API access and ChatGPT subscriptions.

If Gemini reaches parity with GPT-5 and costs half as much — or is free for Google Workspace users — why would anyone pay OpenAI?

The answer: They won't. Unless OpenAI builds a moat that isn't just "slightly better models."

What this means

If you're building on OpenAI's API, you're betting on their ability to stay ahead. That bet is riskier than it was six months ago. Start building model-agnostic systems now.
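Concretely, "model-agnostic" can be as simple as putting a thin interface between your application and any one vendor's SDK. Below is a minimal Python sketch of that idea: the OpenAI adapter uses the real openai client, while the Gemini adapter and the provider names are hypothetical placeholders you would fill in with whichever SDK you end up adopting.

```python
# Minimal sketch of a model-agnostic LLM layer.
# Assumes the official `openai` Python SDK; the Gemini adapter below is a
# hypothetical stub, not Google's actual client API.
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Provider-neutral interface the rest of your code depends on."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class OpenAIClient(LLMClient):
    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI  # reads OPENAI_API_KEY from the environment
        self._client = OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


class GeminiClient(LLMClient):
    """Placeholder adapter: wire in Google's SDK here if you adopt it."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("swap in the Gemini call of your choice")


def get_client(provider: str) -> LLMClient:
    """One config value picks the vendor; callers never import an SDK directly."""
    return {"openai": OpenAIClient, "gemini": GeminiClient}[provider]()


if __name__ == "__main__":
    client = get_client("openai")  # flip to "gemini" once that adapter exists
    print(client.complete("Summarize today's AI news in one sentence."))
```

The exact shape matters less than the dependency direction: your code calls LLMClient, not a vendor SDK, so the bet described above stays reversible.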

If you're choosing an AI for your business, the default is no longer obvious. Evaluate based on your specific use case, not brand recognition.

If you're watching the AI market, the next 12 months will determine whether OpenAI becomes the next Google or the next Yahoo. They have the lead. But leads evaporate fast when distribution and economics shift.

What's next

Expect OpenAI to double down on ecosystem lock-in. Custom GPTs, memory, integrations — anything that makes switching painful.

Expect Google to keep undercutting on price while matching on capability. They don't need to be better. They just need to be good enough and everywhere.

Expect the market to fragment further. The era of "one AI to rule them all" is ending. The era of "best AI for each task" is beginning.

BTW: The real tell isn't the benchmarks. It's the conversations. Six months ago, everyone talked about ChatGPT. Now they're talking about "which model should I use?" That's not a technical shift. It's a market shift. And OpenAI is on the wrong side of it.

Create AI Ads From Start to Finish

Have an ad concept ready but don't want to deal with expensive shoots or stock footage? ScriptKit lets you generate, curate, and edit AI ads in one platform.

What ScriptKit gives you

  • Generate — Create images with multiple AI models (Nano Banana, Reve) and turn them into videos with Veo 3.1 or Sora 2 Pro. Get 3 variations per prompt.

  • Curate — Review all your generations in one place. Select your best assets, organize by scene, and build your storyboard.

  • Edit — Arrange clips on a timeline, add captions, adjust timing, and export your polished AI ad in multiple formats.

Give ScriptKit a shot — go from concept to finished AI ad without wrangling teams or gear.

Today’s Top Story

Amazon’s $10B chip lock-in play

The Recap: Amazon is reportedly in talks to invest $10B+ in OpenAI, with strings attached that would require OpenAI to use Amazon’s in-house AI chips. It’s the latest example of “circular deals,” where capital and compute commitments double as infrastructure lock-in.

Unpacked:

  • This is vendor financing as strategy: the money doesn’t just fund R&D, it underwrites future demand for a specific chip stack.

  • For Amazon, tying OpenAI to its silicon helps amortize chip R&D and pressures the Nvidia GPU default by forcing software optimization for Amazon hardware.

  • For OpenAI, the trade is optionality for supply: guaranteed capacity can matter more than theoretical multi-cloud freedom when training runs are bottlenecked.

  • These deals also blur market lines for regulators: when the buyer and seller are effectively the same entity (capital out, cloud revenue back), “competition” becomes harder to define.

Bottom line: The next phase of the AI race is less about model demos and more about who controls the compute supply chain. If frontier labs accept cash bundled with chip requirements, “best model” increasingly becomes “best model on this vendor’s stack.”

Other News

Luminar enters bankruptcy after a flagship automaker program unraveled, exposing how brittle lidar supplier economics can be.

X sues a startup trying to reclaim the “Twitter” trademark, arguing the rights never transferred despite the rebrand.

USTR threatens EU trade retaliation after X’s $140M DSA fine, turning tech compliance into geopolitical leverage.

Coursera combines with Udemy, accelerating consolidation of the learning layer that will retrain workers for automation.

Mozilla pivots away from Firefox toward AI products, fueling fears the last independent browser engine is being hollowed out.

AI mainstreams formal verification by lowering the cost of mathematical proof, nudging software from “ship it” to “prove it.”

GitHub reprices Actions in 2026, forcing teams to rerun CI/CD unit economics once dev tooling becomes rent-bearing infrastructure.

Developer ports a Python codebase to JavaScript in hours using GPT-5.2, pointing to LLMs as a cross-ecosystem translation layer.

AI Around The Web

Test Your AI Eye

Can You Spot The AI-Generated Image?

Prompt Of The Day

Copy and paste this prompt 👇

"I want you to act as an email marketing expert specializing in cold emails. My first suggestion request is to create a cold email template that will attract the attention of my [ideal customer persona] with a unique subject line and then persuade them to take [desired action] with a strong call-to-action and compelling visuals."

Best of AI™ Team

Was this email forwarded to you? Sign up here.
