In partnership with

Your Daily Best of AI™ News

🚨 OpenAI committed over $10 billion to Cerebras for 750 megawatts of compute capacity through 2028, marking the ChatGPT maker's most aggressive diversification away from Nvidia as the capital requirements for AI scaling reach infrastructure-defining levels.

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

The Big Idea

The AI Index: Why Smart Businesses Are Rewriting Their Websites for Robot Readers

Google isn't dead. But the way people search is changing so fast, your website might be invisible to the only audience that matters in 2026: AI agents.

In 2026, the bulk of "customers" visiting your website to learn about and even purchase products won't be people — they'll be AI agents. 58.5% of U.S. Google searches now end in zero clicks, as AI-generated responses satisfy user intent instantly. And this isn't a passing trend.

The shift is measurable: ChatGPT serves 800 million users each week. Perplexity processed 780 million queries in a single month. The AI search engine market, valued at $43.6 billion in 2024, is projected to capture 62.2% of total search volume by 2030, with revenues nearing $379 billion.

How it works:

Unlike traditional SEO—where you optimize for keywords and blue links—AI-friendly websites are optimized for citations. If you aren't mentioned in AI answers, you're effectively invisible to potential consumers, regardless of how good your content is.

AI agents are becoming the primary consumers of brand content. They crawl, evaluate, and prioritize content for users, acting as an intermediary between consumers and brands and ultimately deciding what humans actually see.

This new discipline is called Large Language Model Optimization (LLMO) or Generative Engine Optimization (GEO). LLMO is the practice of optimizing your content, website, and brand presence to appear in AI-generated responses from tools like ChatGPT Search, Google's AI Overviews, and Perplexity.

The mechanics are straightforward but require a fundamental rethinking of how you structure content:

1. Speed matters more than ever. Maintain server response times under 200ms. For LLM crawlers (GPTBot, Google-Extended), retrieval windows are even tighter than for traditional search crawlers, so slow sites simply miss the window to be considered (a quick check is sketched after this list).

2. Structure trumps style. Pages that use clear H2/H3/bullet point structures are 40% more likely to be cited by AI engines. Q&A formats perform best for GEO because they closely match how users ask questions.

3. Summaries are your secret weapon. This is the big shift: websites now need AI-readable summaries at the top of key pages. Think of it as writing an executive summary for a robot reader who has milliseconds to decide if your content is citation-worthy.

Strategic text sequences, meaning deliberately crafted phrasing and ordering of information, are critical: they surface key information early and improve its salience in AI-generated summaries.

4. Answer-first architecture. Lead each section with a clear, direct answer to a specific question. This format mirrors how Perplexity presents information, making it more likely that your content gets cited as a source (see the page sketch after this list).

5. Fresh data wins. Content updated within the last 30 days earns 3.2x more citations. One study also found that the most frequently cited websites include quotes, citations, and statistics in their content, gaining up to 30-40% more visibility in LLMs simply by strengthening those credibility signals.
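For point 1, one quick way to sanity-check response times is curl's built-in timing variables. This is only a rough spot check; the URL is a placeholder, and the simplified GPTBot user-agent string is for illustration, since some sites serve bot traffic differently:

  # rough TTFB check against a placeholder URL, sent with a simplified GPTBot user-agent
  curl -o /dev/null -s -A "GPTBot" \
    -w "TTFB: %{time_starttransfer}s  total: %{time_total}s\n" \
    https://www.example.com/

If time to first byte regularly lands above 0.2 seconds, caching or CDN work is probably worth prioritizing before any content changes.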
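And here is a rough sketch of what points 2 through 5 can look like on a single page. The product, copy, prices, and date are all hypothetical; the point is the shape, not the words:

  <article>
    <!-- Point 3: a short, AI-readable summary at the very top (hypothetical copy) -->
    <p class="summary">Acme Invoicing is cloud invoicing software for freelancers.
      It automates payment reminders, supports 40+ currencies, and starts at $12/month.</p>

    <!-- Points 2 and 4: Q&A-style H2 sections, each opening with a direct answer -->
    <h2>How much does Acme Invoicing cost?</h2>
    <p>Plans start at $12 per month; annual billing saves 20%.</p>

    <h2>Who is Acme Invoicing for?</h2>
    <p>Freelancers and small agencies managing fewer than 50 clients.</p>

    <!-- Point 5: a visible freshness signal -->
    <p>Last updated: <time datetime="2026-01-10">January 10, 2026</time></p>
  </article>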

What makes this different:

Traditional SEO focused on getting you the top result users click on. GEO ensures you become the trusted source that AI confidently quotes when users ask questions.

Citation authority replaces backlinks, and visibility score matters more than rank. It's about reference rates — how often AI models cite your brand or content.

The competition is fierce. LLMs cite just 2–7 domains per response on average, far fewer than the 10 blue links in traditional search.

The business case:

Early movers are seeing dramatic results. Logikcull, a software company, was surprised when a growing number of new customers said they had discovered it through ChatGPT. As early as June 2023, five percent of all Logikcull leads came through ChatGPT, the equivalent of nearly $100,000 in monthly subscription revenue.

AI search visitors convert 4.4x better than traditional organic search visitors, Semrush research shows. And LLM traffic channels are projected to drive as much business value as traditional search by 2027.

Rock The Rankings shared a case: it took them over 12 months to rank in Google's top 3 for a key keyword, yet they cracked ChatGPT's SearchGPT results in a matter of days. The implication is that competition on AI platforms is still low and agility can yield quick wins.

The technical foundation:

Getting started requires some infrastructure changes:

- Allow AI crawlers. Double-check your robots.txt to ensure you're not disallowing known LLM user agents (e.g., GPTBot for OpenAI, ClaudeBot for Anthropic). A surprising number of sites still block these bots; see the example after this list.

- Implement schema markup. Proper Article and FAQ schema increases AI citations by 28%. Schema markup helps AI systems understand what your content is, who created it, and how each element connects; a sample FAQ block follows this list.

- Use conversational language. Structure your content the way people actually phrase things in a conversation: question-style headings followed by plain-language answers. This dramatically increases your chances of being featured.
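For the crawler item, a minimal robots.txt sketch that explicitly allows the commonly published LLM user agents. Bot names change, so verify the current tokens against each vendor's documentation:

  # Allow the major LLM crawlers (verify current user-agent names with each vendor)
  User-agent: GPTBot
  Allow: /

  User-agent: ClaudeBot
  Allow: /

  User-agent: Google-Extended
  Allow: /

  User-agent: PerplexityBot
  Allow: /

  # Everything else follows your normal rules
  User-agent: *
  Allow: /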
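For the schema item, a minimal FAQPage block in JSON-LD using the schema.org vocabulary. The product name, question, and answer are placeholders; a real page should mirror its visible Q&A content:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
      "@type": "Question",
      "name": "How much does Acme Invoicing cost?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Plans start at $12 per month, with a 20% discount for annual billing."
      }
    }]
  }
  </script>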

What's next:

2026 is the year search looks less like search and more like a conversation. As the search landscape fragments between Google, ChatGPT, Perplexity, Claude, Copilot, and more, AI assistants are taking over the front door to brands.

The brands that align their website content with the real, deep human intent driving their consumers to purchase will be the ones AI platforms mention, cite, and recommend.

The window for easy wins is closing. Early movers capture lasting visibility. As more businesses wake up to this shift, competition for those 2-7 citation slots will intensify.

The good news? The principles that improve visibility in LLMs—such as structure, clarity, and authority—also benefit your rankings in Google. As Google rolls out more AI-generated results through AI Overviews (the successor to its Search Generative Experience), these tactics will overlap even more.

Think of your website as a well-organized library: clear labeling, consistent categorization, and reliable content help both humans and AI find exactly what they need.

BTW: Some businesses are taking this a step further by creating llms.txt files—a machine-readable index specifically designed to help AI agents navigate their websites. It's like a sitemap, but for robots who actually read your content instead of just indexing it.
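A minimal llms.txt sketch, following the commonly proposed convention of a Markdown file served at the site root with a title, a one-line summary, and curated links. Every name and URL below is a placeholder:

  # Acme Invoicing

  > Cloud invoicing software for freelancers: automated reminders, 40+ currencies, plans from $12/month.

  ## Docs

  - [Pricing](https://www.example.com/pricing.md): plans, billing FAQ, and discounts
  - [Getting started](https://www.example.com/start.md): account setup in under 10 minutes

  ## Optional

  - [Blog](https://www.example.com/blog.md): product updates and release notes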

Create AI Ads From Start to Finish

Have an ad concept ready but don't want to deal with expensive shoots or stock footage? ScriptKit lets you generate, curate, and edit AI ads in one platform.

What ScriptKit gives you

  • Generate — Create images with multiple AI models (Nano Banana, Reve) and turn them into videos with Veo 3.1 or Sora 2 Pro. Get 3 variations per prompt.

  • Curate — Review all your generations in one place. Select your best assets, organize by scene, and build your storyboard.

  • Edit — Arrange clips on a timeline, add captions, adjust timing, and export your polished AI ad in multiple formats.

Give ScriptKit a shot — go from concept to finished AI ad without wrangling teams or gear.

Today’s Top Story

Wikipedia's $65M content monetization as OpenAI commits $10B to Cerebras

The Recap: Wikipedia announced partnerships with Amazon, Meta, Microsoft, Mistral AI, and Perplexity through its Wikimedia Enterprise product, marking the organization's 25th birthday by transforming from an open knowledge commons into a monetized infrastructure layer for AI training. Wikipedia's 65 million articles across over 300 languages are crucial training data for generative AI chatbots, and the foundation reported human traffic fell 8% while bot visits heavily taxed servers—forcing a choice between sustaining free access and extracting value from tech giants scraping at scale.

Unpacked:

  • The shift moves tech firms from scraping Wikipedia for free to using Wikimedia Enterprise, a commercial product tailored for large-scale data needs, fundamentally rewriting the economics of the open web. Wikipedia spent two decades building authority as a nonprofit, and now monetizes that trust precisely when AI companies need credible training data to avoid lawsuit-prone copyrighted content. The timing isn't coincidental—it's strategic leverage.

  • Jimmy Wales said "I'm very happy personally that AI models are training on Wikipedia" and noted the site wants to work with AI companies to "chip in and pay for your fair share", revealing that even ideological commitment to free knowledge bends when server costs exceed donation revenue. The framing as "partnership" obscures the reality: Wikipedia now operates a toll booth on what was previously a free highway, because bots disguised to evade detection were crushing infrastructure designed for humans.

  • The customer list reads like AI's who's-who—Google (announced 2022), Amazon, Meta, Microsoft, Perplexity, Mistral AI—but conspicuously excludes Anthropic and smaller model builders, suggesting either selective partnerships or that some players haven't yet agreed to pay. These companies can access content at volume and speed designed for their needs while supporting Wikipedia's nonprofit mission, which translates to: pay us or face rate limiting and legal pressure as regulatory scrutiny on scraping intensifies.

  • The competitive implications cut both ways. Paying for Wikipedia access creates a moat for established AI companies who can afford enterprise contracts, while startups face higher barriers to training competitive models. But it also establishes precedent that content creators deserve compensation, which could cascade to other knowledge bases, forums, and user-generated content platforms that currently get scraped freely.

Bottom line: Wikipedia's monetization marks the moment the open internet formally died and became an AI input layer with licensing fees. The organization spent 25 years building the world's most trusted knowledge repository by rejecting ads and corporate influence, and now packages that trust into a product for the same tech giants that would have corrupted it. The genius is that Wikipedia threaded the needle—it found a business model that sustains infrastructure without compromising neutrality, because the content is already created and the revenue comes from access fees rather than editorial influence. But the shift reveals the broader truth: anything valuable on the internet will eventually be monetized once AI companies demonstrate they'll pay for quality training data. The question isn't whether other platforms follow Wikipedia's path, but how quickly.

Other News

OpenAI signed a $10 billion deal with Cerebras for 750 megawatts of compute through 2028, diversifying beyond Nvidia while Sam Altman's prior investment in Cerebras creates alignment between OpenAI's infrastructure needs and his personal portfolio returns.

WhatsApp exempted Brazil from its third-party AI chatbot ban days after regulators ordered the suspension, proving platform power bends to national authorities when competition agencies coordinate investigations across Italy, Brazil, and the EU.

Apple battles Nvidia for TSMC foundry capacity as AI chip demand shifts power from Apple's 15-year dominance to a bipolar world where smartphone growth can't compete with data center scaling and the company that drives wafer demand controls the future.

Transition Metal Solutions raised $6 million to boost copper extraction 20-30% using microbial additives, turning supply chain constraints into venture opportunities as AI's infrastructure demands force investment in unconventional solutions like probiotics for mines.

RAM prices surge globally as AI data centers compete with consumer hardware for memory supply, exposing how AI's insatiable resource appetite triggers cost inflation across the entire technology ecosystem beyond just GPUs and chips.

GM settled the FTC's data-sharing enforcement order, demonstrating regulators now weaponize privacy violations against hardware companies that monetize user data, forcing businesses to recalculate whether surveillance economics justify compliance risk.

Palantir's ICE app surfaces operational deployment in enforcement raids, highlighting reputational risk for AI companies embedded in controversial government use cases that force strategic choices about whose problems you're willing to solve for revenue.

Data center copper math errors expose infrastructure planning mistakes, suggesting AI growth estimates outpace feasibility and revealing that capital deployment may be moving faster than the actual ability to build the physical infrastructure required.

China's renewable buildout quietly reshapes AI geopolitics as whoever controls clean power for compute centers controls next-generation advantage, making energy infrastructure as strategically important as chip manufacturing capacity.

AI Around The Web

Test Your AI Eye

Can You Spot The AI-Generated Image?

Select "Picture one", "Picture two", "Both", "None"

Login or Subscribe to participate

Prompt Of The Day

Copy and paste this prompt 👇

"I need a cold DM idea that will showcase the success stories of previous customers who have used my [product/service] and persuade my [ideal customer persona] to make a purchase with a personalized message.[PROMPT].[TARGETLANGUAGE]."

Best of AI™ Team

Was this email forwarded to you? Sign up here.
