Your Daily Best of AI™ News
🚨Zanskar claims 1 terawatt of geothermal power is being overlooked in the US, revealing the infrastructure constraint limiting AI scale isn't compute or capital—it's energy, and massive untapped domestic power sources could reshape competitive advantage.
The headlines that actually move markets
Tired of missing the trades that actually move markets?
Every weekday, you’ll get a 5-minute Elite Trade Club newsletter covering the top stories, market-moving headlines, and the hottest stocks — delivered before the opening bell.
Whether you’re a casual trader or a serious investor, it’s everything you need to know before making your next move.
Join 200K+ traders who read our 5-minute premarket report to see which stocks are setting up for the day, what news is breaking, and where the smart money’s moving.
By joining, you’ll receive Elite Trade Club emails and select partner insights. See Privacy Policy.
The Big Idea
The AI Whisperer Fallacy: Why Your Results Suck (And It's Not The AI's Fault)

"This AI is garbage."
"ChatGPT gave me terrible results."
"I tried AI and it doesn't work."
Sound familiar? Every day, thousands of people blame AI tools for bad outputs when the real culprit is staring back at them from the screen: a single-sentence prompt that expects magic.
AI isn't magic. It's a tool. And like any tool—a chainsaw, a camera, a compiler—the quality of your results depends entirely on how you use it.
The one-sentence epidemic
Here's what most people do:
"Write me a blog post about marketing."
"Create an ad for my product."
"Make this better."
Then they hit enter, sit back, and wait for brilliance. What they get instead is generic, surface-level garbage that sounds like it was written by a committee of robots (because, well, it was). They write it off as "AI doesn't work" and move on.
But here's the thing: the AI has no idea what you actually want.
It doesn't know your brand voice, your target audience, your product's unique positioning, or what "better" even means to you. It's not psychic. It's probabilistic. It's generating the most statistically likely response based on the microscopic amount of information you gave it.
You just asked a Ferrari to drive itself without telling it where you want to go.
Context is king
The secret to good AI results isn't a magic prompt formula or a hack you learned on Twitter. It's context—the information you provide before the ask that turns a blind guess into a targeted response.
Context includes:
- What you want to happen: the specific outcome or goal
- Who it's for: your audience, their pain points, their language
- What you've already tried: previous attempts, what worked, what didn't
- Your constraints: length, tone, format, brand guidelines
- Examples: show, don't just tell
- Your unique angle: what makes this different from generic advice
When you give AI context, you're not just asking it to generate text. You're giving it a lens through which to filter billions of possible outputs down to the handful that actually match what you need.
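To make that concrete, here's a minimal sketch in Python of the checklist above turned into a reusable habit: fill in the context once, and every ask gets assembled from it. Everything here (the `build_prompt` helper, the field labels, the sample values) is illustrative, not any particular tool's API.

```python
# Minimal sketch: the context checklist as a reusable prompt builder.
# All names and values here are illustrative, not a real API.

CONTEXT_FIELDS = [
    ("Goal", "goal"),                 # what you want to happen
    ("Audience", "audience"),         # who it's for
    ("What we've tried", "history"),  # previous attempts, what worked
    ("Constraints", "constraints"),   # length, tone, format, brand rules
    ("Examples", "examples"),         # show, don't just tell
    ("Unique angle", "angle"),        # what makes this different
]

def build_prompt(ask: str, **context: str) -> str:
    """Prepend every piece of context you have to the actual ask."""
    lines = [f"{label}: {context[key]}" for label, key in CONTEXT_FIELDS if key in context]
    lines.append(f"Task: {ask}")
    return "\n".join(lines)

# The same one-sentence ask, now wrapped in context (hypothetical values).
print(build_prompt(
    "Write a blog post about marketing.",
    goal="Drive newsletter signups from the post",
    audience="Solo founders who hate jargon",
    constraints="800 words, conversational, no buzzwords",
    angle="Marketing as talking to ten real customers, not 'growth hacking'",
))
```

The point isn't the code; it's that the "setup" lives somewhere you can reuse, instead of being retyped (or skipped) every time.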
The 80/20 of AI prompting
Time and time again, the pattern is clear: people who get consistently good AI results spend 80% of their time on setup and 20% on the ask.
They're not better at "prompt engineering." They're better at articulating what they want.
They know their brand voice well enough to describe it. They understand their audience deeply enough to explain their pain points. They've thought through their positioning clearly enough to articulate what makes them different.
The AI isn't doing the hard thinking for them. The AI is amplifying the hard thinking they've already done.
Example of a bad prompt:
"Write a landing page for my SaaS product."
Example of a good prompt:
"Write a landing page for [Product Name], a project management tool for remote design teams of 5-20 people. Our unique angle is that we integrate directly with Figma and automatically turn design files into task lists. Target audience is design team leads who are frustrated with designers losing context between tools. Brand voice is friendly, design-forward, and slightly irreverent—think Notion meets Mailchimp. Pain point we solve: designers waste 2+ hours per week manually creating tasks from design specs. Include a hero section, three benefit sections, social proof, and CTA. Length: ~500 words."
Guess which one gets usable results?
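Notice that the good prompt is really just structured data. As a rough sketch (plain Python, hypothetical placeholder values), the same brief becomes a template you refill for every product instead of rewriting it from scratch:

```python
# Sketch only: the landing-page brief as fill-in-the-blank data.
# Placeholder values are hypothetical; swap in your own product details.
brief = {
    "Product": "[Product Name], a project management tool for remote design teams of 5-20 people",
    "Unique angle": "Integrates directly with Figma and turns design files into task lists",
    "Audience": "Design team leads frustrated by designers losing context between tools",
    "Brand voice": "Friendly, design-forward, slightly irreverent (Notion meets Mailchimp)",
    "Pain point": "Designers waste 2+ hours per week manually creating tasks from design specs",
    "Structure": "Hero section, three benefit sections, social proof, CTA",
    "Length": "About 500 words",
}

prompt = "Write a landing page.\n" + "\n".join(f"{k}: {v}" for k, v in brief.items())
print(prompt)  # paste into whichever model you use
```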
Why this matters now:
We're at an inflection point with AI adoption. Early adopters have figured this out. They're getting 10x productivity gains, creating content that converts, and shipping faster than ever.
But the majority? They're still in the one-sentence phase. They try AI once, get bad results, and dismiss the entire category as hype.
The gap between "AI works for me" and "AI doesn't work" isn't about access to better models or secret prompts. It's about understanding that AI is a collaborator, not a magic wand.
The people who treat AI like a junior employee—giving clear instructions, providing context, sharing examples, and iterating on feedback—are getting transformative results.
The people who treat AI like a vending machine—insert coin, receive output—are getting vending machine quality.
The skill gap no one's talking about:
The real AI skill gap isn't technical. It's conceptual.
The bottleneck isn't learning how to code or understanding transformers or knowing the perfect prompt template. It's learning how to think clearly enough to communicate what you want.
Ironically, this is the same skill that makes you good at managing people, writing briefs, teaching concepts, or explaining your product. AI is just exposing who already had it and who didn't.
If you can't articulate your brand voice to a human, you can't articulate it to an AI.
If you can't explain your product's value prop in a conversation, you can't prompt an AI to write about it.
If you don't know what "good" looks like for your use case, you can't recognize when the AI nails it or misses.
AI doesn't replace thinking. It requires it.
What's next:
As AI tools get better, this context gap will get wider, not narrower.
Yes, models are becoming more capable. Yes, they're getting better at inferring intent from limited information. But the ceiling is also rising—the people who master context will be able to do things with AI that seem impossible to everyone else.
We're moving from "AI can write" to "AI can write exactly what I need, in my voice, for my audience, with my unique angle."
The difference between those two statements? Context.
So the next time you get bad results from AI and feel tempted to blame the tool, ask yourself: Did I give it enough information to succeed?
Because the AI isn't broken. Your input is.
BTW: Studies in human-computer interaction have shown that people consistently underestimate how much context machines need by 70-80%. We assume shared understanding that doesn't exist. It's called the "curse of knowledge"—and it's why most people's first AI prompts read like inside jokes to an audience of one. The AI doesn't get the joke.
Create AI Ads From Start to Finish
Have an ad concept ready but don't want to deal with expensive shoots or stock footage? ScriptKit lets you generate, curate, and edit AI ads in one platform.
What ScriptKit gives you
Generate — Create images with multiple AI models (Nano Banana, Reve) and turn them into videos with Veo 3.1 or Sora 2 Pro. Get 3 variations per prompt.
Curate — Review all your generations in one place. Select your best assets, organize by scene, and build your storyboard.
Edit — Arrange clips on a timeline, add captions, adjust timing, and export your polished AI ad in multiple formats.
Give ScriptKit a shot — go from concept to finished AI ad without wrangling teams or gear.
Today’s Top Story
Anthropic CEO publicly criticizes Nvidia at Davos

The Recap: Anthropic CEO Dario Amodei stunned attendees at Davos by publicly criticizing Nvidia despite the chipmaker being both Anthropic's largest hardware supplier and a major investor, marking a fundamental shift in how AI leaders view geopolitical strategy versus commercial relationships. The willingness to antagonize their biggest hardware dependency signals that AI labs now prioritize sovereign computing narratives over vendor partnerships as they position for regulatory favor and diversified supply chains.
Unpacked:
The geopolitical calculation is becoming explicit. Anthropic, backed by Amazon and Google with over $10 billion in funding, is racing toward a $350 billion valuation and potential 2026 IPO. At that scale, being seen as dependent on a single chip vendor creates strategic vulnerability—both to supply constraints and to regulatory scrutiny around concentration of AI infrastructure. By publicly criticizing Nvidia, Amodei signals to investors and regulators that Anthropic views hardware as a commodity input rather than a partnership moat, which positions the company as less vulnerable to Nvidia's pricing power or supply allocation decisions.
The commercial dynamics are increasingly zero-sum. Every AI lab needs massive GPU allocations, and Nvidia can't manufacture enough to satisfy demand from OpenAI, Anthropic, xAI, Google, Meta, and Microsoft simultaneously. When Nvidia decides who gets H200 and Blackwell chips first, they're effectively picking winners in the AI race. Anthropic's public criticism could be strategic positioning to justify diversifying to AMD, custom chips from Google/Amazon, or future alternatives—turning vendor criticism into negotiating leverage for better terms or priority allocation.
The investor optics matter more than the supplier relationship. Anthropic is reportedly raising $25 billion at a $350 billion valuation with Microsoft and Nvidia themselves contributing $15 billion combined. But the investor base extends far beyond Nvidia, and those investors want to hear that Anthropic has paths to scale that don't require perfect cooperation from a single vendor. Publicly questioning Nvidia demonstrates strategic thinking about supply chain resilience, which makes the company more attractive to sovereign wealth funds and institutional investors worried about concentration risk.
The timing aligns with broader AI sovereignty movements. The US, EU, and China are all pushing for domestic AI infrastructure that doesn't rely on foreign dependencies. By criticizing Nvidia—which manufactures primarily in Taiwan—Anthropic positions itself as aligned with that narrative, potentially opening access to government contracts, subsidies, or regulatory protection. This is the playbook from telecom and defense industries: make yourself the domestic champion by questioning foreign dependencies, then monetize that positioning through government relationships.
Bottom line: Anthropic's public Nvidia criticism signals that AI labs have reached sufficient scale that geopolitical strategy now trumps commercial supplier relationships. The company is betting that openly questioning their biggest hardware dependency wins more from investors and regulators than it costs in vendor goodwill—a calculation that only makes sense if they believe either alternative chips will emerge, or that Nvidia can't afford to retaliate because every AI lab is too important a customer. The move reveals how quickly power dynamics shift when an industry matures: Anthropic went from grateful supplicant for GPU allocations to confident critic comfortable antagonizing their supplier, because at $350 billion valuation with sovereign backing, they can credibly threaten to build or buy their way out of Nvidia dependence. The question is whether that threat is real or whether this is elaborate positioning while remaining structurally locked into Nvidia's roadmap for the next 3-5 years regardless of public posturing.
Other News
Lemonade launched insurance for Tesla Full Self-Driving customers, demonstrating real-time vehicle telemetry creates competitive moats where data ownership and algorithmic pricing directly disrupt century-old insurance models.
Netflix is redesigning its app to compete with social platforms for daily engagement, pushing beyond the classic subscription-VOD model to fight TikTok for attention as streaming growth ceilings dissolve boundaries between content categories.
India's app downloads rebounded to 25.5 billion in 2025 fueled by AI assistants, proving AI is now the primary growth engine for emerging markets and creating geographic power centers outside Silicon Valley.
Consumers spent more on mobile apps than games in 2025 driven by AI adoption, signaling practical AI tools now command pricing power previously reserved for premium gaming as utility flips spending priorities.
YouTube will let creators make Shorts with AI likenesses, commoditizing creator identity itself by allowing algorithmic replacement of human performance—either unlocking infinite content economics or destroying creator leverage entirely.
Stanford research shows AI destroys institutional legitimacy because opacity fundamentally undermines trust mechanisms that make organizations function, requiring wholesale redesign of structures built on human accountability.
OpenAI says data centers will pay for their own energy and limit water usage, preemptively negotiating public utility relationships by internalizing infrastructure costs rather than viewing regulatory pressure as temporary opposition.
Bolna raised $6.3M for India-focused voice orchestration with 75% self-serve revenue, demonstrating emerging markets are adopting AI infrastructure without US intermediaries and building parallel tech stacks.
Test Your AI Eye


Can You Spot The AI-Generated Image?
Prompt Of The Day
Copy and paste this prompt 👇
"I want you to act as a copywriting expert in social proof campaigns specializing in demonstrating value. My first suggestion request is to write a marketing campaign outline using the social proof framework to demonstrate the value and effectiveness of our product/service to an ideal customer persona. Include testimonials, case studies, and industry experts as social proof."Best of AI™ Team
Was this email forwarded to you? Sign up here.



