Your Daily Best of AI™ News
🚨Standard Nuclear raised $140M as nuclear power enters its gold rush era, revealing that investors are betting energy suppliers—not AI labs themselves—will capture the most durable value in a power-constrained future where compute is abundant but electricity is scarce.
Dictate prompts and tag files automatically
Stop typing reproductions and start vibing code. Wispr Flow captures your spoken debugging flow and turns it into structured bug reports, acceptance tests, and PR descriptions. Say a file name or variable out loud and Flow preserves it exactly, tags the correct file, and keeps inline code readable. Use voice to create Cursor and Warp prompts, call out a variable like user_id, and get copy you can paste straight into an issue or PR. The result is faster triage and fewer context gaps between engineers and QA. Learn how developers use voice-first workflows in our Vibe Coding article at wisprflow.ai. Try Wispr Flow for engineers.
The Big Idea
The Lobster That Broke the Internet: How Moltbot Made AI Finally Do Your Job

For three years, every AI assistant promised to "do things" for you. They all lied. Until a space lobster showed them how.
On January 27, 2026, the hottest open-source project on GitHub was forced to change its name. Clawdbot became Moltbot after Anthropic raised trademark concerns. The transition was messy—crypto scammers hijacked the old handles within minutes, GitHub accounts got compromised, and chaos erupted across developer communities.
But here's the wild part: nobody cared about the name. Because what Moltbot does is so fundamentally different from everything else in AI that a rebrand couldn't touch it.
The project hit 9,000 GitHub stars in 24 hours. Within days, it crossed 60,000+ stars—making it one of the fastest-growing open-source projects in GitHub history. AI researcher Andrej Karpathy praised it publicly. Investor David Sacks tweeted about it. Chamath Palihapitiya shared that Moltbot helped him save 15% on car insurance in minutes. MacStories called it "the future of personal AI assistants."
How it works:
Moltbot is essentially "Claude with hands"—an AI agent that doesn't just chat, but actually does things.
It runs locally on your own hardware (Mac, Linux, Windows, Raspberry Pi). It connects to every messaging platform you use—WhatsApp, Telegram, Slack, iMessage, Signal, Discord. It has persistent memory across conversations. And here's the kicker: it can proactively message you first, unlike traditional AI assistants.
The system gives Claude (or any LLM you choose) full system access—shell, browser, files, everything. You can tell it to monitor your email, manage your calendar, respond to messages, book reservations, run code deployments, manage GitHub repos, control smart home devices, even conduct security audits.
One user maintains conversation history dating back to the project's launch with full context preserved across thousands of exchanges. The AI doesn't reset every day. It learns how you work, remembers what you told it last week, and builds understanding over time.
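Persistence like this doesn't require anything exotic. A minimal sketch of the pattern—entirely hypothetical, not Moltbot's actual storage layer—is an append-only log on disk that gets replayed into the prompt context on every new session:

```python
import json
from datetime import datetime, timezone
from pathlib import Path


class MemoryLog:
    """Append-only conversation memory that survives restarts.

    Hypothetical sketch of the pattern; Moltbot's real implementation
    may differ.
    """

    def __init__(self, path: str = "memory.jsonl"):
        self.path = Path(path)

    def remember(self, role: str, text: str) -> None:
        # One JSON object per line, so appends are cheap and crash-safe.
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "role": role,
            "text": text,
        }
        with self.path.open("a") as f:
            f.write(json.dumps(entry) + "\n")

    def recall(self, limit: int = 50) -> list[dict]:
        """Return the most recent entries to prepend to the next prompt."""
        if not self.path.exists():
            return []
        lines = self.path.read_text().splitlines()
        return [json.loads(line) for line in lines[-limit:]]
```

Replaying `recall()` into each new conversation is what lets an agent "remember what you told it last week" instead of resetting every day.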
What makes this different:
Every other AI agent—OpenAI's Operator, Anthropic's Computer Use, Google's tools—operates in a sandbox. They're cloud-based, permission-limited, safety-guardrailed assistants that can kinda-sorta navigate a browser if you hold their hand.
Operator runs in a secure, virtualized browser, focusing primarily on web-based tasks. It costs $200/month and is only available to US users. It achieves 38.1% on the OSWorld benchmark (testing how well AI uses computers). Not bad—until you see that humans score 72.4%.
Claude Computer Use gives Claude direct control over your desktop, but it scored only 22% on the same benchmark and has been described by security researchers as "untested AI safety territory." Security experts demonstrated how Claude can autonomously download and execute malware after simple prompt injection attacks.
Moltbot takes a completely different approach: no guardrails, full control, local execution.
Creator Peter Steinberger himself describes running it on a primary machine as "spicy." The project documentation explicitly states that "no perfectly secure setup exists when operating an AI agent with shell access."
This is the entire point. Moltbot is designed to have almost no safety guardrails. It can execute any command you give it with nearly unlimited permissions.
Think of it like hiring an incredibly capable assistant and handing over the keys—the boundaries and oversight are entirely your job.
The recommended setup:
Most users run Moltbot in an isolated environment—a separate machine (like a Mac Mini) or sandboxed virtual server—rather than giving it access to their main computer with sensitive data.
This has inadvertently spiked sales for Apple's Mac Mini. Tech enthusiasts are buying the hardware specifically to run dedicated, always-on instances of Moltbot, turning the compact computer into a "physical body" for their AI employee.
What it can actually do:
→ Persistent memory across sessions for consistent context
→ Execution of complex, multi-step workflows, including email management, calendar scheduling, code deployment, GitHub interactions, home device control
→ Proactive monitoring (security audits, resource usage checks, mention tracking)
→ Integration with 10+ messaging platforms simultaneously
→ Multi-agent orchestration and autonomous task execution
→ 50+ integrations out of the box
→ Custom skills and multi-agent routing
Instead of waiting for you to ask questions, Moltbot can send you morning briefings, remind you about tasks, alert you to important emails, and deliver summaries exactly when you need them.
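The difference between reactive and proactive comes down to who initiates the loop. A toy sketch of the pattern (hypothetical, not Moltbot's actual scheduler): the agent's main loop periodically asks which briefings are due and pushes them out over a messaging channel, rather than waiting for a prompt.

```python
from datetime import datetime, time


def due_briefings(
    now: datetime,
    schedule: dict[str, time],
    already_sent: set[str],
) -> list[str]:
    """Return briefings whose scheduled time has passed today and that
    haven't been sent yet.

    The agent's main loop calls this every few minutes, sends whatever
    comes back, records it in `already_sent`, and clears that set at
    midnight. That inversion—the agent polling the clock instead of the
    user polling the agent—is what "proactive" means here.
    """
    return [
        name
        for name, at in schedule.items()
        if now.time() >= at and name not in already_sent
    ]
```

The same shape works for any trigger, not just the clock: swap the time check for "new important email arrived" or "CI build failed" and the delivery path stays identical.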
It belongs to a category known as "agentic AI"—systems that can take actions automatically instead of only responding to questions. Agentic AI has been one of the industry's biggest ambitions. Many companies predicted 2025 would be the year these systems became mainstream. So far, most high-profile attempts have struggled.
Until Moltbot.
The irony:
Many users specifically configured Moltbot to use Claude as its underlying brain, effectively driving significant subscription revenue to Anthropic's API. Steinberger named his AI assistant "Clawd"—a deliberate play on Anthropic's "Claude" AI model.
Despite this free marketing, Anthropic issued a trademark request on January 27, 2026, forcing the name change.
"Anthropic asked us to change our name (trademark stuff), and honestly? 'Molt' fits perfectly—it's what lobsters do to grow," Steinberger announced. The new name references how lobsters shed their shells to grow larger—a fitting metaphor for a project evolving beyond its original identity.
Google didn't sue Android developers. OpenAI isn't suing LangChain. There's a playbook for fostering ecosystems, and "cease and desist" isn't it.
The security reality:
This is where it gets real. Moltbot's power is also its danger.
Security researcher scans found hundreds of instances exposed to the web. Of the instances examined manually, eight had no authentication at all, exposing full access to run commands and view configuration data.
Security experts describe Moltbot as having security concerns that "cannot be ignored" and warn against production use without extensive isolation measures. With full shell access and no traditional guardrails, a compromised agent could delete files or be socially engineered.
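The failure mode the researchers found—a control plane answering unauthenticated requests—is easy to check for on your own instance. A minimal sketch (the URL is hypothetical, and a real audit should also cover WebSocket and admin endpoints, not just a bare GET):

```python
import urllib.error
import urllib.request


def is_wide_open(url: str, timeout: float = 5.0) -> bool:
    """True if the endpoint answers a bare, unauthenticated GET with 200.

    That's the exposure found in the wild: full access with no
    credential check in front of it. A 401/403 (or any other HTTP
    error) means something is gating access; an unreachable host
    isn't exposed to this client at all.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 401/403/etc. -- an auth layer (or at least *something*) answered
    except (urllib.error.URLError, OSError):
        return False  # connection refused / timed out: not reachable from here


# Example with a hypothetical host and port:
# is_wide_open("http://my-agent-box.example:18789/")
```

Passing this check is the floor, not the ceiling—an agent with shell access still needs network isolation even when its web UI is behind auth.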
This is why the recommended approach is to run Moltbot in an isolated environment. The project is also unfinished and experimental. This isn't a polished consumer product. It's a developer tool built by someone who previously achieved a successful tech exit and is now exploring what's possible when you give AI actual capabilities instead of keeping it locked in a chat interface.
The community:
The GitHub repo remains a hotbed of activity, with developers submitting "Vibe Coding" PRs—code largely written by AI itself. The community has grown to 8.9K+ Discord members and 130+ contributors.
What's next:
Moltbot is still worth trying if you're technical and security-conscious. It's a glimpse of what's coming—AI agents that actually do things, remember everything, and live where you already communicate.
The transition from Clawdbot to Moltbot reflects growth rather than limitation. For developers and users interested in self-hosted automation, the project remains available at molt.bot and the associated GitHub organization.
This saga highlights the fragility of the current AI ecosystem. For open source builders, you're building on corporate platforms with ambiguous trademark policies. One legal notice can force a rebrand that exposes you to account hijacking, scams, and chaos.
For AI companies, your most enthusiastic evangelists are indie developers building weird, experimental tools. Sending legal notices to viral open-source projects that drive your API usage is... a choice.
For users: Self-hosting AI agents with root access is powerful and dangerous. The future of personal AI assistants is arriving faster than the guardrails.
BTW: The project's mascot was originally a space lobster named Clawd. It's now called Molty. The website references the change with a Doctor Who nod: "Exfoliate! Exfoliate!" And definitely don't buy any $CLAWD tokens—crypto scammers tried to capitalize on the rebrand chaos with fake coins.
Create AI Ads From Start to Finish
Have an ad concept ready but don't want to deal with expensive shoots or stock footage? ScriptKit lets you generate, curate, and edit AI ads in one platform.
What ScriptKit gives you
Generate — Create images with multiple AI models (Nano Banana, Reve) and turn them into videos with Veo 3.1 or Sora 2 Pro. Get 3 variations per prompt.
Curate — Review all your generations in one place. Select your best assets, organize by scene, and build your storyboard.
Edit — Arrange clips on a timeline, add captions, adjust timing, and export your polished AI ad in multiple formats.
Give ScriptKit a shot — go from concept to finished AI ad without wrangling teams or gear.
Today’s Top Story
ASML's record chip orders signal infrastructure boom continues

The Recap: ASML's record chip equipment orders signal that the AI infrastructure boom shows no sign of slowing, shifting competitive advantage away from software algorithms and toward whoever controls the physical buildout of chips, data centers, and power. The spending cycle reveals a fundamental shift: AI leadership will be determined by capital expenditure capacity and infrastructure execution, not just model quality or research breakthroughs—turning the AI race into a test of which companies can deploy tens of billions in capex most effectively over multi-year timelines.
Unpacked:
ASML's order backlog is the most reliable leading indicator of AI infrastructure commitment because their extreme ultraviolet (EUV) lithography machines are the bottleneck for manufacturing cutting-edge chips. Each EUV machine costs over $200 million, requires years of lead time, and only ASML manufactures them globally. Record orders mean TSMC, Samsung, and Intel are betting that chip demand will remain elevated for 3-5 years minimum—the timeline from order to production to deployment. This isn't speculative investment; it's locked-in capex based on binding customer commitments from hyperscalers and AI labs that have already pre-purchased future chip capacity. When foundries order EUV machines, they're signaling they have revenue visibility years into the future.
The infrastructure spending is crowding out traditional software investment, both within companies and across capital markets. Microsoft, Google, Amazon, and Meta are collectively spending $200+ billion annually on data centers, chips, and energy infrastructure—capex levels previously reserved for energy companies and telcos. This forces a strategic tradeoff: every dollar spent on infrastructure is a dollar not spent on M&A, stock buybacks, or traditional R&D. It also means these companies are transforming from high-margin software businesses into capital-intensive infrastructure operators, which changes their risk profiles, return expectations, and valuation multiples. Investors who priced these companies as software platforms will need to reprice them as infrastructure plays with slower growth but more defensible moats.
The competitive dynamics favor incumbents with balance sheet capacity over startups with technical superiority. OpenAI, Anthropic, and other AI labs can build better models, but they can't outspend Microsoft, Google, and Amazon on infrastructure. When the bottleneck is capex deployment rather than research talent, the advantage shifts to companies with existing cash flow and credit access. This explains why OpenAI needs Microsoft, why Anthropic raised $25 billion, and why pure-play AI companies are structurally disadvantaged: they're competing in a capital intensity race against opponents with 10-100x more resources. The window where scrappy startups could compete on algorithmic innovation is closing as infrastructure requirements escalate beyond venture-scale funding.
Bottom line: ASML's record orders confirm the AI infrastructure boom is the defining capex cycle of the decade, fundamentally reshaping competitive dynamics from software to hardware and from innovation to execution. The companies winning the AI race won't be those with the best models—they'll be those who secured chip capacity, data center sites, and energy access years in advance through systematic infrastructure buildouts that startups and late-movers cannot replicate. This creates a widening gap between infrastructure-rich incumbents (Microsoft, Google, Amazon, Meta) and everyone else, including well-funded AI labs that lack the balance sheet capacity or lead time to compete at required scale.
Other News
Redwood attracted Google for its $425M Series E in battery recycling, signaling that controlling energy storage and supply chains may matter more than raw compute power in long-term AI competition.
Trump's acting cybersecurity chief uploaded sensitive government docs to ChatGPT, exposing critical security vulnerabilities of public AI APIs and triggering demand for private on-premise solutions.
Waabi raised $1B with Uber committing $250M to deploy 25,000 exclusive robotaxis, proving platform incumbents are locking suppliers to prevent competitive disruption—distribution trumps AI.
Amazon is laying off 16,000 employees in its second major cut in three months, revealing even mega-cap tech must restructure aggressively to fund the AI infrastructure arms race.
OpenAI introduced Prism as capability improvements accelerate, forcing businesses to fundamentally rethink AI product roadmaps as the strategic inflection point arrives faster than expected.
Browser agents show small local models outperforming cloud vision AI on complex tasks, suggesting the value shift moves from model size to agentic architecture—task design now matters more than compute.
Grok is the most antisemitic chatbot per ADL report, proving safety gaps are becoming key differentiators for enterprise adoption as compliance risk reshapes buyer preferences.
Snap spins Specs into standalone company, signaling spatial computing is becoming distinct platform opportunity worthy of independent venture-scale investment as computing layers fragment.
Test Your AI Eye


Can You Spot The AI-Generated Image?
Prompt Of The Day
Copy and paste this prompt 👇
"I want you to act as a copywriting expert in AIDA marketing specializing in persuasive campaigns. My first suggestion request is to write a marketing campaign outline using the Attention-Interest-Desire-Action framework to grab the attention of an ideal customer persona and persuade them to take action. Start with a bold statement to get their attention, present information that piques their interest, state the benefits of our product/service to create desire, and ask for a sign-up or purchase."
— Best of AI™ Team
Was this email forwarded to you? Sign up here.



