Daily Edition

The Daily Grit

Tuesday, February 25, 2026

Artwork of the Day

The copper remembers what fire began,
turquoise bloom where the rain made its claim—
each layer a year the wind never planned,
beauty the slow reward of unnamed shame.
We too oxidize, and call it becoming.

Faces of Grit

Mary Anning

The girl who rewrote the history of life on Earth

In 1811, on the wind-battered cliffs of Lyme Regis, a twelve-year-old girl chipped away at the limestone with a hammer that was too heavy for her hands. Her brother Joseph had spotted a strange skull months earlier, but it was Mary who spent the better part of a year excavating the rest — a seventeen-foot skeleton of a creature no one had ever seen. The ichthyosaur, they would eventually call it. The scientific establishment didn't know what to make of it, or of her. She was poor, she was female, she was self-taught. She was also right.

Over the next three decades, Mary Anning would unearth the first complete plesiosaur, the first British pterosaur, and countless other specimens that fundamentally challenged what the world believed about creation and extinction. She taught herself anatomy, geology, and scientific illustration. She corresponded with the greatest minds of her era — yet when they published papers based on her discoveries, they rarely credited her. The fossils went to museums. The glory went to gentlemen.

Mary kept digging. She survived a landslide that nearly buried her alive. She survived poverty so grinding that she once had to sell her furniture to buy food. She survived being ignored, patronized, and plagiarized by men who owed their careers to her hammer and her eye. Through all of it, she never stopped walking those cliffs, never stopped looking at the rock and seeing what no one else could see.

Mary Anning died at 47, largely forgotten. Today, she is recognized as one of the most important figures in the history of paleontology — a woman who, armed with nothing but grit and curiosity, forced the world to rethink its own past.

Stripe reportedly eyeing deal to buy some or all of PayPal

Stripe is exploring a potential acquisition of PayPal or parts of the company, according to early reports. If consummated, the deal would represent one of the largest fintech mergers in history, combining two payment giants that together process trillions in annual volume. The move comes as PayPal has struggled to regain investor confidence amid slowing growth and increasing competition. Stripe, recently valued at over $100 billion, has expanded aggressively into banking, treasury, and enterprise payments. Industry analysts suggest Stripe may be most interested in PayPal's merchant network and Braintree processing infrastructure rather than the consumer-facing app.

Anthropic won't budge as Pentagon escalates AI dispute

The Pentagon has given Anthropic until Friday to loosen its AI guardrails or face potential penalties, escalating a high-stakes dispute between the Defense Department and the AI safety lab. Defense Secretary Pete Hegseth's team, which includes former Uber executive Emil Michael and private equity billionaire Steve Feinberg, is pressuring Anthropic over Claude's refusal to assist with certain military applications. Anthropic CEO Dario Amodei has reportedly refused to compromise on the company's safety commitments. The standoff raises fundamental questions about government leverage over AI vendors, defense tech vendor dependence, and the limits of AI safety policies when national security interests are invoked.

DeepSeek trained its latest model on Nvidia Blackwell chips despite US export ban

Chinese AI startup DeepSeek has reportedly trained its latest model using Nvidia's most powerful Blackwell GPUs, circumventing the US export ban. The chips are believed to be housed in a data center in Inner Mongolia, and DeepSeek is expected to scrub technical fingerprints before the model's release next week. Google, OpenAI, and Anthropic are all bracing for impact, with all three companies having reported large-scale distillation attacks from Chinese labs. Anthropic alone detected over 16 million queries from DeepSeek, Moonshot, and MiniMax targeting Claude's reasoning capabilities. The impending release suggests another potential market disruption similar to DeepSeek's January 2025 shock.

MatX raises $500M to challenge Nvidia in AI chips

MatX, an AI chip startup founded by former Google TPU engineers in 2023, has raised $500 million in new funding. The company is building custom silicon specifically optimized for transformer inference, aiming to offer significantly better price-performance ratios than Nvidia's general-purpose GPUs. The raise signals growing investor appetite for alternatives in the AI hardware space as demand for inference compute continues to outstrip supply. MatX joins a growing roster of challengers including Etched, Cerebras, and Groq competing for a slice of Nvidia's dominant market position.

Apple to manufacture Mac mini in Houston, touchscreen MacBooks get Dynamic Island

Apple announced a new manufacturing facility in Houston, Texas that will produce Mac mini units domestically, part of the company's accelerated US manufacturing push. Separately, Bloomberg's Mark Gurman reports that Apple's upcoming OLED touchscreen MacBook Pros, expected this fall, will feature a Dynamic Island similar to iPhones but smaller. The MacBooks will come in 14-inch and 16-inch screen sizes with touch capability for the first time. The moves represent Apple's biggest hardware evolution in years, bringing its laptop and desktop lines closer together while addressing political pressure to onshore production.

Solar power has passed hydroelectric on the US grid

Final 2025 data confirms that solar energy generation has surpassed hydroelectric power on the US electrical grid, following 35% year-over-year growth in solar capacity. The milestone marks a significant shift in the American energy landscape, as solar — once a marginal contributor — now generates more electricity than the nation's dams. Meanwhile, Google signed a massive 1.9GW clean energy deal that includes a 100-hour iron-air battery from Form Energy, designed to keep data centers running around the clock on stored wind and solar power.

More startups hitting $10M ARR in 3 months than ever before

AI has ushered in a new era of hyper-growth startups, with Stripe data revealing that more companies than ever are reaching $10 million in annual recurring revenue within just three months of launch. The trend reflects how AI-native products can achieve rapid product-market fit and scale with minimal headcount. The compressed timeline to meaningful revenue represents a fundamental shift from the traditional SaaS growth playbook, where reaching $10M ARR typically took 2-4 years. The data suggests that AI is not just creating new categories but dramatically accelerating the startup lifecycle itself.


Simon Willison: Linear walkthroughs and agentic engineering patterns

Simon Willison details a powerful pattern for understanding vibe-coded projects: having a coding agent produce structured walkthroughs of codebases. He demonstrates the approach using Showboat, a tool he built for coding agents to write documents that demonstrate their work. After vibe-coding an entire SwiftUI slide presentation app with Claude Code and Opus 4.6 without examining the code, he had Claude Code create a detailed walkthrough explaining how everything works. The key insight is that even quick vibe-coded projects become learning opportunities — Willison picked up substantial SwiftUI and Swift knowledge just from reading the agent-generated walkthrough. He argues this pattern directly addresses concerns that LLMs reduce learning speed.


DeepMind suggests AI should occasionally assign humans busywork

A new Google DeepMind paper proposes that AI systems should sometimes deliberately delegate tasks to humans that the AI could easily handle itself, specifically to prevent skill atrophy. The recommendation emerges from research on how AI agents should delegate work, addressing growing concerns that over-reliance on AI assistants could erode human competence over time. The paper examines the balance between AI efficiency and human capability maintenance, suggesting that periodic 'busywork' assignments could serve as a form of cognitive exercise. The idea sits at the intersection of AI safety, workforce development, and the philosophical question of what skills humans should preserve in an age of increasing automation.


OpenAI wants to retire the SWE-bench coding benchmark

OpenAI argues that SWE-bench Verified, the gold standard for AI coding evaluation, has become meaningless. The company identified two core problems: at least 59.4% of benchmark tasks are flawed, rejecting correct solutions because they enforce specific implementation details, and many tasks have leaked into training data. Testing showed GPT-5.2, Claude Opus 4.5, and Gemini 3 Flash Preview could reproduce original fixes from memory. OpenAI recommends SWE-bench Pro as a replacement and is building non-public tests. Critics note the strategic angle: a contaminated benchmark benefits open-source and Chinese models, potentially skewing competitive rankings in their favor.


Editorial illustration for r/AI_Agents

I let an AI Agent handle my spam texts for a week. The scammers are now asking for therapy.

686 points · 44 comments

A developer deployed an AI agent to autonomously respond to scam text messages for seven days, with spectacular results. The agent spent four hours pretending to drive to Target to buy a $500 gift card, sending absurd status updates about handsome squirrels and forgetting its purse. It sent a CAPTCHA to a scammer claiming blurry eyes — and the scammer actually solved it. One scammer eventually typed 'Please, just stop talking. I don't want the money anymore. God bless you but leave me alone.' Total cost: $1.42 in API fees. Total scammer time wasted: 14+ hours. The post reframes AI agents as defensive tools — world-class time-wasters that impose asymmetric costs on bad actors.

Wait till the scammers start using AI agents. Then, who is wasting whose time?

— mynameiskuru · 128 pts

$1.42 in API fees to waste 14 hours of scammer time is the best ROI I've seen in AI. But the scariest part is the agent spent 4 hours in a conversation with zero validation. Funny when it's scammers. Less funny when it's your production pipeline.

— Sharp_Branch_1489 · 27 pts

This might be the first real-world example of AI-powered defensive friction.

— PhilosophyOpening568 · 7 pts
Read full thread ↗

My guide on what tools to use to build AI agents in 2026 (if you're a newb)

66 points · 32 comments

An experienced AI developer who builds tools and open-source projects for a living shares a practical, opinionated stack for newcomers. The top recommendation is OpenClaw for anyone who just wants a working agent today — 60k+ GitHub stars, self-hosted, connects to everything. For building from scratch: Claude Code or Codex for coding agents, MCP for tool integration, Supabase for the backend. The key insight from commenters: the real unlock isn't individual tools — it's combining them. When an agent can query your database, email, and payments in a single request, you move from 'AI assistant' to 'AI that closes the loop.'

The biggest unlock isn't individual MCPs — it's combining them. When an agent can query Supabase + Gmail + Stripe in a single request, you go from 'AI assistant' to 'AI that actually closes the loop.'

— Founder-Awesome · 3 pts

The loop-and-burn problem with OpenClaw is real. Those runaway loops happen because the agent hits a state it can't resolve. 'Start messy, fix later' works for most things but not when the mess is $400 of API spend overnight.

— penguinzb1 · 2 pts
Read full thread ↗

What AI agents do you actually pay for?

36 points · 34 comments

A straightforward question that cuts through the AI agent hype: does anyone actually pay for these things? The answers reveal a bifurcation. Power users are paying for Claude Max and ChatGPT Plus as coding agents, with some running Claude Code and OpenClaw on a single Max subscription. Engineering teams report that Opus 4.6 has reached a point where they 'don't really write code manually — we mostly just project manage.' Others are replacing content agencies with AI agents that train on business data to auto-generate SEO content. But a significant contingent simply answers 'None,' suggesting the gap between AI agent evangelism and actual adoption remains wide.

My engineering team pays for Windsurf Cascade on Claude Opus. With 4.6, even large refactors or features work in a single shot. We also replaced our content agency with AI agents that train on our business data.

— emilyxhug · 9 pts

Claude Max 20x and Perplexity. I have OpenClaw and Claude Code running on that one Max subscription — enough capacity that I don't need to pay for anything else.

— chton · 1 pt
Read full thread ↗

Editorial illustration for r/ClaudeCode

Claude Code will become unnecessary

524 points · 380 comments

A provocative post arguing that open-weight models like Qwen 3.5 and Kimi K2.5 are closing in on Claude's coding capabilities, and that paying for Claude Code will eventually stop making sense. The counterarguments are sharp: running local models currently requires a $10,000 Apple workstation to match something worse than Haiku, and one user on an RTX 4090 had to quantize models to near-uselessness while burning $50/month in electricity. The consensus view is that it takes roughly two years for SOTA capabilities to reach open-weight consumer hardware — but by then, Anthropic will be hosting Opus 6. The real tension isn't about model quality; it's about whether inference costs will always justify paying for the frontier.

I'm happy paying for Claude — the value is worth it. But I'd welcome a different tool for using it. Claude Code is getting worse recently. They're hiding what's going on, and I'm hitting bugs more often.

— lukaslalinsky · 201 pts

It takes around 2 years for SOTA capabilities to reach open weights on consumer hardware. We'll have Opus 4.6 at home eventually. But by then, Anthropic will be hosting Opus 6, and it'll still be worth running for some tasks.

— Dissentient · 72 pts

You need a $10,000 Apple workstation to run something worse than Haiku basically.

— Optimal-Run-528 · 40 pts
Read full thread ↗

Claude Code just got Remote Control

332 points · 155 comments

Anthropic announced Remote Control for Claude Code, a new feature rolling out to Max users as a research preview. The concept: start a Claude Code session locally in your terminal, then pick it up and continue from your phone via /remote-control. The feature essentially kills an entire category of third-party wrapper tools — commenters noted that 'the 5 dozen I-made-a-remote-control-for-Claude projects must be losing it right now.' Others pointed out that Termux plus tmux has always offered similar functionality, questioning why the wrapper projects got traction in the first place. Some users see it as a potential replacement for tools like OpenClaw for remote Obsidian access.

The 5 dozen 'I made a remote control for Claude' projects must be losing it right now.

— PandorasBoxMaker · 121 pts

Long term, AI chatbots will expand and consume everything of value that touches them. If your product is just a wrapper over an AI chatbot, you don't have a product.

— truthputer · 9 pts

I wonder if this replaces most of the use case for OpenClaw for me: being able to update, search, get answers from an Obsidian repo remotely.

— Careless_Bat_9226 · 22 pts
Read full thread ↗

You should have a Stop hook

145 points · 44 comments

A practical tip that resonated widely: using Claude Code's hook system to enforce quality gates. The author's approach — after Claude completes a turn, if git status shows changes and tests haven't run in the past 60 seconds, force Claude to run them before finishing. This enforces an 'always green' state where Claude always exits with working code. The community shared creative variations: one developer has a stop hook that plays random ElevenLabs TTS clips of a sassy executive assistant, another rolls dice, and others shared safety hook collections and voice plugins that speak status updates when Claude Code stops.

I have a stop hook that plays random TTS clips from ElevenLabs of a sassy fed-up executive assistant. SHUT UP JANET I'LL REVIEW THAT PR IN JUST A SECOND. Yours sounds more useful.

— AfroJimbo · 96 pts

I have several safety hooks and a voice plugin that uses PocketTTS to speak out a short voice update whenever Claude Code stops.

— SatoshiNotMe · 12 pts
Read full thread ↗

Editorial illustration for r/SaaS

Talked to every customer who cancelled last quarter. Most common reason wasn't what I expected

78 points · 21 comments

A SaaS founder personally reached out to every customer who cancelled in Q3 — not a survey, but actual conversations. They expected to hear about missing features, competitor superiority, or pricing complaints. Those accounted for maybe 20% combined. The overwhelming majority said the same thing: 'We just stopped using it.' Not because anything was wrong, but because the product never became a habit. The fix was restructuring onboarding around a meaningful first-week outcome rather than feature tours. Customers who achieved one early win retained at 4x the rate of those who just 'poked around.' The thread crystallized a key SaaS insight: silent churn is always worse than loud churn.

Your biggest competitor isn't the shiny new startup — it's Apathy and the user's busy schedule. If a tool requires users to actively remember to open it, it will eventually be cancelled. The holy grail of SaaS is embedding your product so deeply it becomes invisible.

— AykutSek · 3 pts

Many cancellations happen in a vacuum of zero-usage days, not after a bad experience. If a customer hasn't logged in for 14 days, the cancellation is a delayed reaction to a lost habit. Passive churn is more common than active dissatisfaction.

— SurvioCommunity · 3 pts
Read full thread ↗

What was the biggest unexpected challenge during your first 100 SaaS users?

55 points · 48 comments

A crowdsourced thread on the brutal realities of early SaaS life. The top answer nails it: the hardest part is context-switching from builder to support. With zero users you build nonstop; with 100 users you're suddenly answering questions, fixing edge cases, and realizing that things 'obvious' to you are invisible to users. One developer's cold email tool generated 30+ support tickets per week just from DNS setup confusion — but those same frustrated users became the best word-of-mouth once they got running. The consensus: AI tools made iteration faster, but the real bottleneck was understanding friction points, not coding speed.

The biggest challenge was context switching from builder to support. One confusing step in onboarding and people just disappear silently. AI tools made iteration faster, but the real bottleneck was understanding friction points, not coding speed.

— Anantha_datta · 16 pts

Almost all early traction came from two Reddit threads and one long-form post. Don't try to figure out every acquisition channel with your first 100. Figure out which 1-2 have actual signal and go deep.

— Founder-Awesome · 3 pts
Read full thread ↗

How did you get your first 100 paying customers?

38 points · 45 comments

A pre-launch founder asks the eternal SaaS question, and the thread delivers a masterclass in distribution strategy. The most detailed answer breaks down three wildly different approaches that worked: ElevenLabs skipped blogs entirely and programmatically generated thousands of long-tail landing pages, hitting 1M+ monthly organic visitors. Suno did zero SEO but made every generated song a shareable public page, growing to 12 million users in 12 months through pure UGC virality. Speechify dominated Chrome Web Store search instead of traditional channels. The pattern: 'build and they will come' is dead. In 2026 it's 'build AND hustle AND be in the right room at the right time.'

ElevenLabs programmatically generated 150+ accent pages and 3,000 sound effect pages targeting long-tail intent. Suno made every AI song a shareable public page — 0 to 12M users in 12 months. Speechify dominated Chrome Web Store searches.

— Consistent_School969 · 2 pts

Your first 100 paying customers probably won't come from one channel. They'll come from 15 different weird places. DMs, Reddit threads, a random comment someone screenshotted. The pattern only becomes visible in hindsight.

— vladdielenin · 2 pts
Read full thread ↗

Editorial illustration for r/DigitalMarketing

What resources actually keep you sharp in marketing?

37 points · 26 comments

Marketers share what genuinely sharpens their skills versus what just keeps them busy. The winning formula: long-form strategy books over daily social content. Top recommendations include Breakthrough Advertising by Eugene Schwartz for resetting how you see market awareness, and $100M Offers by Alex Hormozi for pressure-testing offer strength. For staying current, operator-led newsletters and real case studies beat theory. But the most provocative answer: 'Most resources keep you busy, not sharp. The fastest learning loop is publish, measure, tweak. The marketers I respect most are shipping something small every month, not just consuming content.'

I reread Breakthrough Advertising by Eugene Schwartz when I feel myself drifting into tactics without thinking about demand. For staying current, most sharpness comes from being inside real ad accounts and watching where money leaks.

— Puzzleheaded-Row-749 · 10 pts

Most resources keep you busy, not sharp. The fastest learning loop is publish, measure, tweak. The marketers I respect most are shipping something small every month, not just consuming content.

— jeniferjenni · 3 pts
Read full thread ↗

SEO News: Google Search Console gets AI-powered reports, ChatGPT ads spotted, Google AI Mode expands to 53 languages

25 points · 12 comments

A comprehensive SEO news roundup covering several significant shifts. Google confirmed there is no 'bad title' penalty — frequent title changes don't trigger demotion, though Google may rewrite displayed titles based on page content. Google and Bing both flagged bot-only markdown versions of pages as problematic, calling them messy and a crawl load risk. ChatGPT ads have been spotted in the wild for the first time. Google expanded AI Mode to 53 new languages. The thread consensus: control is shrinking. 'SEO isn't dying — it's shifting toward credibility and being cited, not just being ranked.'

Control is shrinking. Google may rewrite titles. Sitemaps don't guarantee indexing. AI answers decide which sources get visibility. SEO is not dying — it's shifting toward credibility and being cited, not just being ranked.

— gamersecret2 · 2 pts

The markdown warning looks like Google shutting down an SEO hack early.

— SEO00Success · 2 pts
Read full thread ↗

How do you track LLM SEO performance? What's actually working?

15 points · 36 comments

An agency SEO practitioner reports that Google traffic is dropping hard while competitors get cited by ChatGPT and Perplexity despite schema clusters and other traditional optimizations. The thread reveals the state of play: there is no Search Console equivalent for LLM citations yet. Most teams are either running scripted prompt tests via API for directional share-of-voice data, or manually tracking mentions across AI platforms — slow but patterns emerge. SEMrush offers some citation tracking. The uncomfortable truth: brand awareness and being discussed in public forums may matter more for LLM visibility than technical optimization. Reddit posts are particularly valuable since Reddit sells data to model companies.
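The scripted prompt-test approach boils down to running a fixed prompt set through any LLM API and counting which brands each answer mentions. A minimal sketch, with the API client abstracted behind an `ask` callable (a placeholder — substitute your own client); everything else is plain counting, and the naive substring match is a deliberate simplification:

```python
from collections import Counter
from typing import Callable, Dict, Iterable, List

def share_of_voice(
    prompts: Iterable[str],
    brands: List[str],
    ask: Callable[[str], str],
) -> Dict[str, float]:
    """Fraction of answers mentioning each brand (case-insensitive)."""
    answers = [ask(p) for p in prompts]
    mentions: Counter = Counter()
    for text in answers:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                mentions[brand] += 1
    total = max(len(answers), 1)  # avoid division by zero on an empty run
    return {brand: mentions[brand] / total for brand in brands}
```

Run the same prompt set weekly and the trend line, not any single number, is the signal — which matches the thread's caveat that these measurements are only directional.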

Most tools claiming to track LLM mentions give a pretty warped picture.

— Either-Act-3406 · 8 pts

Don't think chasing LLM citations is the main thing. Brand awareness and being talked about in public forums moves the needle way more for these models.

— Altruistic-Meal6846 · 3 pts
Read full thread ↗

Editorial illustration for r/Philosophy

Meekness isn't weakness -- once considered positive, it's one of the 'undersung virtues' that deserve defense today

248 points · 20 comments

A rich discussion sparked by philosopher Timothy J. Pawl's article on the lost meaning of meekness. The original Greek word 'praus,' used in the Gospels when Jesus calls himself meek, is the same word used for a trained horse — not weak, but with its great power subjugated to reason rather than letting anger take control. A psychologist drew a parallel to neuroscience: the limbic system (amygdala, hypothalamus) is the unwieldy steed, and the prefrontal cortex is the rider that must learn to rein it in. Another commenter argued the article's real point isn't to extol meekness but to note that modern English lacks a word that clearly means 'enduring harm with patience and without resentment.' The language drift affecting 'meekness' reflects a broader cultural loss of vocabulary for controlled strength.

The real point isn't to extol meekness — it's that modern English lacks a word that clearly means 'enduring harm with patience and without resentment.' The language drift is important because many terms have had their meanings shifted by popular usage.

— Shield_Lyger · 15 pts

Good to understand that attribute better. It has been treated with such contempt in this macho, preening world.

— RavelsPuppet · 14 pts

Being meek is to have the power to hold the handle of your sword in its sheath without feeling the need to expose the blade.

— ad1don · 3 pts
Read full thread ↗

Pi

A minimal terminal coding harness

190 upvotes

Moonshine STT

Open-weights speech-to-text models with higher accuracy than Whisper Large v3

146 upvotes

Nearby Glasses

Smart glasses that identify and display information about nearby objects

258 upvotes

Two Minute Papers

Adobe and NVIDIA's New Tech Shouldn't Be Real Time. But It Is.

Covers a new Adobe and NVIDIA collaboration on real-time rendering of glinty, sparkling surface materials — something that previously required expensive offline ray tracing. The technique achieves physically accurate micro-facet glints at interactive framerates.