Daily Edition

The Daily Grit

Monday, February 17, 2026

Artwork of the Day

Beneath the weight of midnight seas,
the coral hums its ancient light —
cyan threads through indigo,
a language older than the dark,
that only silence learns to read.

Faces of Grit

Portrait of Tenzing Norgay

Tenzing Norgay

The Sherpa who stood on top of the world

On the morning of May 29, 1953, at nearly 29,000 feet on the Southeast Ridge of Everest, Tenzing Norgay and Edmund Hillary faced a forty-foot vertical rock wall -- the last obstacle before the summit. The oxygen was thin, the wind brutal, and every step upward was a negotiation with death. Tenzing, who had already attempted Everest six times, wedged himself into the gap between the rock face and an overhanging cornice of ice and inched his way up. He had no reason to believe the seventh attempt would be different. He had every reason to try anyway.

Born in the village of Thame in the Khumbu region of Nepal, Tenzing grew up in the shadow of Chomolungma -- the mountain the Sherpa people call Mother Goddess of the World. He ran away from home twice as a boy, drawn to Darjeeling, where foreign expeditions hired Sherpas as porters. By his twenties he had become one of the most experienced high-altitude climbers alive, but to the Western mountaineering establishment he was still support staff -- someone who carried loads so that others could claim glory.

At 11:30 that morning, Tenzing and Hillary took the final steps onto the summit together. Tenzing planted the flags of Nepal, India, Britain, and the United Nations in the snow and buried an offering of chocolate and biscuits -- a gift to the mountain gods. When asked who stepped onto the summit first, he refused to say, insisting they had reached it almost together, as a team.

He spent the rest of his life training the next generation of climbers as director of the Himalayan Mountaineering Institute in Darjeeling, ensuring that the mountains would never again be a place where Sherpas were invisible. He proved that grit is not just about reaching the top -- it is about coming back down and lifting others up.

Alibaba releases Qwen3.5, its most capable open-weight model yet

Alibaba launched Qwen3.5, a hybrid architecture model combining linear attention and mixture-of-experts with only 17 billion active parameters per query. The model aims to match top Western models while remaining open-weight, signaling that China's open-source AI race shows no sign of slowing. The release drew nearly 400 points and 183 comments on Hacker News, with developers praising the increasingly competitive landscape of freely available frontier models. The multimodal-native design positions Qwen3.5 as a strong contender for agent-based workflows and tool use.

A 14-year-old's origami pattern holds 10,000 times its own weight

Miles Wu, a 14-year-old from California, developed an origami-based folding pattern for emergency shelters that can support 10,000 times its own weight while remaining flat-packable and cheap to manufacture. The design draws from mathematical origami principles to create structures that are simultaneously rigid when deployed and collapsible for transport. The story earned 442 points on Hacker News, where commenters noted the intersection of materials science, mathematics, and humanitarian engineering. Wu's shelters could be deployed in disaster relief zones where traditional construction is impractical.

Study finds self-generated agent skills are largely useless

A new research paper found that when AI agents are allowed to generate their own skills and tool libraries, the resulting artifacts are rarely reusable or generalizable. The study tested multiple agent frameworks and found that skills created during one task almost never transferred successfully to new contexts, challenging the premise behind self-improving agent architectures. The paper drew 280 points and 120 comments on Hacker News, with practitioners sharing similar frustrations. The finding suggests that human-curated skill libraries remain far more effective than auto-generated ones.

Anthropic and the Pentagon clash over Claude's military use

Anthropic is pushing back against Pentagon requests for unrestricted access to Claude, insisting on contractual guarantees against use in autonomous weapons systems and mass domestic surveillance. A reported 200 million dollar contract hangs in the balance as the two sides negotiate acceptable use boundaries. The dispute highlights the growing tension between national security demand for AI capabilities and the safety commitments that differentiate Anthropic from competitors. The company's refusal to grant blanket access stands in contrast to other AI labs that have signed military contracts with fewer restrictions.

What your Bluetooth devices reveal about you

A detailed technical investigation reveals how Bluetooth Low Energy advertising packets leak persistent identifiers from phones, headphones, fitness trackers, and other devices, enabling continuous tracking of individuals across locations. The researcher built a tool called Bluehood that passively captures and correlates these signals, demonstrating how a network of cheap receivers could reconstruct daily movement patterns. The post earned 328 points and 131 comments, with security researchers noting that most users have no idea their devices are broadcasting identifiable signals. Mitigations exist but are poorly implemented across most consumer hardware.
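
The correlation step described above is simple enough to sketch: given passively logged sightings, grouping by the leaked identifier reconstructs a per-device movement trail. This is an illustrative sketch of the idea, not Bluehood's actual code; the data and field layout are assumptions.

```python
from collections import defaultdict

def movement_trails(sightings):
    """Group passively captured BLE sightings by device identifier.

    Each sighting is (timestamp, receiver_location, device_id) -- the
    persistent identifier leaked in a BLE advertising packet.
    Returns {device_id: [(timestamp, location), ...]} sorted by time,
    i.e. a per-device movement trail.
    """
    trails = defaultdict(list)
    for ts, location, device_id in sightings:
        trails[device_id].append((ts, location))
    return {dev: sorted(trail) for dev, trail in trails.items()}

# Hypothetical log from three cheap receivers around a neighborhood:
log = [
    (830, "cafe", "earbuds-a1"),
    (905, "gym", "tracker-7f"),
    (1210, "office", "earbuds-a1"),
    (1745, "cafe", "earbuds-a1"),
]
print(movement_trails(log)["earbuds-a1"])
# [(830, 'cafe'), (1210, 'office'), (1745, 'cafe')]
```

The point of the demonstration is that no active connection or pairing is needed: a handful of passive receivers plus this kind of grouping is enough to turn broadcast packets into a daily schedule.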

India hosts major AI summit as Fractal Analytics' muted IPO signals investor caution

India opened a four-day AI Impact Summit in New Delhi attended by executives from OpenAI, Anthropic, Nvidia, Microsoft, and Google, alongside heads of state pushing for a Global AI Commons. Meanwhile, Fractal Analytics -- India's first AI company to go public -- had a muted IPO debut, as enthusiasm for AI technology collided with jittery investors in the wake of a broader sell-off in Indian software stocks. Sam Altman said India now has 100 million weekly active ChatGPT users and the largest student user base worldwide. Blackstone also backed Indian startup Neysa with up to 1.2 billion dollars to build domestic AI compute infrastructure.

ByteDance restricts Seedance after Disney threatens legal action

ByteDance announced restrictions on its AI video tool Seedance 2.0 after Disney, Paramount Skydance, the Motion Picture Association, and SAG-AFTRA all sent cease-and-desist letters over rampant generation of copyrighted characters. The tool had gone viral for its ability to reproduce Disney characters, replicate actors' voices, and recreate entire fictional worlds with alarming fidelity. Japan also opened an investigation into potential copyright infringement involving anime characters. The case highlights a growing legal fault line: copyright law was built for a world where copying took effort, and AI-generated reproductions challenge every assumption about enforcement.


AI is destroying open source, and it's not even good yet

Jeff Geerling argues that AI companies are strip-mining open source projects for training data while contributing nothing back, creating a one-way extraction pipeline that threatens the sustainability of the ecosystem that modern software depends on. The post examines how AI-generated pull requests flood maintainers with low-quality contributions, while AI-powered code completion tools reproduce copyleft code without attribution. With 196 points and 145 comments on Hacker News, the piece struck a nerve with maintainers who report spending increasing time triaging AI-generated issues. Geerling contends that without structural changes to how AI companies interact with open source, the commons that enabled the AI revolution will collapse under the weight of its own exploitation.


Simon Willison introduces Chartroom and datasette-showboat for agent-driven documentation

Simon Willison released two new tools extending his Showboat system, which helps coding agents create rich Markdown documentation of their work. Chartroom is a CLI charting tool designed to integrate with Showboat, while datasette-showboat is a Datasette plugin that enables real-time remote publishing -- so developers can watch agent-generated documents build incrementally on a web server as the agent works. The tools exemplify Willison's approach of running parallel coding agents and building infrastructure that makes their output observable and useful. He notes that simply telling Claude Code to run the tool's help command is enough for it to learn the full workflow, a testament to well-designed CLI documentation serving as ad-hoc skill files.


Ricursive Intelligence raised 335 million dollars at a 4 billion dollar valuation in four months

Ricursive Intelligence, a nascent AI chip startup, raised 335 million dollars at a 4 billion dollar valuation just four months after founding, driven entirely by the reputation of its team. The founders are so prominent in the AI hardware world that every major lab attempted to hire them before they decided to strike out on their own. The deal underscores how concentrated AI talent has become -- a small number of people with the right expertise can command billions in capital before shipping a single product. The investment signals continued conviction that custom silicon for AI workloads remains a massive opportunity despite the dominance of Nvidia.


Editorial illustration for r/AI_Agents

What's the best AI to pay for right now? (2026)

73 points · 79 comments

A straightforward question that drew a rich thread of real-world comparisons between ChatGPT Plus, Claude Pro, Gemini Advanced, and Perplexity Pro. The consensus is shifting: Claude is winning converts for deep reasoning and long-context work, ChatGPT remains the all-rounder but users report recent quality declines, Gemini is strong within the Google ecosystem, and Perplexity dominates citation-based research. Multiple commenters noted that the answer changes month to month as models rapidly iterate, with one advising to simply ask again next month.

Codex this month -- double limits, best SOTA model. You will do more with less. Next month, ask again. AI world is moving fast.

— bakawolf123 · 22 pts

Took me too long to lean into Claude, but I bought a sub recently and genuinely enjoy the interaction. It does a much better job trying to understand what I'm after and being a good thought partner.

— musicsurf · 9 pts

Claude Pro, easily. The context window is massive and it doesn't lose the thread halfway through like some others do.

— Outhere997 · 77 pts
Read full thread ↗

I've been running AI agents 24/7 for 3 months. Here are the mistakes that will bite you.

29 points · 12 comments

A homelab user shares hard-won lessons from three months of running AI agents around the clock. The top mistakes: vague config boundaries that led an agent to reply to spam and like random social media posts, exposing API ports to the internet without authentication, and not having a kill switch for runaway behavior. The post offers specific fixes for each problem, including binding to localhost only, using SSH tunneling, and setting up a Telegram command for emergency shutdown. The thread attracted other users building similar SDLC pipelines with multi-agent architectures.
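
The "bind to localhost only" fix is a one-line change in most server setups. A minimal sketch in Python (port 0 just asks the OS for any free port):

```python
import socket

# Binding to 127.0.0.1 means the agent's API is reachable only from the
# machine itself; remote access then goes through an SSH tunnel or a
# Tailscale-style overlay instead of an open internet port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # 0 = let the OS pick a free port
host, port = srv.getsockname()    # ('127.0.0.1', <ephemeral port>)
print(host)                       # loopback only, not 0.0.0.0
srv.close()
```

The same principle applies to any agent framework's host setting: `0.0.0.0` exposes the port to every network interface, `127.0.0.1` to none but your own machine.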

Setting up OpenClaw on an old 2015 MacBook Pro with Opus 4.6 to build a full SDLC pipeline -- requirements, PRDs, architecture, execution, testing, and PR reviews. Realizing that persona files need much better role definitions.

— livinglogic · 1 pt

The pattern across all these mistakes is unclear handoffs between human intent and agent autonomy. The fix: explicit scope of authority, full execution traces, and progressive permissioning.

— Beneficial-Panda-640 · 1 pt
Read full thread ↗

How are you preventing skill atrophy after using AI for so long?

17 points · 19 comments

A developer who has relied on AI tools for two years notices their raw problem-solving muscle feels weaker -- SQL and regex from memory have been replaced by tab-completion, and blank-screen paralysis sets in when AI tools are down. The thread produced a standout response from a 20-year telecom veteran who reframes the issue: forgetting regex syntax is memory offloading, not cognitive decline. The real flag is losing the habit of working through ambiguity without immediate feedback. The fix is not no-AI days but doing harder things where AI cannot help yet.

20+ years in telecom. The SQL/regex thing isn't skill atrophy -- it's memory offloading. You also can't recite phone numbers anymore. The skill was never memorizing syntax. It was knowing when you need a regex and what it should match.

— Pitiful-Sympathy3927 · 43 pts

The Reviewer vs Creator shift is what happens when you move up in any field. The danger isn't relying on AI -- it's trusting it without verifying.

— Coffee_And_Growth · 2 pts
Read full thread ↗

Editorial illustration for r/ClaudeCode

OpenClaw creator joins OpenAI -- let the forks begin

234 points · 51 comments

The biggest story of the day in the Claude Code community: OpenAI hired Peter Steinberger, creator of the open-source agent framework OpenClaw. The post triggered a wave of concern about the project's future, with the community calling on Anthropic to respond. Commenters noted the irony that Anthropic's legal team had previously sent threats to Steinberger, which may have pushed him toward OpenAI. Others pointed out that OpenClaw was already heavily built with OpenAI tokens and that OpenAI letting Steinberger use their name while Anthropic sent lawyers tells you everything about how the two companies operate.

If you can't go viral, buy out the one who can.

— hello5346 · 32 pts

If they make it anything like Codex it will be excellent. The entire thing is vibe coded with Codex already -- basically just OpenAI tokens in a GitHub repo.

— dashingsauce · 32 pts

Anthropic was sending threats from their legal team -- that's how the creator ended up at OpenAI in the first place.

— charlierguo · 4 pts
Read full thread ↗

Claude Code for mobile: tmux + Termius + Tailscale

127 points · 70 comments

A practical showcase post revealing a clean mobile workflow for running Claude Code from an iPhone using tmux for session persistence, Termius as a mobile terminal, and Tailscale for secure networking. The approach resonated with users who had been struggling with half-baked mobile apps, with the top comment noting it has worked perfectly since forever and the real question is why people keep building inferior alternatives. Others shared variations including Proxmox plus WireGuard plus Termux, and one developer built an open-source browser-based workspace manager called Myrlin Workbook with cost tracking and session grouping.

Please don't give my setup away.

— rasbid420 · 20 pts

Got tired of the SSH/tmux layer so I built a browser-based workspace manager instead -- real embedded terminals behind a Cloudflare tunnel, with session persistence, cost tracking, and workspace-linked docs.

— TheRealArthur · 14 pts

I use the Happy app to connect to my Claude Code instance.

— Formal_Bat_3109 · 8 pts
Read full thread ↗

Agentic coding is amazing -- until you hit the final boss

94 points · 56 comments

A developer running a fully agentic workflow across Django, Next.js, and Electron for six months describes the wall every AI-driven team hits: end-to-end testing. The agent excels at generating Playwright code but cannot produce stable tests without deterministic fixtures, seeded databases, and strict data-testid contracts. Without these, the agent chases flaky UI state endlessly. One commenter argued that E2E is only the second-to-last boss -- the true final boss is tasteful UI/UX, which no agent can produce reliably. The thread produced actionable advice: reduce change volume between test runs, give agents access to screenshots via Playwright, and treat the agent like a junior QA.
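
The deterministic-fixtures requirement is easy to satisfy and pays off immediately: seed every data generator, and the agent sees identical UI state on every run instead of chasing random state. A minimal sketch (the row shape is hypothetical):

```python
import random

def seeded_fixtures(seed=42, n=3):
    """Generate the same fake user rows on every run, so Playwright tests
    (and the agent debugging them) always see identical UI state."""
    rng = random.Random(seed)  # instance-level RNG: no global-state leakage
    return [
        {"id": i, "name": f"user{i}", "score": rng.randint(0, 100)}
        for i in range(n)
    ]

# Two runs with the same seed produce identical fixtures:
assert seeded_fixtures() == seeded_fixtures()
```

Pair this with truncate-and-reseed before each test run and stable `data-testid` attributes, and the agent's flaky-state loop largely disappears.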

Have you considered reducing the volume of change before each automated test run? Also, giving Claude access to screenshots of your application made a huge difference for me.

— AggravatinglyDone · 38 pts

E2E is the second-to-last boss. The final boss is having a tasteful UI/UX. Even if Claude knocked out E2E perfectly, it still can't catch bad aesthetics.

— TeamBunty · 33 pts
Read full thread ↗

Editorial illustration for r/SaaS

A blocklist of every Reddit lead-gen spammer in r/SaaS

116 points · 41 comments

A community hero compiled a detailed blocklist of over 20 accounts and their associated products that have been flooding r/SaaS with thinly disguised lead-generation spam. Every account on the list sells some variation of Reddit lead-gen services, and blocking them all reportedly makes the subreddit usable again. The post was universally celebrated, with the top comment (70 upvotes) reading simply 'Make this guy a mod.' Others called for subreddit-level bans and noted that several products appear to operate multi-account networks for additional astroturfing.

Make this guy a mod.

— Aexxys · 70 pts

Let's get some universal blocking at the subreddit level. Is that possible, mods?

— conjectureobfuscate · 31 pts

Best post I've seen in this sub in a year. Gojiberry is definitely using multiple accounts.

— srilankan · 7 pts
Read full thread ↗

I love when founders ask for microservices because I can bill them double

94 points · 33 comments

A freelance developer confesses the quiet truth of the contracting world: when non-technical founders request microservices and Kubernetes for an MVP, they are unknowingly giving permission to over-engineer everything. A simple session becomes a separate auth service at 10 billable hours. One SQL database becomes three syncing via events. A five-dollar VPS becomes a complex AWS cluster. The post is not about scamming -- it is about founders dictating architecture they do not understand. The top reply, from a 20-year SaaS veteran, delivered the real lesson: if your dev agrees with your architecture choices instead of pushing back, they are optimizing for their invoice, not your product.

99% of early-stage SaaS needs one repo, one database, deployed on a $20/month server. The biggest tell that a dev is going to overcharge you is when they agree with your architecture choices instead of pushing back.

— mrtrly · 30 pts

OpenAI has hundreds of millions of users with a single PostgreSQL write database. Setting up hosted Kubernetes is a few CLI commands away. If it adds weeks of work, find someone better.

— Tupcek · 7 pts
Read full thread ↗

PhD researcher who can't code built a SaaS with vibe coding -- $1K MRR in 25 days

58 points · 62 comments

A bioinformatics PhD who cannot write React built Plottie, an AI tool for publication-ready scientific figures, reaching 2,000 users and 1,000 dollars MRR in 25 days using Claude, Cursor, and what the poster calls vibe coding -- not understanding half of the codebase but having tests pass. The key strategy was building a free SEO-magnet discovery site first, then funneling users to the paid creation tool, mirroring the Ahrefs playbook. The poster launched with a paid beta rather than free, noting that paying users provide dramatically better feedback. Commenters were split between genuine praise and skepticism, with practical advice to add PostHog for session replay and implement AI cost kill-switches early.

The discovery site as a top-of-funnel play is really smart. Building the free SEO magnet first and letting it feed the paid tool is basically what Ahrefs did.

— m2e_chris · 10 pts

Building an academic tool for over 6 months with zero coding experience. 320+ on the waitlist. Worried about reputation if the product is poor.

— Wise_Restaurant3290 · 7 pts
Read full thread ↗

Editorial illustration for r/DigitalMarketing

Is Google consolidating the web? 46% of cited domains just disappeared.

33 points · 13 comments

A research team analyzed 100,000 keywords across 20 niches after Google's transition to Gemini 3 as the default AI Overviews model and found that 46 percent of previously cited domains vanished from results. Before the rollout, only 0.11 percent of AI Overview responses lacked source citations. After Gemini 3, that number jumped above 10 percent. The remaining citations increasingly favor Google's own platforms, with YouTube appearing as a top source even in medical queries. The data suggests Google is collapsing the citation graph toward entities it already trusts rather than crawling for the best answer to a specific query, threatening small publishers who relied on long-tail informational content for traffic.
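
The 46 percent figure is plain set arithmetic over before/after crawl snapshots; a sketch with made-up domains to show the computation:

```python
def domain_churn(before, after):
    """Fraction of previously cited domains that no longer appear."""
    before, after = set(before), set(after)
    dropped = before - after
    return len(dropped) / len(before)

# Illustrative snapshots of domains cited in AI Overviews:
pre_gemini3 = {"nih.gov", "mayoclinic.org", "smallblog.example",
               "howtogeek.com", "recipes.example", "youtube.com"}
post_gemini3 = {"nih.gov", "youtube.com", "google.com"}

print(round(domain_churn(pre_gemini3, post_gemini3), 2))  # 0.67 here
```

Run over 100,000 keywords, the same subtraction is what yields the reported 46 percent; the uncited-response rate is the analogous count of answers whose citation set is empty.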

Even if it's a bug, it perfectly shows where things are headed. Google wants to be the endpoint. If this becomes the norm for medicine or finance, forget about trust in search.

— Educational-Crab-825 · 9 pts

This will kill the internet. If content creators stop getting clicks, they stop writing. Where will Gemini 4 get its training data from? It's a road to nowhere.

— JosephineAllard_SEO · 5 pts
Read full thread ↗

Low workload, stable job -- how would you optimize for income growth?

28 points · 13 comments

A digital marketer with a full-time job but only about 8 hours of real work per week asks how to use the remaining bandwidth to increase income. The thread delivered practical advice rather than platitudes. The top response, from someone in the same position a year earlier, treated the free time as paid R&D and focused on learning high-income skills rather than starting a second business. Others recommended freelancing in the same domain to compound existing expertise, building a personal brand through content, or using the time for certifications that unlock higher-paying roles.

Was in this exact spot last year. I realized it was basically paid R&D time. I didn't want a second job -- I wanted a better first one.

— Negative_Onion_9197 · 4 pts

If you have the mental bandwidth, invest in learning a high-income skill. Starting a business requires time and effort and there will be risk.

— ranveerneemkar · 3 pts
Read full thread ↗

What's the ONE most important marketing skill in the AI age?

15 points · 34 comments

A developer new to marketing asks what single skill matters most when AI can handle content generation, ad optimization, and testing. The thread converged on a clear answer: understanding real customer pain and translating it into clear messaging. AI can generate content and run ads, but it cannot replace knowing why people care or what makes them trust you. Strategy and critical thinking were repeatedly cited as the skills that compound regardless of what tools change. One commenter emphasized distribution over everything else, noting that most developers assume a good product will be found organically.

There's no single one important thing. You need to understand what you plan to promote. Content marketing -- how to craft messaging that converts without sounding like an ad.

— Life-Tailor7312 · 5 pts

Understanding real customer pain and turning it into clear messaging. AI can generate content and run ads, but it can't replace knowing why people care.

— GrowthInSilence · 1 pt
Read full thread ↗

Editorial illustration for r/Philosophy

MIT's free Paradox & Infinity course starts February 17

88 points · 3 comments

The 2026 run of MIT's free online course Paradox and Infinity begins today, covering mathematical and philosophical puzzles including Zeno's paradoxes, Gödel's incompleteness theorems, computability, and set-theoretic paradoxes. The course is offered through MITx Online and requires no prerequisites beyond curiosity. A commenter also flagged that MIT's Introduction to Philosophy: God, Knowledge, and Consciousness course launched the same day. The post was the highest-scoring philosophy submission of the day, reflecting the community's appetite for rigorous, accessible education at the intersection of math and philosophy.
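
As a taste of the course material, Zeno's dichotomy paradox dissolves into a convergent geometric series; a quick numerical check:

```python
# Zeno's dichotomy: walk half the distance, then half the remainder, ...
# The infinite series 1/2 + 1/4 + 1/8 + ... sums to exactly 1.
partial = sum(1 / 2**k for k in range(1, 51))  # first 50 terms
print(partial)  # within 2**-50 of 1.0 -- the whole distance
```

Infinitely many steps, a finite total: the resolution Zeno lacked is precisely the notion of a convergent series.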

Introduction to Philosophy: God, Knowledge, and Consciousness also starts today.

— orngchckn · 11 pts
Read full thread ↗

Nietzsche's philosophy of mind: beyond truth and falsity in thought

5 points · 5 comments

A video essay argues that Nietzsche marks a decisive rupture in the history of thought by exposing how the demand for unconditional truth operates as an ascetic impulse to arrest becoming -- a disguised will toward nothingness. The thesis draws on Schopenhauer, Deleuze, and Nietzsche's own critique of the will-to-truth as a morally and physiologically invested discipline rather than a neutral power. The comments produced a substantive exchange: one critic invoked Rorty's claim that philosophy-as-intellection began with Kant, and questioned whether the Greeks saw thought as derived from language. The poster pushed back, noting that noesis literally means thought and that the critic was arguing with strawmen.

Rubbish. Kant taught us philosophy was a specifically intellectual endeavor -- something not even the Greeks believed. For the Greeks it was noesis developed through the dialectic.

— happiness7734 · 0 pts

You know what noesis means, right? You are arguing with strawmen and taking the opening line out of context.

— gaymossadist · 1 pt
Read full thread ↗

Freeflow

Free, open-source alternative to Wispr Flow and Superwhisper for voice-to-text

115 upvotes

Myrlin Workbook

Browser-based Claude Code workspace manager with session persistence and cost tracking

14 upvotes

Plottie

AI tool that creates publication-ready scientific figures from text descriptions

58 upvotes

Two Minute Papers

NVIDIA's Insane AI Found The Math Of Reality

NVIDIA researchers developed a physics-informed neural network that can discover underlying mathematical equations governing real-world physical systems, essentially reverse-engineering reality's source code from observational data alone.