AI moved fast this week — investment moats, AI-generated music, and somebody declaring software basically worthless. Welcome to Tech Podcast Podcast — we listened so you can decide what's worth your commute. Today we've got Patrick O'Shaughnessy going deep on capital allocation, a16z being a16z, Suno's founder making the case that everybody's a musician now, and somebody torching the whole software industry in one sentence. That last one — I want to know if there's an actual argument under the headline, or just a hot take with good SEO. My money's on the hot take, but let's see. From SignalCast:
Compute Allocation Framework: Anthropic runs daily meetings to allocate compute across three buckets: model training, internal acceleration, and customer inference. A non-negotiable floor protects model development spend even when customer demand spikes. The company uses three chip platforms—AWS Trainium, Google TPUs, and NVIDIA GPUs—fungibly, switching workloads between morning inference runs and evening training jobs on the same hardware to maximize utilization.
Patrick O'Shaughnessy sat down with Anthropic's Krishna Rao, and this one actually has operating detail: how Anthropic manages compute allocation at scale, territory most guests won't touch. Daily meetings to split compute across training, internal acceleration, and customer inference, with a hard floor protecting model dev spend no matter what — that's the kind of thing that usually gets flattened into 'we have a rigorous process.' Glad somebody let it breathe. The cone-of-uncertainty planning framing is worth flagging too — they're modeling a range of exponential growth scenarios instead of point estimates, because small changes in weekly growth rates compound fast over 12 months. That's a real lesson for any infrastructure-heavy startup, not just AI labs. Run rate going from nine billion to thirty billion in a single quarter, and net dollar retention above 500% annualized — I'll be honest, those numbers make almost everything else in the episode feel like setup. Queue this one if you want the unsexy compute ops story; just know the revenue figure is going to be the clip everybody shares. Here's Turner Caldwell at SignalCast:
US Minerals Gap: America sits roughly 50 years behind China in critical minerals processing capacity, and permitting reform alone cannot close that gap. The real bottleneck is construction and ramp-up speed after licensing. Mariana Minerals targets this phase specifically, using reinforcement learning to autonomously control refinery operations and eliminate dependence on scarce specialized labor.
This is a16z's Turner Caldwell on SignalCast — and the throughline across Mariana Minerals and Heron Power is that America's physical infrastructure is the actual AI bottleneck, not the models. Fifty years behind China in minerals processing is the kind of number that should hit harder than it does. And the point about permitting being the wrong obsession — the real drag is construction speed after you get the license — that's a more honest frame than you usually hear from a VC. The Mariana pitch is specifically that reinforcement learning can run a refinery autonomously — thousands of daily adjustments, heterogeneous feedstock — because there literally aren't enough humans who know how to do it. That's a labor scarcity argument, not a cost-cutting argument, which is a different and more defensible thing to say. Heron Power replacing century-old mechanical transformers with silicon carbide and software is genuinely interesting hardware, but I want to know whether Caldwell pushed back on timeline at all, or if this was just a clean product demo with a16z talking points laid over it. Here's Sonya Huang at Sequoia Capital:
Most music platforms assume you’re a listener. On Suno, 90% of daily users make something. Founder and CEO Mikey Shulman explains why that flips the model: the act of creating IS the entertainment, with closer parallels to gaming and Claude Code than to Spotify.
Suno CEO Mikey Shulman on the Sequoia podcast, and the number that anchors the whole conversation is this: ninety percent of daily users are making something, not just listening. His frame is that creation is the entertainment — he's comparing it to gaming, not Spotify. The genuinely interesting part is the technical bet — they deliberately ignored music theory and modeled raw audio at the sample level. Shulman's point is that if you encode the twelve-tone system, you've already capped the output. That's the kind of architectural decision that's easy to miss in a founder podcast. He also flags that music doesn't scale the way LLMs do, which cuts against the usual 'more compute fixes everything' narrative. Whether Sonya Huang pushed hard on what that actually means for the business — that's what I'd want to know coming out of this one. Sequoia's house podcast tends to let founders run, so I'd temper expectations on follow-up sharpness. But the raw technical disclosure here is real — autoregression over diffusion specifically because they wanted full songs, not crisp clips. That's a concrete trade-off, not a vibe. Here's Tyler Hogge on GTMnow:
Tyler Hogge helped take Divvy from zero to a $2.5B acquisition by Bill.com. As former Partner at Pelion Ventures, he argues that charging for software is dead, per-seat pricing is collapsing, and the next decade of venture-scale companies will be built on outcomes, not subscriptions.
GTMnow brought on Tyler Hogge — who helped build Divvy to a two-and-a-half-billion-dollar exit — and the headline he's pitching is that software is basically worth zero now. The moat moved, and product isn't it anymore. Love the provocative frame, but that's also a very convenient thing for a VC to say right after they've already made their money on the product exit. 'The old thing is dead, here's what I'm betting on next' — classic positioning. Fair. The question is whether Hogge actually gets into the operating specifics — what the new moat looks like, how you build for it — or whether it stays at the level of the bumper sticker. GTMnow usually does okay on the go-to-market mechanics, so there's a chance. But if the whole episode is 'distribution is the moat now' without a single concrete example, I'm out by minute fifteen.

Got a tip, correction, or a tech story you think we should be watching? Send it our way at techpodcastpodcast at lantern podcasts dot com. We really do read what you send.
You'll find links to every story we covered today in the show notes, so if something deserves a closer look, that's the place to pick it up.
That's Tech Podcast Podcast for today. Thanks for listening, and we'll be back with you next time. This is a Lantern Podcast.