
AI’s Stack Gets Real: Chips, Trust, and Agent Security (April 30, 2026)

April 30, 2026 · 2m 44s

AI’s stack gets very real today: chips, trust, and the suddenly urgent question of how you keep agents secure once they’re actually doing work.

This is Tech Podcast Podcast. We’re taking the best new tech-podcast conversations and turning them into a quick guide to what matters.

Let’s get into it.

First up: how the biggest AI models are actually built, trained, and served.

From Dwarkesh Patel:

The reason I think it's an important topic is because once you understand how training and inference work in a cluster, a lot of things—about why AI is the way it is, why AI architectures are the way they are, why API prices are the way they are, and fundamentally why AI progress is the way it is—start making sense. You need to understand the details to get there, and you need a blackboard to understand the details.

Right, this is the machinery behind the magic trick. Not just “bigger model, better answer” — it’s chips, clusters, latency, and pricing. If you’ve ever wondered why fast mode costs more, or why progress seems to arrive in weird jumps, this is the under-the-hood stuff.

Next, from Sophie Buonassisi:

From nearly $850M in operating losses to over $760M in operating income, Okta’s transformation is one of the most significant turnaround stories in enterprise SaaS. In this episode, Jon pulls back the curtain on exactly how it happened.

That is a huge swing — and the timing is the interesting part. Okta isn’t just selling “identity” here. It’s trying to own the trust layer for AI agents before every enterprise wakes up and realizes the bots doing the work also need to be secured.

And from Nicholas Thompson:

OpenAI’s Sam Altman sits for an interview with Nicholas Thompson, CEO of The Atlantic, to discuss AI’s trustworthiness, its dangers, and its impact on young people. Altman also discusses his company’s pledge to “stop competing and start assisting” rival projects that approach AGI, and why he thinks we’re not there yet.

That “stop competing and start assisting” line is carrying a lot. If OpenAI wants people to trust that promise, the test is whether it still holds when another lab looks like it’s getting close to the finish line.

Links to everything we covered today are in the show notes, along with a few useful source reads if you want to go deeper. Tap through to whatever caught your ear.

That’s Tech Podcast Podcast for Thursday, April 30th. This is a Lantern Podcast.