
AI’s Bottleneck Is People, Policy, and Trust (May 13, 2026)

May 13, 2026 · 4m 50s

The models are ready — the rest of us, apparently, are not. This is Tech Podcast Podcast — today we’ve got a VC pitch for Little Tech, a founder-investor conversation that actually seems worth hearing, and a security episode where deepfakes land a real job. That deepfake hiring story alone should make every remote-first company take a hard look at onboarding. All right, let’s get into it. First up, Matt Perault, writing in the a16z AI Policy Brief:

They also discuss why so many startups are effectively absent from the policy process. Founders with no lawyers, no policy staff, and no extra time rarely have the capacity to fly to Washington or Sacramento, hire lobbyists, or otherwise make their voices heard. And the cumulative weight of compliance obligations falls hardest on the small teams least equipped to manage them.

This one’s from the a16z AI Policy Brief — Matt Perault and Andrew Chen talking about what a16z calls Little Tech: the two- and three-person teams policymakers basically never hear from, building on short runways before they even have lawyers. The policy angle is the part that actually matters here. The compliance-burden argument isn’t new, but saying, basically, these founders can’t fly to Sacramento because they’re busy surviving — that’s at least a real description of the problem. Chen runs a16z’s speedrun program, so he’s literally talking to founders at day one. The question is whether that turns into real operational detail, or just a VC arguing for less regulation on his portfolio — and that’s the part to listen for. Yeah, ‘policy is bad for startups’ is a very easy thing to say when your fund is full of startups. I’d want specifics — which rules, which costs, what’s actually hurting them — or whether it just stays at vibes-level advocacy. Here’s Slice pod:

Most of the AI conversation in venture is about tools. Who’s running Claude Code, who built the better agent stack, who’s moving faster. Gadi and Daniel are making a different argument. The tools are already everywhere, that’s not the constraint anymore. The constraint is formation.

Slice pod did an episode with Daniel Ha and Gadi Borovich from Antigravity Capital. Their argument is that AI tools are already commoditized, so the real constraint is formation — people who’ve built the instincts to use them well because they came up in environments that forced it. I’ll take that over another ‘we use Claude Code internally’ victory lap. What’s interesting is Gadi tying the same thesis to the Puentes program — finding engineers in Latin America who have the ability but never had the infrastructure. That’s a real position, not just a vibe. Daniel Ha built Disqus to four million websites and two billion monthly users before most people knew what product-market fit meant. Gadi literally tracked the Wefunder office across three addresses until somebody let him in. These are not people who learned founder grit at a summit. The question I’d want answered is whether they get into the operating specifics — how they actually run the fund differently — or whether it stays at thesis altitude. Formation as a concept is interesting. Formation as a repeatable process is the reason I’d queue it. Last one’s from Smashing Security:

Meta’s smart glasses promise privacy “designed for you” – but everything they record was being beamed off to workers in Nairobi to label by hand. When those workers blew the whistle, Meta sacked all 1,108 of them.

Smashing Security episode 466 covers three things: Meta’s smart glasses secretly routing footage to human labelers in Nairobi, a hyped Linux bug called Copy Fail, and a security researcher who got his deepfake clone hired through a video interview. The Meta story is the headline. ‘Privacy designed for you’ — and the fine print is a thousand contractors in Nairobi drawing boxes around your coffee cup until they complained and got fired. All 1,108 of them. But the deepfake hiring segment is the one I’d actually queue — Jake Moore walking through how a synthetic video clone passed a real job interview is a concrete demonstration, not just a warning. Copy Fail gets a logo and a dedicated website, and I immediately trust it less. That’s a PR campaign wearing a CVE costume. You’ll find links to everything we covered today in the show notes, so if a story stuck with you, that’s the place to dig in a little further.

That’s Tech Podcast Podcast for this Wednesday, May thirteenth. Thanks for listening. This is a Lantern Podcast.