The Pyramid is Hollowing from the Wrong Side
May 15, 2026
Sixty years ago, a philosopher named Michael Polanyi wrote that we can know more than we can tell. Real expertise sits below the level of language. The craftsman's feel for the material. The artist's vision for a piece. The musician's taste for a melody. You can't fully say it out loud, which means you can't write it down and hand it to someone. The only way anyone has ever transmitted it is by being near someone who has it. You stand next to them, do the work badly, do it less badly, and years in, you have a version of it yourself.
Economists have been making the same point for sixty years. Gary Becker, writing in the sixties, argued that the work itself is what closes the gap between a junior and a senior, more than school. School gets you in the door. It's the work that makes you good at it.
Stitch the two together and you get the dynamic almost nobody is naming when they talk about AI and jobs. The people running companies today got there by doing thousands of hours of work they would now hand to an AI in two minutes. That work built the judgment. And the judgment is what they sell. By eliminating the work, companies are eliminating the thing that built their own judgment in the first place.
There's a precedent for this, at smaller scale, in a field nobody is looking at right now. In 2019, Matt Beane at UCSB published a two-year ethnographic study. He embedded himself in surgical residency programs as robotic surgery was being adopted. Traditional open surgery needed four hands. The robot let a single expert operate alone. And so the trainee became optional.
Beane found that residents on robotic procedures got ten to twenty times less practice than residents on traditional ones. They graduated licensed to operate the robot but missing the fundamental surgical skills their predecessors had built without realizing it. Inside what looked like a productivity gain, the training had collapsed.
That study ran years before AI hit knowledge work at scale, long enough to leave a paper trail. The theory turned out to be right: residents who could operate the new tools but couldn't operate without them.
And the current data lines up. Stanford found a 13 percent relative decline in entry-level employment in AI-exposed occupations since late 2022, with software developers down nearly 20 percent on their own. Workers over 35 in the same fields are stable or growing. Firms are cutting the apprenticeship layer. The class above it is fine, for now.
I build AI tools for institutional investors. On the platform we run, the queries that have grown fastest in the last twelve months are the ones analysts could have answered themselves but didn't want to. The work felt like grunt work, and now there's an out. AI is taking exactly the work that used to build the analyst. Synthesis, drafting, the four hours of cross-referencing footnotes against management commentary that produced the conviction to defend a thesis. Thirty-second prompts now.
An engineer I work with put it more bluntly. If you aren't debugging and designing architecture, how do you ever learn to do those things? Claude fixes the bug before the junior has sat in the confusion of not understanding why it was a bug. That thinking, that debugging, was the learning process, however tedious it might have been.
Banks have been saying it on earnings calls too. JPMorgan, Goldman, Bank of America, Citi, all signaling lower entry-level headcount, all framing it as efficiency.
And so, the pyramid isn't hollowing from the bottom. It's hollowing from the wrong side.
The cuts are at the base. The consequence is at the top, ten years out. The people who would run companies in 2036 are the analysts who don't get hired in 2026, or the ones who do but never do the work that would have made them capable of running anything. Firms cutting junior headcount think they're saving money. They're mortgaging the next generation of seniors. The problem surfaces in 2032, when the head of the desk retires and the bench is lighter than it should be.
There are two complications to this theory that are also true.
The companies that don't deploy AI lose on cost and output and die. There's no version of this where companies don't embrace a tool that immediately lowers costs. So companies really have no choice, right?
But this might also be temporary. New forms of apprenticeship may emerge that we can't see yet. Beane has written about what he calls inverted apprenticeship, where the senior learns from the junior who handles the new tools better. That might be the next form, the way new forms have followed past disruptions. Or it might not.
Polanyi spent the last decade of his life writing about why scientific progress couldn't be planned centrally. His argument was that real knowledge accumulates through a million small acts of judgment by people doing the work, none of whom can fully explain what they know. He thought this was the strongest argument against any system that tried to optimize the work away. He was writing about the Soviet Union. We are doing it to ourselves.
-- Gabriel