The AI Industry Is Living in a Bubble and the NYC Subway Proved It
March 3, 2026
Last fall, a startup called Friend.com spent $1 million plastering more than 11,000 ads across New York City's subway system. The ads were stark, minimalist, and promised things like "I'll never bail on our dinner plans" and "I'll binge the entire series with you." Within days, New Yorkers had covered them in graffiti. "AI trash." "Surveillance capitalism." "Computers don't want to be your friend... they want your data." One person rewrote the definition of "friend" on the ad itself, adding that a friend is also "a living being."
The CEO, 22-year-old Avi Schiffmann, said the backlash was the plan. The white space in the ads was intentional. He wanted people to fill it in. "The audience completes the work," he said. "Capitalism is the greatest artistic medium."
Okay.
But there's something more interesting happening here than a startup being provocative on purpose.
The Friend campaign is the clearest picture I've seen of the gap between how people inside the AI world see this technology and how everyone else does. Schiffmann even said it directly: "People in New York hate AI probably more than anywhere else in the country." He knew that going in and spent $1 million on it anyway, because from inside the tech bubble, hostility reads as interest. Controversy is a growth strategy. The vandalism is content.
That logic makes sense in San Francisco. It absolutely does not make sense everywhere else.
The Numbers Are Not What the Feed Tells You
Spend enough time on X and you'd think everyone is racing to build agents, running multi-step prompts, and genuinely debating whether the latest model is smarter than a human. The AI discourse is relentless and mostly optimistic. It's easy to mistake that for how the general public feels.
It isn't.
A Pew Research survey from mid-2025 found that 50% of Americans say they're more concerned than excited about AI in daily life, up from 37% in 2021. As the technology has gotten better and more visible, public opinion has gotten worse. YouGov tracked this in real time: the share of Americans who believe AI will negatively affect society went from 34% in December 2024 to 47% by June 2025. A 13-point swing in six months, during the exact same period the AI industry was celebrating every new benchmark.
The expert-public gap is striking when you actually look at it. Pew asked both groups how they feel about AI's growing role in daily life. Among AI experts: 47% say they're more excited than concerned. Among the general public: 11%.
Same technology. Same moment in time. Completely different reality depending on whether you work in it.
The public's biggest fears aren't abstract. They're worried about losing human connection (57%), being replaced at work (56%), and getting fed misinformation they can't identify. A Gallup-Bentley survey found 77% of Americans distrust both businesses and government to use AI responsibly. Not mildly skeptical. Deeply distrustful. A YouGov poll from December 2025 found that only 18% of Americans would trust an AI system to make a decision or take an action on their behalf, even "somewhat."
These are not people who haven't caught up yet. These are people who have made a judgment.
SF Has Its Own Gravity
I work at Terminal X, where we build AI tools for institutional investors. My whole professional world is AI. The people I interact with most are excited, technically deep, and genuinely believe this technology is going to change how decisions get made at scale. I think that too.
But I also talk to people outside that world. Family, friends, people I meet at gatherings who don't spend their days thinking about context windows and agent frameworks. The picture is different. There's confusion, skepticism, and a specific kind of suspicion that the tech industry consistently underestimates: the feeling that AI is being done to people, not for them.
San Francisco doesn't experience this. The city has a self-reinforcing gravity around technology optimism that makes it genuinely hard to understand how most of the country processes this stuff. Friend could have put those same ads in SF and gotten a completely different reaction. Not because the product is better there, but because the audience already bought the premise that AI is inevitable and probably good. That's not most places. That's not most people.
The Friend campaign was designed by someone thinking in SF logic. It got answered by people thinking in NYC logic. Both reactions are real. Only one of them represents the majority.
What "I Don't Use AI" Actually Means
Here's the thing the industry keeps missing: most people who say they dislike AI aren't making a technical argument. They're making a values argument.
When someone graffitis "stop profiting off of loneliness" onto a subway ad, they're not saying the AI doesn't work. They're saying something about what they think human connection is for. When someone writes "AI is not your friend" on the Friend poster, they're not confused about how large language models function. They're pushing back on a framing they find insulting.
The 50% of people in the Pew data who say AI will worsen their ability to form meaningful relationships aren't being irrational. That's a reasonable read of where some of this is heading.
Where the skepticism goes too far is in treating all AI as one thing. A wearable pendant designed to simulate friendship is genuinely different from a tool that helps an analyst read 800 pages of SEC filings in 20 minutes. Both are "AI." Only one of them is threatening to replace something humans care about keeping.
The industry hasn't done a good job making that distinction. Every AI product gets lumped together in the public mind, which means every Character.AI lawsuit, every deepfake scandal, and every ambient-listening pendant that costs $129 bleeds into the perception of the whole category.
The Gap Is Going to Cost Something
I don't think the average person is going to warm up to AI because the benchmarks keep improving. The benchmarks are irrelevant to how someone feels about the technology showing up in their doctor's office, their kid's school, or their subway car promising to be their best friend.
What would actually move things is AI doing work people can see the value of, in domains they care about, without asking them to give up something they value in the process. Useful without being creepy. Powerful without being threatening. That bar is higher than the industry usually acts like it is.
The Friend campaign is going to be remembered as a moment that captured the 2025 AI vibe perfectly. Not the model releases or the agent demos or the benchmark news. A $1 million subway campaign that got vandalized within 48 hours by people who found the entire premise offensive.
The technology is real. The capability is real. The resistance to it is also real, and it's not going away because the models keep getting better.
The industry is very good at talking to itself. Getting better at that is probably not the most important thing right now.
-- Gabriel