Don't You Forget About Us: Gen X, AI, and 40 Years of Killing Jobs with Technology

“We were the ones doing it.”

Disclosure: This reflects my personal experience and interpretation of publicly available information. It represents my views alone—not any employer or organization—and is not professional advice.

TL;DR: AI didn't invent job displacement. The IT services industry already did it—at scale, across decades—through standardization, consolidation, and outsourcing. I know because I was one of the people doing it. What's different now isn't the underlying economics or the playbook. It's the uniquely visceral reaction AI is provoking—about jobs, about trust, about reality itself—at a moment when the environment couldn't be worse for it. The AI revolution isn't unprecedented. The reaction to it is.


Everyone is acting like AI just kicked off a brand new era of job displacement.

It didn't.

We've been here before. Most people just didn't see it happening.

I did.

You know that moment in The Lord of the Rings where Elrond says, "I was there, three thousand years ago"? That's basically me right now, except instead of watching Isildur refuse to destroy the One Ring, I was watching enterprise IT dismantle entire departments while everyone outside the room thought "digital transformation" meant getting a new website.

I was there. I did the work. Across multiple companies, multiple decades, and more client organizations than I can count. And what I'm watching unfold right now with AI is familiar in almost every structural respect.

The new part is how people are reacting to it.

The Playbook

When I was at IBM, a core part of my job was identifying roles inside customer organizations that could be automated, consolidated, or eliminated through better systems and infrastructure. That wasn't theoretical. That was operational work, tied directly to real headcount reductions. I sat in rooms with decision-makers, mapped their workflows, and showed them where the fat was. Where three people were doing a job that a properly configured system could reduce to one. Where an entire department existed to manage a process that shouldn't have existed in the first place.

That was the job. And I was good at it.

But here's the thing people forget: while IBM was quietly doing this work behind the scenes, the company was also publicly demonstrating AI capabilities on national television. And nobody connected the dots.

In 1997, Deep Blue beat Garry Kasparov. It was treated as a novelty—man versus machine, great television, interesting thought experiment. People watched, they were impressed, and they moved on.

In 2011, Watson went on Jeopardy! and destroyed Ken Jennings and Brad Rutter. That wasn't chess. Chess is brute-force computation. Watson was processing natural language, parsing ambiguity, weighing context, and making judgment calls in real time against the two best players in the game's history. That's not a calculation. That's comprehension.

It was scary impressive. Everyone who watched it could feel it.

But the reaction wasn't panic. It was fascination—followed by dismissal. Because everyone also realized Watson filled an entire room, cost a fortune to operate, and wouldn't be viable for broad adoption for probably twenty years.

Look what time it is.

Watson was on Jeopardy! in February of 2011. It's been exactly fifteen years. The machines don't fill rooms anymore. They fit in your pocket. They're on your laptop. They're embedded in the tools you use every day. IBM put the future on prime-time television, and the world treated the warning shot as a party trick.

And here's a detail that almost nobody remembers: Watson ran on a cluster of ninety IBM Power 750 servers running SUSE Linux Enterprise Server 11, with fifteen terabytes of RAM. Ninety servers. An entire room of hardware running an open source operating system that, in 2011, most of the viewing audience had never heard of. Linux was something only the nerdiest of nerds knew or cared about—the stuff of supercomputers, web servers, and university labs. The AI that beat Ken Jennings on national television was powered by software the viewers didn't know existed, running on hardware they couldn't have fit in their houses. Today, Linux and open source are the backbone of virtually every AI training cluster, every hyperscale data center, every cloud platform that matters. Another piece of the foundation hiding in plain sight.

The displacement work wasn't unique to IBM. I went on to Microsoft, where I witnessed an entirely different phase of the same disruption—one I'll come back to. Then to Unisys. Then to Dimension Data, which became NTT. Different companies, different decades, different client bases. The playbook was the same everywhere.

Because it wasn't a proprietary methodology. It was the thing. Every major systems integrator was running the same motion, because that's what the economics demanded. They competed fiercely for the same contracts, but they were all selling the same transformation.

There was a very clear sequence.

First, you standardized processes—walked into an organization, removed the variability, tightened workflows, made everything predictable enough to systematize. The selling language was "best practices" and "operational maturity." The operational reality was: we're making your work repeatable enough that it no longer requires the people currently doing it.

Then you consolidated infrastructure. Fewer systems. Fewer platforms. Fewer environments to manage. This was usually the phase where clients got excited, because the cost savings on the technology side were immediate and visible. What was less visible—at least at first—was that consolidation didn't just eliminate servers. It eliminated the teams that supported them.

Then you reduced headcount. Sometimes explicitly, more often framed as "natural attrition" or "organizational restructuring." But the outcome was the same. People left. Positions closed. Departments shrank. The systems we built were designed to require fewer humans to operate.

And finally, you outsourced what remained. Once the work had been standardized and cleaned up—made repeatable and measurable—it could be moved. Offshore operations centers. Lower-cost labor markets. The work still got done. It just got done somewhere else, by someone cheaper.

Standardize. Consolidate. Reduce. Outsource.

It played out across industries and decades. And it eliminated jobs. A lot of them.

Where It Happened

Large banks were the bread and butter. Think about what banking operations looked like before IT transformation gutted and rebuilt them. Back-office processing was enormous—loan origination, check clearing, reconciliation, regulatory reporting, compliance documentation. Entire floors of people doing structured, repeatable knowledge work. We'd standardize their processes onto unified platforms, consolidate from dozens of legacy systems down to a handful, and the math did the rest. You didn't even have to make the case for headcount reduction. Once the new system was live, the old roles didn't have enough work to justify their existence. The client could see it on a spreadsheet before we finished the implementation.

The numbers bear this out. U.S. bank branches peaked at over 90,000 in 2009. By 2022, more than 20,000 had closed—a decline of roughly 23%. And behind each closed branch were the back-office consolidations that preceded it: the processing centers, the compliance teams, the operations staff that got absorbed into systems long before the lights went out in the lobby.

Insurance companies were just as ripe, and more instructive. Claims processing, underwriting, risk assessment—those are judgment-based roles. Adjusters evaluated circumstances, interpreted policy language, weighed competing variables. Underwriters made decisions with real financial consequences. This wasn't data entry. This was thinking work.

And it still got systematized.

We didn't eliminate judgment entirely. But we narrowed the scope of it. We turned wide-open assessments into guided workflows. We codified the decision trees that experienced adjusters carried in their heads and made it possible for less experienced—and less expensive—people to handle the volume. The seasoned experts didn't all get fired. But you needed far fewer of them. And the ones who remained were managing systems as much as they were making decisions.
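
To make "codified the decision trees" concrete, here's a deliberately toy sketch. Every rule, threshold, and name below is invented for illustration and drawn from no real client system. The point is the shape of the thing: wide-open judgment narrowed into a guided workflow, with human expertise reserved for the edges.

```python
# Hypothetical illustration only: an adjuster's judgment flattened into
# a guided workflow. All rules and thresholds are invented.

def route_claim(amount: float, policy_active: bool,
                prior_claims_12mo: int, injury_involved: bool) -> str:
    """Route an insurance claim the way a codified decision tree would."""
    if not policy_active:
        return "deny: policy lapsed"
    if injury_involved:
        return "escalate: senior adjuster"        # judgment still required
    if amount < 2_500 and prior_claims_12mo == 0:
        return "auto-approve: fast track"         # no human in the loop
    if prior_claims_12mo >= 3:
        return "escalate: fraud screen"
    return "standard review: junior adjuster"     # scripted, guided assessment
```

Multiply that by a few hundred rules, wire it into a claims platform, and a department of seasoned experts becomes a much smaller team of reviewers handling the escalations.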

That's worth sitting with. The anxiety people express about AI right now—that it's going to automate thinking, not just tasks—is describing something that already happened in industries like insurance twenty-five years ago. We just didn't call it that. We called it "process improvement" and gave each other performance reviews.

Then there was the biggest retailer in the country, which will remain nameless, though everyone reading this knows exactly who I'm talking about. That company didn't just use the playbook—it became the playbook. Their entire competitive advantage was logistics optimization, supply chain automation, and workforce efficiency at a scale nobody had seen. The public narrative has always focused on wages and working conditions. What rarely gets discussed is the IT-driven consolidation behind the storefront—the systems-level decisions that determined staffing levels in distribution, inventory, and procurement long before anyone on a shop floor felt the impact.

And it wasn't just the giants. Even SMBs bought into the cycle. The displacement engine filtered all the way down.

Across all of these clients, the thing that strikes me in retrospect is how rarely anyone called it what it was. The conversations never started with "we want to eliminate jobs." They started with "we want efficiency." They wanted "modernization." The headcount reduction was the quiet outcome of a process framed, from the very first meeting, in language that made it sound like progress rather than loss.

That language was insulation. And it worked. I know because my generation helped write it. We were the ones in the room drafting the slide decks that said "workforce optimization" when everyone knew it meant layoffs. Gen X didn't just witness the displacement era. We were its middle management.

The Government

I'm giving this its own section because it deserves one.

The popular assumption is that government is where jobs don't disappear. Bureaucracy is permanent. Nobody gets fired.

That's not what happened.

Federal IT modernization has been restructuring and reducing roles for decades. The contractors doing that work—IBM, Unisys, the big integrators—were running the same playbook. The only difference was procurement overhead and compliance requirements. The outcome was the same: fewer people, more systems, lower cost.

But it went further than that.

At Unisys, I built a system for a large government agency that automated fraud detection. That work had been done by human reviewers—people who analyzed data, flagged anomalies, and made judgment calls about what warranted investigation. The system I built replaced them. And it didn't just match what the humans were doing. In many ways, it exceeded them. Automated systems can process volumes of data at a speed and scale no team of human reviewers could touch. They can cross-reference, pattern-match, and flag at a throughput that simply isn't possible with manual review.
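
For illustration only (this is nothing like the actual system, which isn't mine to reproduce), here's the general shape of automated flagging: statistical baselines plus hard rules, applied at a volume no review team could touch. Every field, name, and threshold below is invented.

```python
# Toy sketch of automated anomaly flagging. Invented fields and thresholds;
# real systems add cross-referencing, pattern models, and far more signals.

from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    account_id: str
    amount: float
    daily_count: int        # transactions on this account today
    new_payee: bool         # payee never seen on this account before

def flag_anomalies(history: list[float], candidates: list[Transaction],
                   z_threshold: float = 3.0) -> list[tuple[Transaction, list[str]]]:
    """Return transactions that deviate from the account's historical baseline."""
    mu, sigma = mean(history), stdev(history)
    flagged = []
    for tx in candidates:
        reasons = []
        # Statistical outlier: amount far outside the normal range.
        if sigma > 0 and abs(tx.amount - mu) / sigma > z_threshold:
            reasons.append("amount is a statistical outlier")
        # Hard rules encoding what human reviewers used to eyeball.
        if tx.daily_count > 20:
            reasons.append("unusual daily volume")
        if tx.new_payee and tx.amount > 10 * mu:
            reasons.append("large payment to an unseen payee")
        if reasons:
            flagged.append((tx, reasons))
    return flagged
```

Crude as this toy is, it never tires and never skips a record, which was the whole economic argument.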

If most people understood how much of their personal data was being ingested, analyzed, and acted upon by automated systems—in cases where human beings used to be the ones doing that work—they would freak out. That's not speculation. That's just the reality of what these systems were designed to do. The humans were removed from the loop, and the systems took over not just the workload but the decision-making authority that came with it.

And that was the part the public could, in theory, have learned about.

There were other systems—classified systems—doing automated data analysis and collection at a scale that would have given conspiracy theorists pause. I was close enough to that world to know that what eventually became public was a fraction of what was there.

When Edward Snowden blew the lid off mass surveillance programs, the public reacted with shock. It was treated as a revelation—a sudden exposure of capabilities nobody had imagined.

But what Snowden exposed was really only the tip of the iceberg of an apparatus that had been under construction since the 1970s. The automated collection, analysis, and flagging of data at scale wasn't something the intelligence community invented in the post-9/11 era. It was something they'd been building, iterating on, and expanding for over thirty years before Snowden ever made headlines.

The people inside that world weren't shocked. They already knew.

And there's something pointed about the current moment in government specifically. The conversation around federal workforce reduction is happening out loud now, as a public performance, in ways that previous rounds of consolidation never were. The underlying impulse—use technology to reduce headcount—isn't new at all. But doing it on camera is.

The Waves

The displacement I've been describing didn't happen all at once. It came in waves, each one building on the last, each one a little more visible than the one before.

The outsourcing wave of the late 1990s and 2000s was the first to break the surface. There were call center jokes, political rhetoric about "shipping jobs overseas," election talking points. For the first time, the public could see the displacement happening. And the scale was staggering—during the 2000s, U.S. multinationals cut roughly 2.9 million domestic jobs while adding 2.4 million jobs overseas. IT-producing industries alone shed 600,000 jobs between 2000 and 2002. Manufacturing lost 5.7 million jobs over the decade.

But the backlash was contained. It never reached the emotional register of what we're seeing with AI. The reason is straightforward: outsourcing still had a human face. The work moved, but it moved to people. Someone in Manila or Bangalore picked up the phone. Someone in Pune processed the claim. The labor was displaced geographically, but it was still recognizably human labor. You could be angry about the economics, but you weren't confronting the possibility that the human had been removed entirely.

There's a psychological floor under outsourcing that doesn't exist with AI. When your job gets sent overseas, it's a resource allocation problem. When a system starts doing your job without any human on the other end, it's something else. It's not "someone cheaper can do this." It's "maybe no one needs to do this at all."

Then came the cloud.

At Microsoft, I had a front-row seat to early cloud adoption—and to the resistance it provoked. IT departments, infrastructure teams, system administrators—people whose entire careers were built around managing on-premises environments. Servers they could see. Hardware they could touch. Networks they controlled end to end. The cloud wasn't just a new delivery model. It was an existential threat.

They were right to be worried. But Microsoft wasn't just offering a better product and waiting for the market to come around.

Microsoft was pushing a licensing model for Azure and Office 365 that effectively made it impossible to refuse. The company wanted out of the old on-premises licensing structure—seat counts, core counts, all the legacy models that kept customers anchored to their own data centers. The new cloud licensing was designed to make staying on-prem economically irrational. And when hosting providers, SMBs, and enterprises pushed back—when they said they didn't want to lose their existing licenses and the control that came with them—Microsoft had a hammer: compliance audits. The threat of an audit, with the potential for massive true-up costs on under-licensed environments, eventually drove most holdouts into submission.

It wasn't a free market transition. It was a managed one. And this isn't just my recollection—licensing changes Microsoft introduced made it up to four times more expensive to run Windows Server outside Azure than inside it. AWS has estimated that half of the workloads running on Azure would move to competing clouds if the licensing costs were fair. Both the UK's Competition and Markets Authority and the European Union have opened investigations into these practices. The compliance audit playbook I saw from the inside is now a matter of regulatory record. And it's worth noting: the most aggressive enterprise software company currently embedding AI into everything—to considerable resistance from its own customer base—is, in fact, Microsoft. The playbook changes. The company doesn't.

SaaS, PaaS, IaaS—these weren't just acronyms. They were the tools that dismantled on-prem IT as a career category. Every application that moved to SaaS was a server that didn't need to be maintained. Every workload that migrated to IaaS was an infrastructure team that got smaller.

And the hyperscalers—AWS, Azure, Google Cloud—proved they could build a better data center than any company could build for itself. Not marginally better. Dramatically better. Better redundancy, better security, better uptime, better economics at scale. The argument for running your own infrastructure went from "of course we do" to "why would we?" in about a decade.

On-prem IT lost a staggering number of roles. And unlike the earlier waves—where displacement was buried in language and phased over long timelines—cloud migration happened fast enough that people could feel it coming. That's where the visceral fear really started. Not abstract concern about efficiency. The gut-level recognition that the ground was shifting.

But here's what almost nobody talks about, and it's critical:

The cloud didn't just displace jobs. It built the foundation for everything that followed.

And there's another thread that almost nobody connects to AI: the video game revolution. In the mid-1990s, companies started building dedicated graphics processors to render 3D environments for games. Nobody designing GPUs for Quake and Half-Life was thinking about neural networks. They were thinking about frame rates. But the parallel processing architecture that made real-time 3D graphics possible—thousands of small cores executing calculations simultaneously—turned out to be exactly what machine learning needed. The hardware that made your teenager's gaming rig hum is the same architecture now training the models that everyone's panicking about. Another case of a technology built for one purpose enabling something its creators never imagined.
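
A minimal sketch of the point, with illustrative shapes and numbers only: transforming a batch of game vertices and evaluating a neural-network layer are the same primitive, a big batched matrix multiply, and a chip built to run thousands of those in parallel doesn't care which one it's doing.

```python
# Illustrative only: graphics and ML reduce to the same core operation.
import numpy as np

rng = np.random.default_rng(0)

# Graphics: transform 100,000 vertices by a 4x4 matrix (rotate/project).
# One independent dot product per vertex -- embarrassingly parallel.
vertices = rng.standard_normal((100_000, 4))    # homogeneous coordinates
transform = rng.standard_normal((4, 4))
transformed = vertices @ transform

# ML: a fully connected layer is the same shape of work, just bigger.
activations = rng.standard_normal((100_000, 512))
weights = rng.standard_normal((512, 256))
layer_out = np.maximum(activations @ weights, 0.0)   # matmul + ReLU

# Same primitive, different purpose: a chip built to push the first
# at 60 frames per second is, by accident, built for the second.
```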

The hyperscale data centers that made on-prem obsolete are the same facilities that now house the AI factories. And the workhorse processors inside them—the GPUs—trace their lineage directly back to hardware designed to make video games look better; the TPUs and custom accelerators that followed were purpose-built for the same math.

AI didn't spring from nothing. It was built on top of the cloud. The cloud was built on top of the consolidation and outsourcing playbook. That playbook was built on decades of enterprise IT transformation. The hardware evolved from the gaming and graphics revolution. And the open-source software ecosystem that nobody outside of tech took seriously until it was already running everything—that became the connective tissue holding all of it together. Every layer depended on the one before it. Pull any one of them out, and AI as we know it doesn't exist.

It's one continuous arc. And I've been inside it for most of my career.

What's Actually Different This Time

Some people will read everything I've just described and say: "Sure, but this time is structurally different. AI is faster, broader, touching more domains simultaneously. The historical comparison doesn't hold."

They're half right. The speed and scope are genuinely new. But that argument actually reinforces the thesis rather than undermining it. AI is faster and broader because it's built on top of every wave that came before it. The cloud gave it scale. The outsourcing era gave it the global labor arbitrage that made training data cheap. The IT consolidation playbook gave it standardized, systematized processes to optimize. The GPU revolution gave it the hardware. Open source gave it the software ecosystem. AI didn't just arrive. It was assembled, layer by layer, over four decades. The reason it's so powerful now is precisely because of the infrastructure the previous waves built.

So yes—the impact will be bigger this time. But the structural economics haven't changed. The incentives are the same. What's different is how it feels.

Visibility. The earlier waves happened inside enterprises, buried under NDAs and euphemisms. AI is happening in public. You can open a browser and watch it work. Every product launch is a live demonstration of what it can do—and implicitly, of what it might replace.

Authenticity. IT systems didn't blur the line between human and synthetic output. A database didn't pretend to be a person. AI does. When a system writes, draws, or speaks in ways that resemble human output, people instinctively ask: What am I actually looking at? That uncertainty feeds reactions that are less about capability and more about trust—"this feels fake," "this is slop," "I don't like this"—even when the output is objectively useful.

We've seen this before in media. When CGI replaced practical effects, the reaction was quality critique—this looks fake. The shift came when digital recreations of Peter Cushing and Carrie Fisher in Rogue One crossed into something more unsettling. The reaction wasn't "this looks fake." It was: this shouldn't exist. That's the line AI is now crossing. Not questions of quality. Questions of legitimacy.

Identity. Earlier waves targeted processes—clerical work, infrastructure, repeatable tasks. The roles that disappeared were real, and the people who held them mattered. But the work itself wasn't typically something people built their identity around. Not many people defined themselves by their ability to clear checks or manage server environments.

AI is different. It's touching writing. Design. Analysis. Music. Decision-making. Creative strategy. These are domains where professional identity is inseparable from the craft itself. A writer doesn't just do writing the way an operator runs a system. The work is the identity. The skill is the self.

When a system can approximate that output—even imperfectly—it doesn't just raise a labor market question. It raises an existential one. Not "will I still have a job?" but "does what I do still matter?"

That's a question outsourcing never forced anyone to confront. Your job might move to Bangalore, but nobody questioned whether the work itself had value. AI introduces exactly that uncertainty—in the domains where people are most psychologically invested in the answer.

So the reaction isn't just economic. It's personal.

The Other Side of AI

While the public debate fixates on threat, machine learning has been quietly doing some of the most remarkable work in cultural preservation, science, and medicine in history. And the gap between what this technology is actually accomplishing and how it's being perceived may be the most important disconnect in the entire conversation.

When Peter Jackson made Get Back, his 2021 Beatles documentary, the original sessions had been recorded with mono audio. Decades of music, conversation, and creative breakthroughs were buried under noise and technical limitations no conventional restoration could fix. His team developed a machine learning system called MAL that separated individual instruments, vocals, and speaking voices from a mono mix—producing clarity that hadn't existed even when the recordings were first made.

That technology then made possible something that had been impossible for decades. The Beatles had an unfinished song—"Now and Then"—based on a rough demo Lennon recorded in his New York apartment around 1977. They'd tried to finish it in 1995, but his vocals were buried under piano and tape hiss too dense to separate conventionally. The ML system extracted Lennon's voice with startling clarity. McCartney described hearing it as an emotional moment. The surviving Beatles built a finished recording around it, incorporating George Harrison's guitar from the abandoned 1995 sessions. Released in November 2023, it won a Grammy—the first AI-assisted recording to do so. The last Beatles song, made possible only because machine learning could do what no human engineer could.

Nobody called it slop. Nobody said it shouldn't exist. The reaction was gratitude.

The Vesuvius Challenge is using machine learning to read scrolls sealed since 79 AD—papyrus from a library buried when Vesuvius erupted, carbonized and so fragile that unrolling them destroyed them. For 270 years, the only surviving library from the classical world sat unreadable. In 2023, researchers combined X-ray tomography with ML models trained to detect ink on carbonized papyrus. By 2024, a team of students had decoded over 2,000 characters—the first passages read from these texts in two millennia. A philosophical treatise on the nature of pleasure, lost for twenty centuries and recovered by machine learning.

Then there's medicine.

For fifty years, scientists had struggled to predict how proteins—the molecular machines that make living things work—fold into their three-dimensional structures. A protein's shape determines its function, and understanding that shape is essential for developing drugs and understanding disease. Experimental methods took months or years per protein and cost hundreds of thousands of dollars.

In 2020, DeepMind—the same AI lab that would later become the foundation for Google's Gemini—unveiled AlphaFold2. It predicted protein structures with near-experimental accuracy in minutes. Within two years, it had mapped virtually all 200 million known proteins and made that entire database freely available to the scientific community. Demis Hassabis and John Jumper won the 2024 Nobel Prize in Chemistry. Over three million researchers in 190 countries now use AlphaFold, accelerating work on everything from antibiotic resistance to cancer treatment to enzyme design.

That's not speculative. That's not "potential." That's a Nobel Prize–winning scientific breakthrough driven by AI, already transforming how medicine and biology operate.

AI is also transforming drug discovery. The GLP-1 medications—Ozempic, Wegovy, Zepbound—were developed traditionally through years of clinical trials. Now AI is accelerating every phase. Eli Lilly uses digital twin technology to optimize manufacturing for Mounjaro and Zepbound. ML platforms are designing next-generation GLP-1 compounds with half-lives three times longer than current drugs, identifying thousands of peptide candidates in days instead of years.

This is the same fundamental technology—machine learning, pattern recognition, neural networks—that people dismiss as "slop" when it generates an image. But when it predicts protein structures, we give it the Nobel Prize.

If machine learning can extract a dead man's voice from a degraded tape and the world celebrates—if it reads sealed scrolls, wins the Nobel, and designs drugs that help millions—but a generative AI writes a paragraph and the world recoils—what's driving the reaction?

It's context. When AI recovers something lost, it feels additive—it's giving us something we didn't have. When it generates something new in a domain that used to be exclusively human, it feels subtractive—it's doing something we thought was ours. When it saves lives, we call it progress. When it writes a cover letter, we call it a threat.

The technology is the same. The emotional valence is opposite.

The Trust Crisis

And all of this is landing in the worst possible environment.

There's a growing belief that AI is fundamentally dangerous—that removing humans from decisions will lead to catastrophic errors, even loss of life. It's a reasonable concern. But it rests on an assumption worth scrutinizing: that human involvement is inherently a safeguard.

I built the system that replaced human fraud reviewers. Those reviewers made mistakes. They had blind spots, biases, fatigue. They couldn't process a fraction of the data. My system wasn't perfect—but the idea that the humans were a reliable safety net isn't something the evidence supports. "Put a human back in the loop" isn't the simple fix people imagine. We've been removing humans from high-stakes decision loops for decades. In many cases, outcomes improved.

Then there's misinformation. Yes, generative AI can create convincing fake content—images, video, audio, text—that fools people into believing something happened that didn't. That's real, and it's a legitimate concern.

But the deeper problem is the inverse. When synthetic content becomes good enough to be indistinguishable from real content, it doesn't just mean fake things can look real. It means real things can be dismissed as fake. A genuine video of a politician saying something reprehensible can be waved away—"that's a deepfake." A real photograph can be questioned. A real document can be cast as fabricated. AI-generated content doesn't just pollute the information supply. It provides a universal alibi for denying anything inconvenient.

That's not hypothetical. It's already happening.

And it's happening at the worst possible time. Public trust in institutions—government, media, science, expertise—was already at historic lows before generative AI arrived. The political environment is toxic. Polarization has turned basic factual questions into tribal affiliations. People were already primed to disbelieve anything that challenged their worldview.

AI didn't create that environment. But it's landing squarely in the middle of it, handing people powerful new tools and new language to justify the distrust they already felt. Every previous wave of automation had the benefit of institutional credibility—even if that credibility was sometimes undeserved. AI doesn't have that cushion. Large segments of the population don't trust the companies building it, don't trust the government to regulate it, don't trust the media to report on it accurately, and don't trust experts to assess its risks honestly.

The reaction isn't just about the technology. It's about the technology in context—a context of pervasive distrust that AI is amplifying, not creating.

Where This Goes

The vendors are different—OpenAI instead of IBM, Anthropic instead of Unisys—but the motion is the same. Technology improves. Processes get leaner. Roles shift. Some disappear.

AI will change jobs. The speed is faster, the scope broader, and anyone who says otherwise isn't paying attention. But the people acting like this is the first time technology has come for jobs aren't paying attention either.

We've been doing this for forty years. I've been doing it for most of my career.

And when I say "we," I mean a specific group of people who've been largely absent from this conversation.

There's an entire generation—my generation—that was in the building when the building got rewired. Gen X built the systems that automated the first wave of jobs. We implemented the outsourcing playbook. We were at Microsoft and Amazon and IBM when the cloud was born. We watched the dot-com boom and bust. We deployed the infrastructure that AI now runs on. We have more direct operational experience with technological displacement than any generation currently in the workforce.

And we've been quiet.

The loudest voices in the AI conversation belong to the builders who are selling it, the policymakers who are regulating it, and the millennials and Gen Z workers who are encountering displacement anxiety for the first time and processing it in public. What's missing is the voice of the people who've actually been through this before—multiple times, from the inside.

The oldest of us have maybe five to ten years left in our tech careers. We're still useful—but at this point, our best asset isn't our hands on the keyboard. It's our experience. The pattern recognition we've built over decades of doing this work. The institutional memory of how displacement actually plays out—what the playbook looks like, where the real impacts land versus where the fear lands, what the euphemisms sound like when they start, and what the org chart looks like when they're done.

That experience is going to age out of the workforce. It's happening now. And if nobody captures it, the next generation will navigate AI-driven transformation without the benefit of knowing how the last four waves actually went. They'll make the same mistakes, believe the same hype, and miss the same patterns—because the people who could have told them were never asked to speak up.

Consider this me speaking up.

What's different about AI is the environment. Automation used to happen around you. Now it feels like it's happening to you—or instead of you.

It's visible. It's personal. It touches identity. It blurs the line between real and synthetic. And it's unfolding in a moment of deep institutional distrust where there's no credible referee and no shared agreement on basic facts.

The same technology that lets us hear Lennon's voice, read scrolls sealed for two millennia, solve one of biology's hardest problems, and design the next generation of life-saving drugs—that same technology is generating images and text that people can't distinguish from reality. The fear and the wonder come from the same place.

The AI revolution isn't unprecedented.

The reaction to it is.

And the generation that could have told you that—the one that was in the room for every wave, that built the systems, wrote the euphemisms, watched the jobs disappear, and then helped build the infrastructure that made the next wave possible—that generation has been sitting quietly this whole time, waiting for someone to ask.

So before we age out, before the institutional memory walks out the door and the pattern recognition retires to a condo in Boca Raton—where, fittingly, IBM built the original IBM PC in 1981 and kicked off this whole mess in the first place—don't you forget about us.

We've seen this movie before. And we know how it ends.
