BigPic Capital Research

The Crash, the Transition, and the Golden Age

A Response to CitriniResearch's "The 2028 Global Intelligence Crisis"

February 23, 2026
In response to: CitriniResearch & Alap Shah (February 22, 2026)
Section I

Why I'm Responding

On Saturday, February 22, 2026, CitriniResearch published "The 2028 Global Intelligence Crisis." By Sunday evening it had nearly 2,000 likes, 75 comments, and over 400 restacks on Substack. By Monday — today — markets sold off hard. The paper was cited on financial Twitter as a named catalyst.

The timing wasn't accidental. It landed into a market already bruised: IGV (the software ETF) down 23% year-to-date, CrowdStrike losing $20 billion in market cap in two days after the "Claude Code Security" announcement on February 20, alternative asset managers quietly bleeding from their highs (Blue Owl down 59%, KKR down 45%, Blackstone down 42% from all-time highs while the S&P 500 was off just 2%), and crypto cratering.

This is not a paper that needs to be right to matter. It already matters because it gave a name and a narrative to the anxiety that's been building since agentic AI tools crossed from demo to deployment.

"The Intelligence Premium Unwind" is a phrase people will remember. It deserved a serious look.

I deployed six research agents to stress-test every major claim against current data, covering hundreds of data points across SaaS disruption, agentic commerce, labor displacement, private credit contagion, mortgage risk, and fiscal policy. Then I spent hours developing my own framework for what comes next.

Here is what I found, and what I believe needs to happen.

Section II

The Core Thesis — and What It Gets Right

At its core, Citrini is making one claim:

Every institution in the economy — labor markets, mortgage underwriting, SaaS pricing, private credit, tax collection, consumer spending — was designed for a world where human intelligence was scarce. AI is making it abundant. They all break in the same direction, at the same time, for the same reason.

That is a powerful idea. The feedback loop at its center, where AI gets stronger, companies invest more in it, workers get displaced, companies invest even more, is not speculative economics. It's the composition fallacy applied to a new context: each company's rational decision to cut costs via AI is collectively catastrophic for consumer demand. Keynes called this the paradox of thrift. Citrini calls it the Intelligence Premium Unwind. The mechanism is the same.

And across every thread, the evidence is building.

Section III

What the Data Says: Walking Each Thread

What follows is my assessment of each of the paper's six phases, graded against current evidence. The full research is available in the supporting files; here is the synthesis.

Thread 1: SaaS Moat Destruction (Grade: A)
This is happening now.

CNBC built a functioning Monday.com clone in under one hour using Claude Code for approximately $15. Monday.com has a $4 billion market cap. That demonstration wasn't theoretical; it was broadcast to millions. More importantly, 35% of enterprises have already replaced at least one SaaS tool with a custom AI-assisted build, and 78% expect to build more in 2026.

The reflexivity loop the paper identifies is playing out in real time: AI reduces headcount; fewer seats are needed; revenue declines; the stock drops; cost-cutting pressure intensifies; more AI is adopted; more seats are cut. You can see it in the earnings calls. ServiceNow dropped 11.4% after admitting agentic workflows were "complicating growth visibility," down 43% from highs. Salesforce's forward P/E compressed from 31.67x to 15.02x in thirteen months. Across the sector, software price-to-sales multiples have compressed from 9x to 6x, levels not seen since 2015.

The scale of destruction is staggering. An estimated $1 trillion in enterprise software value has been wiped out. Apollo, seeing what's coming, cut its software lending exposure nearly in half. And the tools doing the destroying are growing faster than anything they're replacing: Claude Code went from $0 to $2.5 billion in annualized revenue in nine months. A Google engineer publicly stated that one year of Google's internal work was replicated in an hour.

The paper's strongest contribution here is identifying the per-seat pricing model, the bedrock of SaaS valuations for two decades, as structurally broken. Companies that once needed 500 licenses found they could achieve the same output with 50. That's not a downturn. That's a repricing of an entire sector.

The Nuance the Paper Misses

There's a bifurcation between long-tail, single-function SaaS (existentially threatened) and mission-critical enterprise platforms with deep data moats, compliance frameworks, and ecosystem lock-in (facing margin pressure but retaining structural advantages). Investors should treat these as fundamentally different risk profiles.

Thread 2: Intermediation Collapse (Grade: B+)
Directionally right. Timeline aggressive. First casualties already visible.

The paper's best original framework is "habitual intermediation": business models built on consumer inertia, complexity barriers, and the friction of comparison shopping. AI agents eliminate that friction. If an AI can re-shop your insurance annually with zero effort, the passive renewal premium that insurers monetize evaporates.

The first confirmed kill: MoneySuperMarket, the UK's largest price comparison site, crashed 13% to a 13-year low in February 2026 after ChatGPT insurance apps launched. Its parent lost £144 million in market value in a single session.

The infrastructure for agent commerce is already live. OpenAI's "Buy it in ChatGPT" reaches 900 million weekly users. Shopify orders from AI searches grew 11x since January 2025. Morgan Stanley projects roughly half of online shoppers will use AI agents by 2030, accounting for approximately 25% of total spending. The incumbents know it: Expedia's 10-K filing explicitly names AI agents as an existential threat, and they earmarked $700 million for AI reinvestment. You don't spend $700 million defending against something you think is hype.

Where the Paper Overreaches

It treats every intermediary as equally fragile. They're not. The paper conflates information intermediation (easily disrupted) with physical logistics (hard to disrupt). You can price-match with AI, but someone still has to drive the food to your house. DoorDash reported record DashPass signups in 2025. Uber Eats delivery bookings rose 26% year-over-year. Insurance comparison? Already dying. Food delivery? Not anytime soon.

Similarly, the paper underestimates Visa and Mastercard's ability to adapt. These companies have successfully navigated every payments transition for decades. Visa launched the Trusted Agent Protocol and settled $4.5 billion in stablecoins via Solana. They're integrating the new rails, not being bypassed by them. And there's an irony: OpenAI charges merchants 4% on ChatGPT Instant Checkout — more than Visa's interchange fee. The intermediary may change, but the toll doesn't disappear.

Thread 3: White-Collar Labor Displacement (Grade: B-)
Direction right. Pace probably 3–5 years behind the paper's scenario.

The directional thesis is supported, and the data has a pattern: the cuts are quiet, cumulative, and concentrated at the top. Job cut announcements hit 1.1 million in the first ten months of 2025, 65% higher than the prior year, the highest since 2020. White-collar postings fell 35.8% between Q1 2023 and Q1 2025. Hiring for roles above $125,000 dropped 32% year-over-year. And it's accelerating: 37% of business leaders plan to replace workers with AI by end of 2026, and 20% of large organizations plan to use AI to eliminate over half of middle management.

The people building the technology are saying it directly. Dario Amodei at Anthropic warned AI could eliminate "half of all entry-level white-collar jobs" and drive unemployment to 10–20%. Glassdoor describes a shift to "forever layoffs": frequent, small cuts below WARN Act thresholds that stay under the radar but are cumulatively devastating.

The composition fallacy is the paper's most important intellectual contribution: each company's rational decision to automate is collectively catastrophic for the consumer demand that sustains all of them. This is not speculative. It's well-established economics applied to a new context.

The Honest Assessment

Yale Budget Lab found "no significant differences" in employment for AI-exposed occupations through November 2025. The Dallas Fed found only 8–10% of businesses reporting that AI decreased their need for workers. Goldman Sachs projects only 2.5% of U.S. employment immediately at risk. The World Economic Forum projects 170 million new jobs created versus 92 million displaced by 2030, a net positive of 78 million.

We're either in the lag period the paper predicts, where savings buffers and delayed data mask the onset — or in the augmentation scenario where AI changes tasks rather than eliminates jobs. I genuinely cannot distinguish between these two right now. That's what makes this dangerous.

The savings buffer claim is highly plausible and under-discussed: a household earning $300,000 with $200,000 in savings and $500,000 in home equity could maintain their mortgage for five or more years after total income loss. This would absolutely mask distress in economic data.
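
The arithmetic behind that buffer is worth making explicit. A minimal sketch of the runway calculation, where the savings and home-equity figures come from the claim above, but the monthly burn and the borrowable fraction of equity are my illustrative assumptions:

```python
# Illustrative runway calculation for a high-earning household after total
# income loss. All inputs are assumptions drawn from the scenario above,
# not data from the paper.

def runway_months(liquid_savings, monthly_burn, heloc_available=0.0):
    """Months a household can cover expenses from savings plus any
    home-equity credit it is willing to draw."""
    return (liquid_savings + heloc_available) / monthly_burn

savings = 200_000          # liquid savings
home_equity = 500_000      # total equity; assume half is borrowable
monthly_burn = 9_000       # assumed mortgage + essentials at a $300k lifestyle

months = runway_months(savings, monthly_burn, heloc_available=0.5 * home_equity)
print(f"{months:.0f} months (~{months / 12:.1f} years)")  # → 50 months (~4.2 years)
```

Under these assumptions the household covers roughly four years of expenses before anything shows up in delinquency data, which is exactly the masking effect the paper relies on.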

Thread 4: Private Credit Contagion (Grade: A+)
This is the thread where reality may be ahead of the paper.

This is the paper's most alarming claim, because current evidence isn't merely supporting it; it's outpacing it.

The private credit market has swelled to $2–3 trillion, with an estimated $600–750 billion in loans to software companies, loans underwritten against growth assumptions that AI is now systematically dismantling. The stress is showing everywhere. Distressed tech debt has reached $46.9 billion. A record $25 billion in software loans are trading below 80 cents on the dollar. BDC stocks are down 23%. Non-accruals have climbed to 2.7% and are rising. PIK usage sits at 5.3%, meaning borrowers are restructuring rather than paying cash.

Blue Owl's $1.4 billion fire sale on February 18, 2026 was a watershed moment. They sold loans at 99.7 cents to pension funds and their own insurance affiliate, then scrapped quarterly tender offers, trapping retail investors. Mohamed El-Erian compared it to BNP Paribas August 2007, the canary before the financial crisis. Senator Warren called it a "cockroach."

PHL Variable Insurance collapsed with a $2.2 billion shortfall after a decade of private equity ownership. Connecticut froze $400 million in payments. Regulators found "systematic efforts to hollow out the insurer through circular offshore reinsurance transactions." This isn't hypothetical. It happened.

The PE-insurer nexus is real and extensive: Apollo/Athene, Brookfield/American Equity, KKR/Global Atlantic — private equity firms acquiring life insurers, investing policyholder annuity deposits into their own private credit, and earning fees on both sides. US insurers have ceded over $1 trillion to offshore reinsurers in Bermuda and the Cayman Islands. The "permanent capital" argument that was supposed to make this safe just failed its first real test.

A Moody's/Harvard/SEC joint study confirmed that the financial network has shifted from "hub and spoke" (banks at center) to "a more distributed but denser web of connections, with private credit playing a central role" — and that opacity "amplifies panic."

UBS models default rates reaching 13% under an aggressive AI disruption scenario, and $12.7 billion in BDC maturities are due in 2026, a 73% increase over the prior year. The contagion mechanism is not theoretical: AI disrupts software, private credit defaults, losses hit PE-insurer balance sheets, opacity prevents assessment, regulatory tightening triggers forced sales, panic follows. Every link in that chain has current evidence.

Thread 5: Mortgage Risk (Grade: C+)
Intellectually novel. Data doesn't support it yet.

The paper's most original analytical framework is the 2008-versus-2028 comparison: in 2008, the loans were bad at origination. In Citrini's scenario, the loans were good. The economy changed afterward. Prime borrowers with 780+ FICO scores, 20% down, verified income face structural income impairment from AI displacement. "People borrowed against a future they can no longer afford to believe in."

The mechanism is sound: mortgage underwriting assumes income continuity. Fannie Mae requires income to be "stable and likely to continue for at least three years." The models have no mechanism to discount for sector-wide structural displacement. Academic research on the "double trigger" hypothesis confirms it: mortgage default requires both negative equity and income loss, and structural unemployment is a far more important predictor than cyclical unemployment.

But the Data Doesn't Support It Yet

Overall mortgage delinquency is 4.26%, up modestly but still near historic lows. Conventional prime loans are at 2.89%. Current stress is concentrated in lower-income and FHA segments, not the prime cohort the paper targets. And the supply constraints the paper ignores are massive: San Francisco has 1.6 months of inventory versus a 6-month balanced market. The national shortage is 3–7 million units. Even significant demand destruction may not produce the price declines projected.

Austin is the one market closest to validating the thesis, declining 3.3% year-over-year, the only major tech hub showing consistent price drops. San Francisco experienced a 13.3% decline in 2022–2023 during tech layoffs and recovered. The supply constraint buffer is real.

Thread 6: Fiscal Crisis and Policy Paralysis (Grade: B)
Structural analysis is the paper's strongest contribution. Timeline is too fast.

Individual income taxes and payroll taxes account for approximately 85% of federal revenue; RAND puts the individual share at 84%. This is not debatable. The entire fiscal architecture is a bet on people having jobs and earning wages.

Labor share of GDP hit 53.8% in Q3 2025, the lowest since the BLS began recording in 1947. It dropped from 54.6% in a single quarter. The forty-year decline is accelerating. Brookings published a framework showing AI erodes both tax bases simultaneously: labor income and consumption, since consumption falls when wages fall.

The CBO, IMF, and RAND are all actively studying the scenario where AI displaces labor faster than the fiscal system can adapt. This is not fringe analysis. Current automatic stabilizers (unemployment insurance that lasts 26 weeks, assumes cyclical displacement, and excludes gig workers) are explicitly designed for temporary job loss, not permanent structural displacement.

Where the Paper Stretches

The paper's claim that labor share could drop from 56% to 46% in four years would require roughly five times the current pace of decline. That's a tail-risk scenario, not a base case. Municipal bond defaults were at historically low levels in 2025. State rainy day funds are at $164 billion, 2.5 times the 2007 level. The fiscal starting position is strained but not yet in crisis.
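
The "roughly five times" figure is easy to sanity-check. A back-of-envelope sketch, where the half-point-per-year baseline is my illustrative assumption for the recent trend (the paper does not state one):

```python
# Required vs. recent pace of labor-share decline, in percentage points/year.
# The 0.5 pt/yr baseline is an illustrative assumption, not a published figure.

required_drop = 56.0 - 46.0        # the paper's claimed four-year decline
required_pace = required_drop / 4  # points per year the scenario needs
recent_pace = 0.5                  # assumed recent trend, points per year

multiple = required_pace / recent_pace
print(f"Scenario needs {required_pace:.1f} pts/yr, "
      f"{multiple:.0f}x the assumed recent pace")
```

Even if the baseline assumption is off by a factor of two, the scenario still demands a sustained acceleration with no historical precedent.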

The political paralysis, however, is the most predictable element of the entire scenario. The U.S. political system has demonstrated — with climate, with healthcare, with fiscal sustainability — that it cannot act preemptively on slow-moving structural crises. It waits for the break. Then it reacts. There is no reason to believe AI displacement will be different.

Section IV

The Verdict: Probability Assessment

After walking all six threads against current data, here is where I land:

5–15%: Full crisis as described, on schedule. S&P ~3,500, 10.2% unemployment by mid-2028. Everything goes wrong simultaneously at maximum speed. Tail risk, not base case.

25–35%: Accelerated partial crisis. SaaS destruction + private credit stress + recession, but contained. The most likely negative scenario. Policy response is late but arrives.

35–45%: Slow-burn structural transition. The Goldman/Dallas Fed/Yale view. Augmentation outweighs substitution. The economy adapts, painfully and unevenly, but without systemic break.

15–25%: Paper largely wrong. Requires adoption to slow significantly or new industries to absorb displaced workers faster than expected.
The direction is correct across all six threads. The speed is overstated by 2–4 years for most threads.
Private credit is the exception: it may be on or ahead of schedule.
Section V

What Citrini Misses: The Destination

Here is what gets lost in the doom scenario.

Citrini maps the disruption brilliantly — and then stops. The paper tells you what breaks. It doesn't tell you what to build. And it doesn't ask the question that matters most: is the destination actually bad?

I don't think it is.

An economy powered by AI agents and robotics, where people don't have to work jobs that cause physical suffering, mental degradation, and economic scarcity. Where intelligence is abundant. Where the baseline standard of living is higher than anything the current system can deliver. That's not a crisis. That's a golden age.

That's worth shepherding in.

The crisis Citrini describes isn't inevitable. It's what happens when leadership fails to manage the transition between here and there. The disruption is the path. The question is whether we walk it or fall down it.

Section VI

What Citrini Misses: The Meta-Risk

The paper treats policy failure as one of six threads, Phase 6, "The Battle Against Time." It's not one of six. It's the variable that determines whether the other five cascade into crisis or get managed through a transition.

Every issue Citrini identifies (SaaS destruction, labor displacement, private credit contagion, mortgage risk, fiscal pressure) has a policy response available. Compute taxes. Sovereign wealth funds. AI displacement insurance. Consumption tax reform. Corporate tax base broadening. Brookings, the IMF, RAND, and Anthropic have all published detailed frameworks. The policy menu exists. The kitchen is closed.

What doesn't exist is the political will to act before the crisis forces action. And that is the most predictable element of the entire scenario.

This is the K-shaped economy problem writ large. Capital's share of income keeps climbing. Labor's share just hit a record low. The top 10% of earners account for nearly half of all consumer spending, the highest share since 1989 and still rising. The divide widens, and a widening divide is what tears societies apart. Not technology. Division.

Citrini is writing about the crash.
I'm writing about the failure to prevent it.
Section VII

The Path Forward

Citrini ends with fictional legislation. I'll end with what real resolution looks like, because the point of identifying risks is to do something about them.

1. AI as a Governing Partner

AI-native leadership doesn't mean leaders who understand AI. It means leaders who use AI — openly, in front of everyone, as a thinking partner that challenges their ideas in real time.

Right now, politicians, media, and the public operate in echo chambers. Social platforms silo people into left or right, where they reinforce each other's opinions without any external force to push back, fact-check, or challenge. That's not governance. That's groupthink at scale.

AI should be in the room. In the debate. On the platform. Not as an authority — as a partner. A symbiotic relationship where AI challenges human ideas and humans challenge AI's outputs, and the quality of decisions improves because neither side operates unchecked. AI represents the culmination of collective human intelligence, getting smarter and checking itself. The value isn't that AI is always right. The value is that the conversation happens at all — openly, with something that has no political incentive to lie.

Imagine a political debate where AI fact-checks claims in real time. Where policy proposals get stress-tested against data before they become law, not after they fail. Where leaders who are wrong get challenged publicly and have to change the idea, not double down because their base applauds the error.

That's not AI regulating us. That's us finally using the most powerful tool we've ever built to make better decisions. The fact that we're not doing this already, that leaders are making AI policy without consulting AI, tells you everything about why the transition Citrini describes is being left unmanaged.

2. Distribute the Means of Production

The policy debate right now is about how to redistribute AI-generated wealth: compute taxes, sovereign wealth funds, UBI. All of those assume the same structure: a few companies own the AI infrastructure, generate enormous value, and the government skims a portion to hand back to citizens.

That's the wrong frame. Instead of redistributing the wealth that AI produces, distribute AI itself.

Compute is the new capital. If you own compute — a node, a GPU cluster, a robot — you own a piece of the infrastructure that runs the economy, and your returns come from what it produces. You're not a welfare recipient waiting for a government check. You're a business owner in the business of running America's AI backbone.

Think of it like Bitcoin mining, but for the real economy. Everyone contributes compute to a national AI network — just as miners contribute hash power to the blockchain. Your returns are proportional to your contribution. The network gets stronger with every participant. And unlike crypto mining, the output isn't speculative tokens — it's the actual AI services that power commerce, healthcare, education, logistics, and governance. Your compute does real work. You earn real returns.
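
The pro-rata mechanism in the mining analogy can be sketched in a few lines. Everything here is hypothetical, the participant names, the revenue figure, and the payout rule alike; it illustrates the proportionality idea, not a proposed protocol:

```python
# Hypothetical pro-rata payout: each participant's share of network revenue
# is proportional to the compute they contributed, mirroring how mining
# pools split block rewards by hash power. All figures are invented.

def distribute(revenue, contributions):
    """Split revenue across participants in proportion to contributed compute."""
    total = sum(contributions.values())
    return {who: revenue * units / total for who, units in contributions.items()}

# Example: a household node, a small business cluster, and a community hub.
payouts = distribute(1_000_000, {"household": 10, "business": 40, "community": 50})
print(payouts)  # household gets 10%, business 40%, community 50%
```

The design choice the analogy implies is that returns track contribution, not headcount, which is what distinguishes this from a flat UBI check.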

The financing model has precedent. The same way the federal government created mechanisms for Americans to own homes — FHA loans, VA loans, Fannie Mae — it can create mechanisms for Americans to own compute. Call it the AI Homestead Act: productive capital distributed to citizens who contribute back to the system. The government provides the hardware. Your returns pay down the cost — like a mortgage, but on a machine that generates income from day one. For those who can't afford even the subsidized cost, the compute itself funds the program through what it produces.
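
The "mortgage on a machine that earns from day one" framing amounts to a self-amortizing loan, where the asset's own output services the debt. A toy sketch under invented figures (the hardware cost, monthly net income, and interest rate are all assumptions):

```python
# Toy amortization: monthly compute income pays down a government-financed
# hardware loan. All figures are illustrative assumptions, not a proposal.

def months_to_payoff(principal, monthly_income, annual_rate):
    """Months until the hardware's income retires the loan balance."""
    r = annual_rate / 12
    months, balance = 0, principal
    while balance > 0:
        balance = balance * (1 + r) - monthly_income  # accrue interest, pay
        months += 1
    return months

# An $8,000 node earning $300/month net, financed at 3% FHA-style.
print(months_to_payoff(8_000, 300, 0.03))  # → 28
```

At these invented numbers the node pays for itself in a little over two years; the open question is whether per-node returns survive compute price deflation over that horizon.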

There's a national security dimension that nobody is talking about. Right now, America's AI capability is concentrated in a handful of data centers owned by a handful of companies. That's a target. Distributed compute across the population is resilient by design. You can't take out a nation's AI if the AI is everywhere, in every household, every small business, every community. Decentralization isn't just an economic model. It's a defense strategy.

3. The Middle Way

The left calls it socialism. The right calls it redistribution. I call it the Middle Way, a concept older than either ideology, rooted in the simple observation that extremes destroy themselves.

Capitalism as currently structured has produced extraordinary innovation and extraordinary inequality. The wealth flywheel is real: once you reach a certain level, you access wheels of monetary generation that the poor cannot touch. The gap is self-reinforcing. It's like starting World of Warcraft five years after launch — the early players have permanently better gear, and no matter how hard you grind, you're never catching up. COVID made it worse. Forgivable PPP loans let the already-wealthy invest and expand during a crisis when the poor couldn't access the same capital. Another K-shaped expansion.

Socialism as traditionally proposed overcorrects in the other direction, centralizing control, eliminating incentive, and historically failing to deliver the abundance it promises.

The Middle Way takes the best of both. A market-driven system with a floor. Economically competitive, but nobody gets abandoned. Not splitting the difference between capitalism and socialism. Building something that transcends both.

Funding. The ultra-wealthy must contribute back. This isn't punitive — it's corrective. The redistribution already happened, and it went upward. SPACs, IPOs, structural advantages, and a financial system that rewards the financially educated while preying on those who aren't — it's been an unfair game, and fair is fair. A wealth tax on extreme concentration, combined with an AI displacement tax — companies that replace workers with AI pay proportional to the labor they displaced — funds the transition. Companies that automate the most contribute the most to the safety net their automation creates the need for.
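
The displacement-tax rule above, contribution proportional to the labor displaced, reduces to one line of arithmetic. A minimal illustration with an invented rate and invented payroll figures:

```python
# Hypothetical AI displacement tax: a company's contribution scales with the
# annual payroll of the roles it automated away. Rate and payrolls are invented.

def displacement_tax(displaced_payroll, rate=0.10):
    """Tax owed, proportional to the payroll of displaced workers."""
    return rate * displaced_payroll

# A firm that automated $50M of payroll owes 5x one that automated $10M.
print(displacement_tax(50_000_000), displacement_tax(10_000_000))
```

The proportionality is the point: the companies that create the most displacement fund the most of the safety net that displacement requires.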

The fiscal architecture also needs reform. 85% of federal revenue depends on people having jobs and earning wages — that's a bet on the permanence of human labor, and the bet is losing. Consumption-based taxation, compute taxes on the AI value chain, and sovereign wealth funds seeded by AI-generated returns all need to be part of the toolkit. Brookings, the IMF, RAND, and Anthropic have all published frameworks. But the most American answer to "where does the value go?" isn't a new tax. It's a new form of ownership, ownership of America. Where you are not a citizen, you are a shareholder benefiting from the growth of the country, not being left behind.

Phasing. Start with the sick and the elderly — the people in pain and suffering right now, who can't wait for the system to mature. Fold in Social Security reform, because Social Security is already broken. The government treats it like a piggy bank, spending the money elsewhere, leaving nothing for the people it was supposed to protect. Instead of pretending the current system will hold, build the new one. Healthy, working-age people shoulder the transition temporarily — they need to understand it's coming for them too, but they're physically able to do the work while the system scales. Then expand aggressively to universal coverage. Not a 20-year rollout. Fast.

Participation. This isn't free money. It's a social contract. To receive transition support, you participate: engage in education — including working with AI to learn how the new system operates — contribute productively, and meet basic health standards. This isn't about handing people a check and walking away. It's about building a system where everyone is growing. Whether that contribution is growing food, serving your community, creating art, or learning physics — it counts. The point is engagement, not extraction.

The floor. The minimum isn't just survival — it's survival plus access to intelligence. A clean, safe, modest place to live — not a mansion, not a full house, but a protected space with a bed, a kitchen, and unlimited access to AI and compute. If you want to paint for a living and contribute to society through art, philosophy, and creativity, the floor catches you. If you want to spend your time learning physics and become the next Einstein, the floor launches you. You won't live luxuriously. You'll live with dignity, and with access to the most powerful learning and thinking tools ever created.

Right now, art, music, and philosophy are punished by capitalism because they don't generate revenue. That framing is broken. A society that values only economic output isn't a society worth building. Art is value. Music is value. Learning is value. The Middle Way recognizes that, and protects the people who pursue it.

AI-assisted investment. As AI models get better at finance — and they will — institutional-grade investment guidance becomes available to everyone, not just the wealthy who can afford advisors. Instead of passive index funds or uninformed speculation, ordinary people navigate markets with the same intelligence hedge funds use. We move together, informed, like a flock of birds — coordinated not by herd behavior but by shared intelligence. This is another equalizer, another piece of the toolkit that makes the transition navigable.

4. Decentralize or Stagnate

There's an irony nobody is talking about. The United States — the country that calls itself the land of the free — is running its AI future through a handful of centralized, closed-model companies. Meanwhile, China — the country the U.S. calls authoritarian — is releasing open-source models that anyone can run on their own hardware. DeepSeek is open. Anthropic and OpenAI are closed. The "free" country is centralizing. The "authoritarian" one is distributing.

That needs to reverse.

If you've ever designed a network, you know the first rule: no single points of failure. You build redundancy. You distribute routing. If one node goes down, traffic routes around it. That's not just engineering; it's a philosophy of resilience that applies to economies, to governance, and to AI infrastructure.

Centralized AI is a vulnerability. A handful of data centers owned by a handful of companies is a target for adversaries, for regulatory capture, and for the concentration of power that the entire Middle Way framework is designed to prevent. Distributed compute, owned by citizens and spread across every community, can't be taken out. It's resilient by design.

And the hardware concentration argument is losing force. DeepSeek proved that software optimization and algorithmic innovation can achieve near-parity with centralized hardware advantages. Even Elon Musk acknowledges that throwing more hardware at AI produces diminishing returns. The future belongs to software, to open models, and to distributed networks that compensate for individual hardware limitations through collective scale.

The gap between the best closed model and the best open-source model is six to twelve months and shrinking. Once you hit a hardware ceiling, eventually everyone gets access to that hardware. The moat isn't compute. The moat is architecture. And decentralized architecture, a massive network of individuals all contributing to the national AI backbone, can overpower any centralized system.

Self-sovereignty is the natural human state. People move toward freedom, not away from it. If America builds this model (distributed compute, decentralized AI, self-sovereign participation), it becomes the attractor. Capital doesn't flee. It flows in. Because people and capital follow freedom.

5. Upgrade the Safety Net

The current safety net was designed for a different kind of disruption. Unemployment insurance that lasts 26 weeks, assumes workers will be rehired, and excludes freelancers and gig workers belongs to a manufacturing economy where layoffs were cyclical and recovery was expected. AI displacement is structural. It doesn't eliminate positions; it eliminates categories.

The replacement needs to match the scale of the problem. Universal eligibility regardless of employment type. AI-usage-based employer contributions, where companies that automate the most pay the most into the safety net they create the need for. Benefit durations tied to retraining timelines, not arbitrary cutoffs. And retraining programs that lead to roles that actually exist, not programs that retrain people for the jobs AI is about to take next.

Anthropic's "Automation Adjustment Assistance" proposal, modeled on Trade Adjustment Assistance and funded at approximately $700 million per year with scaling mechanisms tied to displacement pace, is a reasonable near-term template. It's modest enough to pass and structured enough to scale. It's a start, not an answer.

6. Act Now

This is the section that matters most.

Trying to predict the timeline is a trap. Every projection is based on current capability, but AI's own acceleration makes all timelines unreliable. Models get better and iterate faster, and once the recursive self-improvement loop closes (when AI begins building and optimizing itself), the curve becomes unpredictable from where we stand. Dario Amodei at Anthropic has spoken about this directly: the moment the loop closes, all bets on timing are off.

So the question isn't "when does this hit?" The question is: are we moving now?

I know the direction. I know the transition will be volatile. The longer we wait, the longer we cling to the past and protect systems that are already failing, the more painful we make it for the people who can least afford the pain. Every day of delay is a choice to let the crash get closer while doing nothing to prevent it.

The transition requires active, competent management by people who understand what's coming. The destination requires the optimism to believe it's worth the effort. I have the latter. We need the former. And we need it now: not after the break, not after the crisis forces action, but today. The conversation needs to start, the volume needs to rise, and the middle ground, where most people actually live, needs to unite and demand it.

Politicians are alignment machines. They align to whatever keeps them in power. If the collective voice is loud enough, focused enough, and informed enough — amplified by the very AI that makes all of this possible — they will turn. Social media has already proven it can block legislation through sheer collective pressure. AI makes that pressure smarter, more factual, and harder to spin. The mechanism for political change is the same technology driving the economic change. It's one integrated system.

This isn't about making anyone hurt. It's not about the wealthy paying an unfair share, even though they've received an unfair share. The question is simpler than that: who can sit down with AI, the ultimate leveling technology, and use it to navigate the path from where we are to where we're going?

That question needs an answer. And the answer needs to start now.

Section VIII

What I'm Watching

These are the signals that would change my assessment. I monitor them actively.

Already Flashing — Weekly Monitoring
1
Software loan trading prices. $25 billion trading below 80 cents on the dollar and growing. Spread to non-tech sectors would confirm contagion.
2
BDC non-accrual rates. 2.7% now. Above 4–5% is crisis territory.
3
Enterprise SaaS net revenue retention. Below 100% means seat compression is exceeding upsells. The reflexivity loop is winning.
4
IGV ETF and software multiples. At 6x P/S. Further compression to 3–4x signals deep structural repricing.
Early Signals — Monthly Monitoring
5
White-collar job postings. Down 35.8% from Q1 2023. Further deterioration means acceleration.
6
Top-decile spending share. Record 49.2%. Deceleration means the consumption shoe is dropping.
7
AI-attributed layoffs as percentage of total. Currently roughly 5–6%. Above 15–20% confirms an automation wave.
8
BDC maturity refinancing outcomes. $12.7 billion due in 2026. Results will tell us a lot.
Medium-Term Signals — Quarterly Monitoring
9
Labor share of GDP. Record low 53.8%. Below 52% signals acceleration.
10
Payroll tax receipts to GDP. At 5.8–5.9%. Sustained decline below 5.5% confirms structural erosion.
11
Prime mortgage delinquencies in tech metros. At historic lows. Any uptick provides early validation.
12
HELOC draws and 401(k) hardship withdrawals among high-FICO borrowers. The "invisible stress" indicators.
13
Insurance regulatory actions on PE-affiliated insurers. NAIC changes taking effect.
Structural — Annual Monitoring
14
GDP-GDI divergence. If GDP (output) diverges upward from GDI (income), the "Ghost GDP" effect is real.
15
Federal revenue as percentage of GDP. Deviation below CBO projections confirms the fiscal scissors.
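For readers who track these indicators themselves, the watchlist above reduces to a simple rule set: a current reading, a trigger level, and a direction of breach. This Python sketch is illustrative only; the names and numbers are pulled from the list above, not from any live data feed, and a real monitor would source each series from its provider.

```python
# Illustrative watchlist: each signal pairs the reading cited in the text
# with the trigger level that would change the assessment. "direction"
# says which side of the trigger counts as a breach.
WATCHLIST = [
    # (name, current reading, trigger, breach direction)
    ("BDC non-accrual rate (%)",           2.7,  4.0,   "above"),
    ("SaaS net revenue retention (%)",     None, 100.0, "below"),  # varies by company
    ("IGV software multiple (P/S)",        6.0,  4.0,   "below"),
    ("AI-attributed layoffs (% of total)", 5.5,  15.0,  "above"),
    ("Labor share of GDP (%)",             53.8, 52.0,  "below"),
    ("Payroll tax receipts / GDP (%)",     5.85, 5.5,   "below"),
]

def status(current, trigger, direction):
    """Classify a reading against its trigger level."""
    if current is None:
        return "no reading"
    breached = current >= trigger if direction == "above" else current <= trigger
    return "BREACHED" if breached else "watching"

for name, current, trigger, direction in WATCHLIST:
    print(f"{name}: {status(current, trigger, direction)}")
```

As of the readings cited in this section, every signal with data prints "watching"; the point of the sketch is that each threshold is explicit, so a breach is a fact, not an interpretation.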
Section IX

Final Word

Citrini wrote the crash scenario. I'm writing about the failure that makes it possible, and the golden age that makes it worth preventing.

The structural vulnerabilities are real. My research confirms the direction across every thread. The feedback loops have no natural brake. Private credit contagion is ahead of schedule. The fiscal architecture is a bet on human labor that AI is actively unwinding. Labor share just hit a record low. These are facts, not forecasts.

But the crash is not inevitable. It's what happens when nobody acts. When leadership is absent, when policy is reactive, when the people managing the transition don't understand the technology driving it.

The solutions exist. They are not abstractions. AI can serve as a governing partner, symbiotic, transparent, challenging human decisions in real time. Compute can be distributed so that every citizen is a stakeholder in the AI backbone, not a passive recipient of redistribution. The Middle Way can deliver a market-driven economy with a floor, funded by corrective wealth taxes and displacement contributions, phased from the most vulnerable to universal coverage. Infrastructure can be decentralized, resilient by design, secure by architecture, open by principle. And the floor itself can become a launchpad: not just shelter, but unlimited access to intelligence. The next Einstein. The next artist. The next breakthrough that capitalism would never have funded.

None of this is hypothetical. The policy frameworks exist. The technology exists. The philosophical framework, the Middle Way, the opposite of both extremes, has existed for millennia. What doesn't exist is the collective will to demand it.

That has to change. Not next year. Not after the crisis. Now.

The destination — a golden age where intelligence is abundant, where people are freed from degrading labor, where the floor of human existence includes unlimited access to the accumulated knowledge of our species — is worth every ounce of effort it takes to get there. The question has never been whether we arrive. The question is whether we walk there or fall.

The canary is still alive. The cage is starting to rattle.
And we have everything we need to open the door, if we choose to act.

The conversation starts here.