HALF MOON BAY, CALIFORNIA — What is the biggest risk posed by artificial intelligence (AI)? While many would point to labor markets, our attention would be better directed toward the financial system.
Financial concerns are certainly understandable. Even in 2026, the specter of 2008 haunts every conversation about economic risk. When Lehman Brothers collapsed and the global banking system teetered, governments faced a momentous choice: bail out the banks with public money or watch the financial system implode. In the United States, policymakers chose a bailout, encouraging future risk-taking and enraging the taxpayers who bore the cost.
But U.S. regulators spent the following decade building a new line of defense, which is now embedded in the global banking architecture. In the process, they offered a roadmap for addressing the systemic risks now accumulating within the AI industry.
To be sure, the Financial Stability Board (FSB) warns that regulatory frameworks designed to monitor AI are still in their early stages. But the risks remain manageable, provided regulators act before they crystallize. The AI industry has arrived at a juncture that should look familiar to anyone who remembers the pre-2008 financial system: market concentration is extreme, the interconnections between major players are deep, and the industry’s critical infrastructure runs through single points of failure.
Before 2008, risk in the financial system was assumed to be widely distributed. It was not. Leverage was hidden in off-balance-sheet vehicles, counterparty exposures were opaque, and the failure of a single institution could cascade unpredictably through the entire system. Regulation was scattered across numerous agencies, none of which had a complete picture of what was happening. Regulators had no framework for thinking about systemic risk, and no mechanism for designating the firms whose failure would bring down others.
The AI industry has a similar concentration problem. According to Menlo Ventures, just three companies — Anthropic, OpenAI, and Google — control roughly 88 percent of the enterprise large‑language‑model market. And the hardware layer is even more concentrated, with TSMC completely dominating advanced-node semiconductor manufacturing, raising concerns about a potential global compute bottleneck. When a 7.4-magnitude earthquake struck Taiwan in April 2024, it temporarily disrupted semiconductor production and reminded the world how geographically concentrated this infrastructure has become.
Fortunately, the central innovation of post-2008 financial regulation has proven effective: identify the institutions whose failure would be catastrophic, and mandate that they hold sufficient total loss-absorbing capacity (TLAC), meaning equity and long-term debt that can be written down, to fail safely. The results are clear. A Congressional Research Service analysis of U.S. bank failures shows a sharp decline in their frequency following the post-crisis regulatory reforms.
Although none of the tools introduced after the financial crisis translates directly to AI (banks hold financial assets that can be valued and stress‑tested, whereas AI systems rely on training data, model weights, and compute capacity), the underlying regulatory logic still applies. Regulators need only consider three adaptations.
The first is systemic designation and disclosure. Regulators and standard setters should identify which AI providers, cloud platforms, and chip manufacturers have become critical infrastructure for the financial system. The FSB’s October 2025 report on AI monitoring acknowledged that financial institutions are increasingly dependent on a small number of major technology providers for AI capabilities, but noted that monitoring efforts remain at an “early stage,” owing to data gaps and a lack of standardized taxonomies. Closing those gaps is the first step.
Second, operational resilience requirements should serve as a proxy for capital buffers. In lieu of TLAC-style capital, systemically important AI providers would have to demonstrate redundancy, failover capacity, and genuine substitutability. Financial firms that rely on a single AI provider should face concentration limits analogous to the large-exposure rules that prevent banks from lending too much to a single counterparty. The FSB’s Third-Party Risk Management and Oversight Toolkit already provides a framework; regulators should apply it more aggressively.