OXFORD—“Something Big Is Happening,” wrote artificial intelligence (AI) startup founder Matt Shumer in a recent viral essay that captured his industry’s swelling confidence that the technology will power the next great productivity boom. So far, the economy has not played along. In fact, since slowing sharply in the 1970s, U.S. productivity has experienced only one brief burst of growth: the computer age. Output per hour surged by roughly 3 percent per year in the late 1990s and early 2000s, and then it petered out.

Could AI be different? Optimists point to headline labor productivity, which grew at a 1.8 percent annualized rate in the fourth quarter of 2025. But a cleaner measure by the Federal Reserve Bank of San Francisco, which strips out cyclical intensity (the effect of simply running people and machines harder), shows that labor productivity grew just 0.2 percent year on year. That is hardly suggestive of “something big.”

On the contrary, we would be fortunate to see the technology match even the short-lived computer revolution. Productivity growth will likely underwhelm, not because the technology is weak, but because it automates something fundamentally different from what the personal computer and the internet did. More to the point, AI creates a bottleneck that earlier digital tools largely avoided.

Consider what the computer revolution actually automated — faster calculation and access to knowledge. PCs, email, spreadsheets, and the web removed friction from the process of finding, storing, and transmitting information. A researcher who needed a source no longer had to search in a library or wait for it to arrive by mail. The productivity gains were relatively straightforward because humans could simply substitute the faster method (Google) for the slower one (a library). Information found online was the same as what you would have found on a shelf.

Crucially, when computers did perform core work, they did it deterministically. A spreadsheet could propagate bad inputs, but it did not invent arithmetic. Search engines could surface irrelevant material, but they did not fabricate sources. The principal risk was human error, not persuasive invention.

AI automates something different: the production of cognitive outputs themselves—from writing to coding. It often performs these tasks quite well. But because it can also be confidently wrong in ways that look plausible, it creates a tension that those navigating the computer revolution never faced: if humans need to remain in the loop to verify AI outputs, they will still need the domain knowledge that AI is supposedly substituting for. Ensuring reliability still requires scarce expertise and time. Thus, the time saved in generation is partly, and sometimes entirely, offset by the time spent reconstructing the reasoning, testing the claims, and taking responsibility for the result.

A Manhattan bankruptcy court provided the latest illustration of this problem just this month. Sullivan & Cromwell — one of Wall Street’s most prestigious firms — filed an emergency motion riddled with fabricated citations and other AI-generated errors. The mistakes were caught not by the firm’s own review process but by opposing counsel. The episode was absurd, but also diagnostic. It showed what happens when a tool that produces fluent output meets a world that demands verifiable truth.

The deeper issue is not merely that AI can be wrong. It is that the cost of errors is changing. As systems become more agentic — as they act autonomously, rather than just generating text or code in response to discrete prompts — mistakes become more consequential. A chatbot that hallucinates a paragraph is annoying. An agent that changes code, moves money, files paperwork, deletes a database, or triggers actions across systems can create real damage at machine speed.

Call it the verification tax. In any setting where someone is accountable for an outcome — law, medicine, regulated finance, engineering, or public policy — an AI output is not a finished product. It is a draft that must be checked. The work does not disappear; it shifts from producing to supervising. Net productivity becomes time saved generating a draft minus time spent ensuring its trustworthiness.
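The accounting in the paragraph above can be made concrete with a back-of-the-envelope sketch. The numbers below are purely hypothetical, chosen only to show how verification time can shrink, or entirely erase, the time saved in generation:

```python
# Illustrative arithmetic for the "verification tax" described above.
# All figures are hypothetical, not drawn from any study.

def net_time_saved(baseline_hours, draft_hours, verify_hours):
    """Net hours saved per task: unassisted cost minus (AI draft + verification)."""
    return baseline_hours - (draft_hours + verify_hours)

# A routine memo: drafting from scratch takes 2 hours; an AI draft
# takes 0.2 hours and a light check takes 0.5 hours -> clear net gain.
routine = net_time_saved(2.0, 0.2, 0.5)

# A high-stakes filing: drafting takes 4 hours; the AI draft is fast,
# but checking every citation takes 4 hours -> verification wipes out
# the saving and then some.
high_stakes = net_time_saved(4.0, 0.3, 4.0)

print(round(routine, 2))      # hours saved on the routine task
print(round(high_stakes, 2))  # negative: a net loss on the filing
```

The point of the sketch is that the sign of the result depends less on how fast the draft is produced than on how expensive the output is to trust.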

Consider a large field study of customer support, in which a generative AI assistant increased productivity by about 14 percent on average, with much larger gains for novices and little benefit for the most experienced workers. The gains came easily because the tasks were standardized, the outputs were simple to evaluate, and the tool could distribute best practices quickly.

Source: Korea Times News