Dario Amodei, the CEO of Anthropic, offered a deceptively simple line in a recent interview, saying that “growth and economic value will come very easily” with powerful artificial intelligence (AI), but “distribution of benefits, distribution of wealth, political freedom” will not.
That sentence captures the core risks of the AI era. There are convincing arguments that the root of societal collapse is inequality of wealth and power among citizens. AI threatens to supercharge that inequality to a level we have never known. The question is not whether AI will be transformative. It will. The question is whether the incentives governing AI will pull society toward cohesion or fracture.
Three forces are converging: AI monetization via advertising, AI-driven disruption of current economic and sociopolitical infrastructure, and the rise of agentic systems that act autonomously in domains where human governance moves too slowly.
Frontier AI is expensive. We have all seen investment numbers that defy comprehension. Training, serving and scaling models requires capital and compute at a staggering level. This reality pushes companies toward business models that can monetize at scale, and advertising is the most reliable way to do that, as legacy social media platforms have proven.
But an AI assistant is a confessional. People reveal anxieties they would not post publicly. They workshop relationship problems, political doubts, medical fears and financial secrets. When that intimate stream becomes fuel for targeting, the risk is not simply privacy loss. It is behavioral manipulation on a scale far beyond the Facebook-Cambridge Analytica scandal. Anthropic's Super Bowl commercial mocking OpenAI's decision to carry advertising was humorous, but it masks a real foundational dilemma.
This is where tribalization becomes an existential social risk. Social platforms polarized society by funneling engagement down rabbit holes that exploit ingrained biases. If an AI assistant knows what makes you feel seen, what makes you angry or what scares you, then persuasion becomes personal, continuous and subconscious, all for the benefit of the advertiser. The nightmare scenario is not merely better ads. It is political operatives and advertisers renting access to the most detailed psychological profile ever assembled, not inferred from clicks but volunteered in moments of vulnerability. The real question is whether governance can keep pace before intimate persuasion becomes the default business model. At that point, free will is not merely a philosophical debate; it is the currency of AI business.
Amodei’s distribution warning is not abstract. The IMF has estimated that about 40 percent of global employment is exposed to AI, and in advanced economies about 60 percent of jobs may be impacted, with a meaningful share facing reduced labor demand or disappearance.
IMF Managing Director Kristalina Georgieva called AI a “tsunami” for labor markets, warning that entry-level roles, often the on-ramp for younger workers, could be wiped out. Amodei described the current “centaur phase” in software engineering and warned that white-collar disruption could unfold in a “low single-digit numbers of years,” not decades.
Even if AI creates trillions in GDP, distribution is the hard part. If productivity gains accrue mainly to the owners of models, chips, data and platforms, then the middle class becomes a historical artifact. Societies can survive technological change, but they struggle to survive technological change plus concentrated gains plus eroded dignity.
This is where politics becomes inevitable. A future of abundance without distribution is not stability; it is grievance with better tools.
Source: Korea Times News