At Davos on Jan. 22, Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind made a striking admission. Asked whether they would welcome slowing down artificial intelligence (AI) development, both said yes. "Maybe it would be good to have a slightly slower pace," Hassabis ventured, "so that we can get this right societally." Amodei agreed: "I would prefer that. I think that would be better for the world."
Then came the catch. Such restraint would require "international collaboration," Hassabis said. Amodei was blunter: "It's very hard to have an enforceable agreement where they slow down and we slow down." The message was one of helplessness: two of the world's most powerful technology executives freely admitting they are trapped in a race neither chose and neither can escape alone.
This should concern us. The people building what may become the most consequential technology in human history are publicly asking for help to slow down, and no one is answering. The United States, consumed by great-power competition with China, is in no mood to discuss restraint. China, for its part, is not about to take direction from Washington. The superpowers are deadlocked. If coordination is to happen, it will not come from them.
Enter South Korea and other middle powers. While they may not be home to the leading frontier AI labs, they have something the superpowers do not: the ability to convene without being seen as advancing hegemonic interests, and the collective market leverage to make standards stick.
South Korea is particularly well positioned to lead this kind of coalition-building. It combines advanced technological and industrial capacity along the AI value chain (including high-bandwidth memory) with diplomatic credibility as a bridge between the United States, China, Asia and Europe. Its successful hosting of the 2025 Asia-Pacific Economic Cooperation (APEC) summit in Gyeongju, North Gyeongsang Province, which focused on AI, demonstrated precisely how a middle power can convene economies with divergent interests around practical outcomes. Building on that momentum, APEC offers a concrete platform to form coalitions on AI safety: groups of economies agreeing on shared safety tests for advanced models, common reporting of serious AI incidents and baseline safeguards for high-impact systems before deployment.
The country's likely co-chairmanship this year of the Global Partnership on Artificial Intelligence (GPAI), alongside Singapore, further illustrates how middle powers can jointly steer global coordination. Hosted within the Organization for Economic Co-operation and Development, the GPAI links technical experts, governments and industry to translate research into policy-ready standards. A coalition anchored there could align evaluation methods, independent audits and disclosure requirements across markets.
Skeptics will note that previous coordination attempts have failed. In 2023, an open letter calling for a six-month pause on advanced AI training gathered thousands of signatures and was ignored within weeks. But that effort asked for unilateral restraint without coordination infrastructure. The moment competitive pressure resumed, the pause collapsed. Moratoriums without mechanisms do not work.
In practice, coalitions built around shared standards do not freeze innovation; they slow the most dangerous racing dynamics by creating common checkpoints that everyone must pass, allowing leading labs to move forward together rather than fear being undercut. South Korea, the European Union, the U.K., India, Brazil and other large markets represent billions of users, and frontier AI companies will comply with standards to access them.
Amodei made this point, perhaps inadvertently, at Davos. If chip exports to China could be restricted, he argued, "this isn't a question of competition between the U.S. and China. This is a question of competition between me and Demis, which I'm very confident we can work out." In other words: remove the geopolitical dimension, and the Western labs can coordinate. Middle powers cannot remove the U.S.-China rivalry. But they can build coordination infrastructure that functions regardless of it.
The window for this is not open indefinitely. Both Amodei and Hassabis suggested that AI systems capable of recursive self-improvement may arrive within one to five years. The infrastructure needs to be built now, while the lab leaders themselves are signaling openness, and before the technology makes restraint impossible.
Source: Korea Times News