A legal dispute between OpenAI and co-founder Elon Musk has escalated into a wider debate about the existential risks of artificial intelligence, after Musk warned in court that unchecked AI development could pose a threat to humanity.

Testifying during proceedings over OpenAI's corporate structure and mission, Musk described what he called the 'worst-case situation' as a 'Terminator situation,' arguing that advanced AI systems, if developed without strict safeguards, could become uncontrollable and dangerous to human life.

He warned in court that 'the biggest risk would be that AI kills us all,' framing the lawsuit as more than a governance dispute and instead as a question of long-term human survival. According to him, 'that is the outcome we need to avoid, and it requires being extremely careful about how these systems are developed.'

The case centres on Musk's allegation that OpenAI has drifted from its original nonprofit mission and become increasingly profit-driven following major investment deals, including partnerships with large technology firms.

However, Musk repeatedly shifted the focus away from corporate governance, instead stressing what he views as the broader danger posed by advanced AI systems. He argued that the pace of development demands extreme caution, particularly as AI models grow more autonomous and capable.

His testimony reflects long-standing views he has expressed about artificial intelligence, which he has previously described as one of the most serious long-term risks facing humanity.

During proceedings, Musk's references to science-fiction-style outcomes, including 'Terminator' scenarios, drew observers' attention and appeared to frustrate the court at times, with the judge urging a focus on legal rather than speculative arguments.

Despite this, Musk continued to link the case to broader concerns about the trajectory of AI development, suggesting that safety considerations were central to how organisations like OpenAI should operate.

His comments align with a wider debate within the AI industry about so-called existential risk, the possibility that advanced artificial intelligence could, in extreme scenarios, cause irreversible harm to humanity.

OpenAI has pushed back against Musk's arguments, maintaining that its evolution into a for-profit structure was necessary to secure the funding required to build and scale advanced AI systems.

Source: International Business Times UK