An Anthropic safety researcher quit, saying the "world is in peril" in part because of AI advances.
Mrinank Sharma said the safety team "constantly [faces] pressures to set aside what matters most," citing concerns about bioterrorism and other risks.
Anthropic was founded with the explicit goal of creating safe AI; its CEO Dario Amodei said at Davos that AI progress is going too fast and called for regulation to force industry leaders to slow down.
Other AI safety researchers have left leading firms, citing concerns about catastrophic risks. Two key members of OpenAI's "Superalignment" team, tasked with steering AI development, quit in 2024, saying the company emphasized financial gain over minimizing the dangers of building "AI systems much smarter than us."
Source: Drudge Report