The Pentagon’s latest contracts come at a time of anxiety about the potential for over-reliance on the technology on the battlefield, said Helen Toner, interim executive director at Georgetown University’s Center for Security and Emerging Technology.

“A lot of modern warfare is based on people sitting in command centers behind monitors, making complicated decisions about confusing, fast-moving situations,” said Toner, a former board member of OpenAI. “AI systems can be helpful in terms of summarizing information or looking at surveillance feeds and trying to identify potential targets.”

But questions about the appropriate levels of human involvement, risk and training are still being worked out, she said.

“How do you roll out these tools rapidly for them to be effective and provide strategic advantage,” Toner asked, “while also recognizing that you need to train the operators and make sure they know how to use them and don’t over-trust them?”

Such concerns were raised by Anthropic. The tech company said it wanted assurances in its contract that the military would not use its technology in fully autonomous weapons or in the surveillance of Americans. Defense Secretary Pete Hegseth said the company must allow for any uses the Pentagon deemed lawful.

Anthropic sued after President Donald Trump, a Republican, tried to stop all federal agencies from using the company’s chatbot Claude and Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.

OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that it was the same agreement it announced in early March.

“As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.

One company’s agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.

Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.

The Pentagon’s point of view

Emil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic.

“And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.

Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear if the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work.

Both companies make open-source AI models, systems in which some key components are publicly accessible for others to build upon, which Michael has described as a priority for providing an “American alternative” to China’s rapid development of such technology.

The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.

“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”

In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University.

AI can be used to better predict when a helicopter needs maintenance or to figure out how to move large numbers of troops and large amounts of gear efficiently, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.

But people shouldn’t become overly dependent on it.

“There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.

“A lot of modern warfare is based on people sitting in command centers behind monitors, making complicated decisions about confusing, fast-moving situations,” said Toner, a former board member of OpenAI. “AI systems can be helpful in terms of summarizing information or looking at surveillance feeds and trying to identify potential targets.”But questions about the appropriate levels of human involvement, risk and training are still being worked out, she said.“How do you roll out these tools rapidly for them to be effective and provide strategic advantage?” Toner asked, “While also recognizing that you need to train the operators and make sure they know how to use them and don’t over trust them?”Such concerns were raised by Anthropic. The tech company said it wanted assurances in its contract that the military would not use its technology in fully autonomous weapons and the surveillance of Americans. Defense Secretary Pete Hegseth said the company must allow for any uses the Pentagon deemed lawful.Anthropic sued after President Donald Trump, a Republican, tried to stop all federal agencies from using the company’s chatbot Claude and Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that it was the same agreement it announced in early March.“As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.One company’s agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. 
The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.The Pentagon’s point of viewEmil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic.“And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear if the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work. 
Both companies make open-source AI models, which Michael has described as a priority to provide an “American alternative” to China’s rapid development of AI systems in which some key components are publicly accessible for others to build upon.The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University.AI can be used to better predict when a helicopter needs maintenance or figure out how to efficiently move large amounts of troops and gear, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.But people shouldn’t become overly dependent on it.“There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.Related:Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge AttacksRelated:The Mythos Moment: Enterprises Must Fight Agents with AgentsRelated:Claude Mythos Finds 271 Firefox VulnerabilitiesRelated:OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal

But questions about the appropriate levels of human involvement, risk and training are still being worked out, she said.“How do you roll out these tools rapidly for them to be effective and provide strategic advantage?” Toner asked, “While also recognizing that you need to train the operators and make sure they know how to use them and don’t over trust them?”Such concerns were raised by Anthropic. The tech company said it wanted assurances in its contract that the military would not use its technology in fully autonomous weapons and the surveillance of Americans. Defense Secretary Pete Hegseth said the company must allow for any uses the Pentagon deemed lawful.Anthropic sued after President Donald Trump, a Republican, tried to stop all federal agencies from using the company’s chatbot Claude and Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that it was the same agreement it announced in early March.“As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.One company’s agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. 
The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.The Pentagon’s point of viewEmil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic.“And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear if the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work. 
Both companies make open-source AI models, which Michael has described as a priority to provide an “American alternative” to China’s rapid development of AI systems in which some key components are publicly accessible for others to build upon.The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University.AI can be used to better predict when a helicopter needs maintenance or figure out how to efficiently move large amounts of troops and gear, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.But people shouldn’t become overly dependent on it.“There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.Related:Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge AttacksRelated:The Mythos Moment: Enterprises Must Fight Agents with AgentsRelated:Claude Mythos Finds 271 Firefox VulnerabilitiesRelated:OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal

“How do you roll out these tools rapidly for them to be effective and provide strategic advantage?” Toner asked, “While also recognizing that you need to train the operators and make sure they know how to use them and don’t over trust them?”Such concerns were raised by Anthropic. The tech company said it wanted assurances in its contract that the military would not use its technology in fully autonomous weapons and the surveillance of Americans. Defense Secretary Pete Hegseth said the company must allow for any uses the Pentagon deemed lawful.Anthropic sued after President Donald Trump, a Republican, tried to stop all federal agencies from using the company’s chatbot Claude and Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that it was the same agreement it announced in early March.“As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.One company’s agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. 
The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.The Pentagon’s point of viewEmil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic.“And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear if the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work. 
Both companies make open-source AI models, which Michael has described as a priority to provide an “American alternative” to China’s rapid development of AI systems in which some key components are publicly accessible for others to build upon.The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University.AI can be used to better predict when a helicopter needs maintenance or figure out how to efficiently move large amounts of troops and gear, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.But people shouldn’t become overly dependent on it.“There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.Related:Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge AttacksRelated:The Mythos Moment: Enterprises Must Fight Agents with AgentsRelated:Claude Mythos Finds 271 Firefox VulnerabilitiesRelated:OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal

Such concerns were raised by Anthropic. The tech company said it wanted assurances in its contract that the military would not use its technology in fully autonomous weapons and the surveillance of Americans. Defense Secretary Pete Hegseth said the company must allow for any uses the Pentagon deemed lawful.Anthropic sued after President Donald Trump, a Republican, tried to stop all federal agencies from using the company’s chatbot Claude and Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that it was the same agreement it announced in early March.“As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.One company’s agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. 
The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.The Pentagon’s point of viewEmil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic.“And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear if the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work. 
Both companies make open-source AI models, which Michael has described as a priority to provide an “American alternative” to China’s rapid development of AI systems in which some key components are publicly accessible for others to build upon.The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University.AI can be used to better predict when a helicopter needs maintenance or figure out how to efficiently move large amounts of troops and gear, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.But people shouldn’t become overly dependent on it.“There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.Related:Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge AttacksRelated:The Mythos Moment: Enterprises Must Fight Agents with AgentsRelated:Claude Mythos Finds 271 Firefox VulnerabilitiesRelated:OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal

Anthropic sued after President Donald Trump, a Republican, tried to stop all federal agencies from using the company’s chatbot Claude and Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that it was the same agreement it announced in early March.“As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.One company’s agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.The Pentagon’s point of viewEmil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic.“And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear if the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work. 
Both companies make open-source AI models, which Michael has described as a priority to provide an “American alternative” to China’s rapid development of AI systems in which some key components are publicly accessible for others to build upon.The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University.AI can be used to better predict when a helicopter needs maintenance or figure out how to efficiently move large amounts of troops and gear, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.But people shouldn’t become overly dependent on it.“There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.Related:Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge AttacksRelated:The Mythos Moment: Enterprises Must Fight Agents with AgentsRelated:Claude Mythos Finds 271 Firefox VulnerabilitiesRelated:OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal

OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that it was the same agreement it announced in early March.“As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.One company’s agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.The Pentagon’s point of viewEmil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic.“And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear if the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work. 
Both companies make open-source AI models, which Michael has described as a priority to provide an “American alternative” to China’s rapid development of AI systems in which some key components are publicly accessible for others to build upon.The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University.AI can be used to better predict when a helicopter needs maintenance or figure out how to efficiently move large amounts of troops and gear, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.But people shouldn’t become overly dependent on it.“There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.Related:Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge AttacksRelated:The Mythos Moment: Enterprises Must Fight Agents with AgentsRelated:Claude Mythos Finds 271 Firefox VulnerabilitiesRelated:OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal

“As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.One company’s agreement with the Pentagon included language that said there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties.Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.The Pentagon’s point of viewEmil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic.“And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear if the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work. 
Both companies make open-source AI models, which Michael has described as a priority to provide an “American alternative” to China’s rapid development of AI systems in which some key components are publicly accessible for others to build upon.The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University.AI can be used to better predict when a helicopter needs maintenance or figure out how to efficiently move large amounts of troops and gear, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.But people shouldn’t become overly dependent on it.“There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.Related:Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge AttacksRelated:The Mythos Moment: Enterprises Must Fight Agents with AgentsRelated:Claude Mythos Finds 271 Firefox VulnerabilitiesRelated:OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal


Source: SecurityWeek