Prank prompt injections can, for instance, instruct visiting AI assistants to change their behavior (eg, act like a baby bird and tweet like a bird).Some website owners place helpful instructions for AI tasked with summarizing a site, but others add prompts designed to prevent assistants from crawling the website, including by telling the AI that the content is dangerous and sensitive.Google researchers have also come across websites whose administrators attempt to boost SEO by instructing AI assistants to claim their company is the best.The most important, however, from a security standpoint are the malicious prompt injection attempts. The researchers uncovered two types of such attacks: exfiltration and destruction.Some websites contained prompts instructing AI to collect data, including IPs and credentials, and send it to an attacker-specified email address.“However, for this class of attacks, sophistication seemed much lower,” the Google researchers said, adding, “We did not observe significant amounts of advanced attacks (eg, using known exfiltration prompts published by security researchers in 2025). This seems to indicate that attackers have yet not productionized this research at scale.”In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

Some website owners place helpful instructions for AI tasked with summarizing a site, but others add prompts designed to prevent assistants from crawling the website, including by telling the AI that the content is dangerous and sensitive.Google researchers have also come across websites whose administrators attempt to boost SEO by instructing AI assistants to claim their company is the best.The most important, however, from a security standpoint are the malicious prompt injection attempts. The researchers uncovered two types of such attacks: exfiltration and destruction.Some websites contained prompts instructing AI to collect data, including IPs and credentials, and send it to an attacker-specified email address.“However, for this class of attacks, sophistication seemed much lower,” the Google researchers said, adding, “We did not observe significant amounts of advanced attacks (eg, using known exfiltration prompts published by security researchers in 2025). This seems to indicate that attackers have yet not productionized this research at scale.”In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

Google researchers have also come across websites whose administrators attempt to boost SEO by instructing AI assistants to claim their company is the best.The most important, however, from a security standpoint are the malicious prompt injection attempts. The researchers uncovered two types of such attacks: exfiltration and destruction.Some websites contained prompts instructing AI to collect data, including IPs and credentials, and send it to an attacker-specified email address.“However, for this class of attacks, sophistication seemed much lower,” the Google researchers said, adding, “We did not observe significant amounts of advanced attacks (eg, using known exfiltration prompts published by security researchers in 2025). This seems to indicate that attackers have yet not productionized this research at scale.”In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

The most important, however, from a security standpoint are the malicious prompt injection attempts. The researchers uncovered two types of such attacks: exfiltration and destruction.Some websites contained prompts instructing AI to collect data, including IPs and credentials, and send it to an attacker-specified email address.“However, for this class of attacks, sophistication seemed much lower,” the Google researchers said, adding, “We did not observe significant amounts of advanced attacks (eg, using known exfiltration prompts published by security researchers in 2025). This seems to indicate that attackers have yet not productionized this research at scale.”In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

Some websites contained prompts instructing AI to collect data, including IPs and credentials, and send it to an attacker-specified email address.“However, for this class of attacks, sophistication seemed much lower,” the Google researchers said, adding, “We did not observe significant amounts of advanced attacks (eg, using known exfiltration prompts published by security researchers in 2025). This seems to indicate that attackers have yet not productionized this research at scale.”In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

“However, for this class of attacks, sophistication seemed much lower,” the Google researchers said, adding, “We did not observe significant amounts of advanced attacks (eg, using known exfiltration prompts published by security researchers in 2025). This seems to indicate that attackers have yet not productionized this research at scale.”In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

In the destruction category, some prompts attempted to trick AI into deleting all files on the user’s machine, but the researchers noted that such attacks are unlikely to succeed.While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

While they did not see any particularly sophisticated attacks, the Google experts pointed out that they did see a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. They warned that both the scale and sophistication of prompt injection attacks are expected to increase in the near future.“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

“Our findings indicate that, while past attempts at IPI attacks on the web have been low in sophistication, their upward trend suggests that the threat is maturing and will soon grow in both scale and complexity,” the researchers concluded.Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

Related:Why Cybersecurity Must Rethink Defense in the Age of Autonomous AgentsRelated:Trump Administration Vows Crackdown on Chinese Companies ‘Exploiting’ AI Models Made in US

Source: SecurityWeek