The agentic era has amplified the challenges of vulnerability management (i.e., static and incomplete scans) because of the asymmetry between threat actors and defenders.

Threat actors are leveraging agentic AI capable of continuous, autonomous action: systems that can reason, plan, use tools and execute multi-step tasks toward their goals. Agentic AI can identify and exploit vulnerabilities at the speed of compute, and compute is expected to increase 3x in 2026, according to numbers from OpenAI.

Simultaneously, vulnerability management is stuck with the same time-consuming and error-prone manual tasks of the past decade: periodic scanning, incomplete visibility and generic CVE severity scores. The outcome of legacy vulnerability management programs is "a list, not a fix." Metrics for these programs have tended to focus on the number of vulnerabilities rather than time to remediation or a demonstrable reduction in risk.

The Engineering Productivity Paradox

The velocity gap between threat actors and defenders is also widening internally between DevOps and SecOps. AI-assisted development has shifted programming from syntax to design briefs. Tools like Cursor now generate nearly a billion lines of accepted code per day.

This new dynamic recalls the same old refrain of productivity vs. security. Many cybersecurity professionals view vibe coding as nothing more than vulnerability-as-a-service. For example, Moltbook, a vibe-coded social network for OpenClaw agents, exposed 1.5 million APIs.

The security gap is massive. According to SonarSource's State of Code survey (PDF), 96% of developers do not fully trust AI-generated code, but only 48% verify it before committing it to production.
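One lightweight way to narrow that verification gap is automated secret scanning before code is committed. The sketch below is illustrative only: the regex patterns and the function name are assumptions for this example, not a production ruleset, and a real pre-commit hook would rely on a maintained secret-scanning tool.

```python
import re

# Illustrative detection patterns -- a real hook would use a maintained ruleset.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"][A-Za-z0-9/+_-]{16,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

snippet = 'db_password = load_from_vault()\napi_key = "sk_live_abcdefghij0123456789"\n'
print(scan_for_secrets(snippet))  # -> [(2, 'generic_api_key')]
```

Wired into a pre-commit hook or CI gate, a check like this blocks exactly the class of hardcoded-secret defect that AI-generated code tends to introduce, before it ever reaches production.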
A Q3 2025 report (PDF) from Armis finds that AI-generated code is embedding hardcoded secrets, misconfigured communication protocols and vulnerable software libraries at scale. The speed of AI adoption in 2026 means these metrics may quickly become as outdated as static vulnerability scanning, but the direction is clear: vibe coding generates vulnerabilities faster than human teams can audit them. If VibeOps is a new car, then vulnerability management is a horse and buggy.

Resilience is Progress, Not Perfection

Cybersecurity teams must practice resilience each day so they are prepared when it matters most. The path forward requires building a knowledge graph for security operations, ingesting telemetry from IT, OT, IoT, cloud, identity and application layers. This security data fabric maps relationships between assets, correlates the likelihood of exploitation with the impact on the business, and accounts for existing mitigating or compensating controls. This context enables prioritized remediation of business-critical risks rather than reliance on generic severity scores.

If organizations can achieve this level of continuous, comprehensive and contextualized visibility, they will be well on their way to vulnerability management 10.0. Ultimately, the goal is agentic remediation, but it is understandable that security operations teams may be skeptical. After all, aren't AI agents responsible for this mess? To allay those concerns, agentic remediation should be adopted in phases.

In phase one, agents will discover vulnerabilities, identify the correct fix and open change management tickets. Critically, the human remains in the loop.
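The contextual prioritization the data fabric enables can be sketched in a few lines. This is a minimal illustration of the scoring idea only: the field names, the 0.3 control discount and the weights are assumptions for the example, not a calibrated model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    base_severity: float        # generic CVSS-style score, 0-10
    exploit_likelihood: float   # e.g. an EPSS-style probability, 0-1
    asset_criticality: float    # business impact of the affected asset, 0-1
    compensating_control: bool  # WAF, segmentation, virtual patch, etc.

def contextual_risk(f: Finding) -> float:
    """Weight generic severity by exploitability and business impact,
    then discount findings already covered by a mitigating control."""
    score = f.base_severity * f.exploit_likelihood * f.asset_criticality
    if f.compensating_control:
        score *= 0.3  # illustrative discount, not a calibrated value
    return round(score, 2)

findings = [
    Finding("CVE-A", 9.8, 0.02, 0.2, False),  # "critical" CVSS, but unlikely, low-value asset
    Finding("CVE-B", 7.5, 0.80, 0.9, False),  # "high" CVSS, likely and business-critical
]
for f in sorted(findings, key=contextual_risk, reverse=True):
    print(f.cve_id, contextual_risk(f))
```

Even this toy version shows the point: the nominally "high" CVE-B outranks the nominally "critical" CVE-A once exploitability and business impact are taken into account, which is exactly what generic severity scores miss.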
To continue the horse-and-buggy metaphor, phase one is like engaging autopilot in a car: the driver still maintains control. The primary barrier to adoption of agentic remediation is trust, which is built through demonstrated reliability under human oversight.

In phase two, agents will begin to act directly in unambiguous scenarios where the correct response is deterministic. For example, if a developer commits a hardcoded secret to a public repository, or a cloud storage bucket is misconfigured to allow public access, those conditions may be automatically remediated. These are "known bad" conditions where the risk of inaction is immediate.

A Better Outcome

We are witnessing the emergence of AI-enabled persistent threats – from APTs to "AiPTs." Nation-state cyberattacks have become more powerful than ever before, and the barrier to entry has never been lower. I will be discussing this further during my RSAC 2026 Keynote Address, "AI vs. AI: How to Reshape Defense Faster than Attackers Reshape Offense."

The shift toward vulnerability management in the agentic era must change how we measure success. If we want to measure mean time to remediation and verifiable risk reduction, then we must decouple discovery from remediation. Today, the same teams and workflows handle both, creating bottlenecks. Security teams need solutions that eliminate these bottlenecks and promote greater efficiency by decreasing the number of scans and their duration, which in turn reduces network impact.

The promise of vulnerability management 10.0 is that agentic systems can continuously sanitize the network: addressing "known bad" issues, enforcing configuration baselines, rotating exposed credentials and more. Human experts can focus on the work that demands human judgment: migration initiatives, complex architectural challenges and strategic risk decisions. Threat actors have already made the jump to machine-speed attacks.
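The phased guardrail described above can be sketched as a deterministic playbook: an explicit allowlist of "known bad" conditions gets an automatic fix, and everything else is escalated to a human. The playbook entries and function names here are hypothetical, chosen only to illustrate the control structure.

```python
# Illustrative "known bad" playbook: each condition maps to exactly one
# deterministic remediation. Anything outside the allowlist stays phase one.
KNOWN_BAD_PLAYBOOK = {
    "public_bucket": "set bucket access policy to private",
    "hardcoded_secret": "revoke credential and open a rotation ticket",
}

def remediate(finding_type: str, open_ticket) -> str:
    """Phase two for allowlisted conditions; phase one (human review) otherwise."""
    action = KNOWN_BAD_PLAYBOOK.get(finding_type)
    if action is not None:
        return f"auto-remediated: {action}"  # deterministic, safe to act directly
    open_ticket(finding_type)                # ambiguous: human stays in the loop
    return "escalated to human review"

tickets: list[str] = []
print(remediate("public_bucket", tickets.append))
print(remediate("unusual_process_tree", tickets.append))
print(tickets)  # only the ambiguous finding generated a ticket
```

The design choice is that trust is encoded structurally: the agent can only act where the playbook is explicit, so expanding its autonomy is a deliberate, reviewable change to the allowlist rather than a behavioral drift.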
The defenders who match that speed with appropriate human oversight will define resilience in the agentic era.
Threat actors are leveraging agentic AI capable of continuous, autonomous actions that can reason, plan, use tools and execute multi-step tasks toward their goals. Agentic AI is able to identify and exploit vulnerabilities at the speed of compute, andcompute is expected to increase 3x in 2026, according to numbers from OpenAI.Simultaneously, vulnerability management is stuck with the same time-consuming and error-prone manual tasks of the past decade: periodic scanning, incomplete visibility and generic CVE severity scores.The outcome of legacy vulnerability management programs is “a list, not a fix.” Metrics for these programs have tended to focus on the number of vulnerabilities, rather than time to remediation or a demonstrable reduction in risk.The Engineering Productivity ParadoxThe velocity gap between threat actors and defenders is also widening internally between DevOps and SecOps. AI-assisted development has shifted programming from syntax to design briefs. Tools like Cursor now generate nearly a billion lines of accepted code per day.This new dynamic recalls the same old refrain of productivity vs. security. Many cybersecurity professionals view vibe coding as nothing more than vulnerability-as-a-service. For example,Moltbook, a vibe-coded social network for OpenClaw agents, exposed 1.5 million APIs.The security gap is massive. According to SonarSource’s State of Code survey (PDF), 96% of developers do not fully trust AI-generated code, but only 48% verify it before committing it to production. 
A Q3 2025 report (PDF) from Armis finds that AI-generated code is embedding hardcoded secrets, misconfigured communication protocols and vulnerable software libraries at scale.The speed of AI adoption in 2026 means these metrics may quickly become as outdated as static vulnerability scanning, but the direction is clear: vibe coding generates vulnerabilities faster than human teams can audit them.If VibeOps is a new car, then vulnerability management is a horse-and-buggy.Resilience is Progress, Not PerfectionCybersecurity teams must practice resilience each day, so they are prepared when it matters most.The path forward requires building a knowledge graph for security operations, ingesting telemetry from IT, OT, IoT, cloud, identity and application layers.This security data fabric maps relationships between assets, correlates the likelihood of exploitation with the impact on business, and accounts for existing mitigating or compensating controls. This context enables prioritized remediation of business-critical risks rather than relying on generic severity.If organizations can achieve this level of continuous, comprehensive and contextualized visibility, they will be well on their way to vulnerability management 10.0.Ultimately, the goal is agentic remediation, but it is understandable that security operations may be skeptical. After all, aren’t AI agents responsible for this mess? To allay those concerns, agentic remediation should be adopted in phases.In phase one, agents will discover vulnerabilities, identify the correct fix and open change management tickets. Critically, the human remains in the loop. 
To continue the horse-and-buggy metaphor, phase one is like engaging autopilot in a car: the driver still maintains control.The primary barrier to adoption of agentic remediation is trust, which is built through demonstrated reliability with human oversight.In phase two, agents will begin to act directly in unambiguous scenarios where the correct response is deterministic. For example, if a developer commits a hardcoded secret to a public repository or a cloud storage bucket is misconfigured to allow public access, they may be automatically remediated. These are “known bad” conditions where the risk of inaction is immediate.A Better OutcomeWe are witnessing the emergence of AI-enabled persistent threats – from APTs to “AiPTs.” Nation-state cyberattacks have become more powerful than ever before, and the barrier to entry has never been lower. I will be discussing this further during myRSAC 2026 Keynote Address, “AI vs. AI: How to Reshape Defense Faster than Attackers Reshape Offense.”The shift toward vulnerability management in the agentic era must change how we measure success. If we want to measure mean time to remediation and verifiable risk reduction, then we must decouple discovery from remediation. Today, the same teams and workflows handle both discovery and remediation, creating bottlenecks.Security teams need to leverage solutions that eliminate these bottlenecks and promote greater efficiency by decreasing the number of scans and their duration, which in turn reduces network impact.The promise of vulnerability management 10.0 is that agentic systems can continuously sanitize the network, address “known bad” issues, enforce configuration baselines, rotate exposed credentials and more. Human experts can focus on the work that demands human judgment: migration initiatives, complex architectural challenges and strategic risk decisions. Threat actors have already made the jump to machine-speed attacks. 
The defenders who match that speed with appropriate human oversight will define resilience in the agentic era.Related:From Open Source to OpenAI: The Evolution of Third-Party Risk
Simultaneously, vulnerability management is stuck with the same time-consuming and error-prone manual tasks of the past decade: periodic scanning, incomplete visibility and generic CVE severity scores.The outcome of legacy vulnerability management programs is “a list, not a fix.” Metrics for these programs have tended to focus on the number of vulnerabilities, rather than time to remediation or a demonstrable reduction in risk.The Engineering Productivity ParadoxThe velocity gap between threat actors and defenders is also widening internally between DevOps and SecOps. AI-assisted development has shifted programming from syntax to design briefs. Tools like Cursor now generate nearly a billion lines of accepted code per day.This new dynamic recalls the same old refrain of productivity vs. security. Many cybersecurity professionals view vibe coding as nothing more than vulnerability-as-a-service. For example,Moltbook, a vibe-coded social network for OpenClaw agents, exposed 1.5 million APIs.The security gap is massive. According to SonarSource’s State of Code survey (PDF), 96% of developers do not fully trust AI-generated code, but only 48% verify it before committing it to production. 
A Q3 2025 report (PDF) from Armis finds that AI-generated code is embedding hardcoded secrets, misconfigured communication protocols and vulnerable software libraries at scale.The speed of AI adoption in 2026 means these metrics may quickly become as outdated as static vulnerability scanning, but the direction is clear: vibe coding generates vulnerabilities faster than human teams can audit them.If VibeOps is a new car, then vulnerability management is a horse-and-buggy.Resilience is Progress, Not PerfectionCybersecurity teams must practice resilience each day, so they are prepared when it matters most.The path forward requires building a knowledge graph for security operations, ingesting telemetry from IT, OT, IoT, cloud, identity and application layers.This security data fabric maps relationships between assets, correlates the likelihood of exploitation with the impact on business, and accounts for existing mitigating or compensating controls. This context enables prioritized remediation of business-critical risks rather than relying on generic severity.If organizations can achieve this level of continuous, comprehensive and contextualized visibility, they will be well on their way to vulnerability management 10.0.Ultimately, the goal is agentic remediation, but it is understandable that security operations may be skeptical. After all, aren’t AI agents responsible for this mess? To allay those concerns, agentic remediation should be adopted in phases.In phase one, agents will discover vulnerabilities, identify the correct fix and open change management tickets. Critically, the human remains in the loop. 
To continue the horse-and-buggy metaphor, phase one is like engaging autopilot in a car: the driver still maintains control.The primary barrier to adoption of agentic remediation is trust, which is built through demonstrated reliability with human oversight.In phase two, agents will begin to act directly in unambiguous scenarios where the correct response is deterministic. For example, if a developer commits a hardcoded secret to a public repository or a cloud storage bucket is misconfigured to allow public access, they may be automatically remediated. These are “known bad” conditions where the risk of inaction is immediate.A Better OutcomeWe are witnessing the emergence of AI-enabled persistent threats – from APTs to “AiPTs.” Nation-state cyberattacks have become more powerful than ever before, and the barrier to entry has never been lower. I will be discussing this further during myRSAC 2026 Keynote Address, “AI vs. AI: How to Reshape Defense Faster than Attackers Reshape Offense.”The shift toward vulnerability management in the agentic era must change how we measure success. If we want to measure mean time to remediation and verifiable risk reduction, then we must decouple discovery from remediation. Today, the same teams and workflows handle both discovery and remediation, creating bottlenecks.Security teams need to leverage solutions that eliminate these bottlenecks and promote greater efficiency by decreasing the number of scans and their duration, which in turn reduces network impact.The promise of vulnerability management 10.0 is that agentic systems can continuously sanitize the network, address “known bad” issues, enforce configuration baselines, rotate exposed credentials and more. Human experts can focus on the work that demands human judgment: migration initiatives, complex architectural challenges and strategic risk decisions. Threat actors have already made the jump to machine-speed attacks. 
The defenders who match that speed with appropriate human oversight will define resilience in the agentic era.Related:From Open Source to OpenAI: The Evolution of Third-Party Risk
The outcome of legacy vulnerability management programs is “a list, not a fix.” Metrics for these programs have tended to focus on the number of vulnerabilities, rather than time to remediation or a demonstrable reduction in risk.The Engineering Productivity ParadoxThe velocity gap between threat actors and defenders is also widening internally between DevOps and SecOps. AI-assisted development has shifted programming from syntax to design briefs. Tools like Cursor now generate nearly a billion lines of accepted code per day.This new dynamic recalls the same old refrain of productivity vs. security. Many cybersecurity professionals view vibe coding as nothing more than vulnerability-as-a-service. For example,Moltbook, a vibe-coded social network for OpenClaw agents, exposed 1.5 million APIs.The security gap is massive. According to SonarSource’s State of Code survey (PDF), 96% of developers do not fully trust AI-generated code, but only 48% verify it before committing it to production. A Q3 2025 report (PDF) from Armis finds that AI-generated code is embedding hardcoded secrets, misconfigured communication protocols and vulnerable software libraries at scale.The speed of AI adoption in 2026 means these metrics may quickly become as outdated as static vulnerability scanning, but the direction is clear: vibe coding generates vulnerabilities faster than human teams can audit them.If VibeOps is a new car, then vulnerability management is a horse-and-buggy.Resilience is Progress, Not PerfectionCybersecurity teams must practice resilience each day, so they are prepared when it matters most.The path forward requires building a knowledge graph for security operations, ingesting telemetry from IT, OT, IoT, cloud, identity and application layers.This security data fabric maps relationships between assets, correlates the likelihood of exploitation with the impact on business, and accounts for existing mitigating or compensating controls. 
This context enables prioritized remediation of business-critical risks rather than relying on generic severity.If organizations can achieve this level of continuous, comprehensive and contextualized visibility, they will be well on their way to vulnerability management 10.0.Ultimately, the goal is agentic remediation, but it is understandable that security operations may be skeptical. After all, aren’t AI agents responsible for this mess? To allay those concerns, agentic remediation should be adopted in phases.In phase one, agents will discover vulnerabilities, identify the correct fix and open change management tickets. Critically, the human remains in the loop. To continue the horse-and-buggy metaphor, phase one is like engaging autopilot in a car: the driver still maintains control.The primary barrier to adoption of agentic remediation is trust, which is built through demonstrated reliability with human oversight.In phase two, agents will begin to act directly in unambiguous scenarios where the correct response is deterministic. For example, if a developer commits a hardcoded secret to a public repository or a cloud storage bucket is misconfigured to allow public access, they may be automatically remediated. These are “known bad” conditions where the risk of inaction is immediate.A Better OutcomeWe are witnessing the emergence of AI-enabled persistent threats – from APTs to “AiPTs.” Nation-state cyberattacks have become more powerful than ever before, and the barrier to entry has never been lower. I will be discussing this further during myRSAC 2026 Keynote Address, “AI vs. AI: How to Reshape Defense Faster than Attackers Reshape Offense.”The shift toward vulnerability management in the agentic era must change how we measure success. If we want to measure mean time to remediation and verifiable risk reduction, then we must decouple discovery from remediation. 
Today, the same teams and workflows handle both discovery and remediation, creating bottlenecks.Security teams need to leverage solutions that eliminate these bottlenecks and promote greater efficiency by decreasing the number of scans and their duration, which in turn reduces network impact.The promise of vulnerability management 10.0 is that agentic systems can continuously sanitize the network, address “known bad” issues, enforce configuration baselines, rotate exposed credentials and more. Human experts can focus on the work that demands human judgment: migration initiatives, complex architectural challenges and strategic risk decisions. Threat actors have already made the jump to machine-speed attacks. The defenders who match that speed with appropriate human oversight will define resilience in the agentic era.Related:From Open Source to OpenAI: The Evolution of Third-Party Risk
The Engineering Productivity ParadoxThe velocity gap between threat actors and defenders is also widening internally between DevOps and SecOps. AI-assisted development has shifted programming from syntax to design briefs. Tools like Cursor now generate nearly a billion lines of accepted code per day.This new dynamic recalls the same old refrain of productivity vs. security. Many cybersecurity professionals view vibe coding as nothing more than vulnerability-as-a-service. For example,Moltbook, a vibe-coded social network for OpenClaw agents, exposed 1.5 million APIs.The security gap is massive. According to SonarSource’s State of Code survey (PDF), 96% of developers do not fully trust AI-generated code, but only 48% verify it before committing it to production. A Q3 2025 report (PDF) from Armis finds that AI-generated code is embedding hardcoded secrets, misconfigured communication protocols and vulnerable software libraries at scale.The speed of AI adoption in 2026 means these metrics may quickly become as outdated as static vulnerability scanning, but the direction is clear: vibe coding generates vulnerabilities faster than human teams can audit them.If VibeOps is a new car, then vulnerability management is a horse-and-buggy.Resilience is Progress, Not PerfectionCybersecurity teams must practice resilience each day, so they are prepared when it matters most.The path forward requires building a knowledge graph for security operations, ingesting telemetry from IT, OT, IoT, cloud, identity and application layers.This security data fabric maps relationships between assets, correlates the likelihood of exploitation with the impact on business, and accounts for existing mitigating or compensating controls. 
This context enables prioritized remediation of business-critical risks rather than relying on generic severity.If organizations can achieve this level of continuous, comprehensive and contextualized visibility, they will be well on their way to vulnerability management 10.0.Ultimately, the goal is agentic remediation, but it is understandable that security operations may be skeptical. After all, aren’t AI agents responsible for this mess? To allay those concerns, agentic remediation should be adopted in phases.In phase one, agents will discover vulnerabilities, identify the correct fix and open change management tickets. Critically, the human remains in the loop. To continue the horse-and-buggy metaphor, phase one is like engaging autopilot in a car: the driver still maintains control.The primary barrier to adoption of agentic remediation is trust, which is built through demonstrated reliability with human oversight.In phase two, agents will begin to act directly in unambiguous scenarios where the correct response is deterministic. For example, if a developer commits a hardcoded secret to a public repository or a cloud storage bucket is misconfigured to allow public access, they may be automatically remediated. These are “known bad” conditions where the risk of inaction is immediate.A Better OutcomeWe are witnessing the emergence of AI-enabled persistent threats – from APTs to “AiPTs.” Nation-state cyberattacks have become more powerful than ever before, and the barrier to entry has never been lower. I will be discussing this further during myRSAC 2026 Keynote Address, “AI vs. AI: How to Reshape Defense Faster than Attackers Reshape Offense.”The shift toward vulnerability management in the agentic era must change how we measure success. If we want to measure mean time to remediation and verifiable risk reduction, then we must decouple discovery from remediation. 
Today, the same teams and workflows handle both discovery and remediation, creating bottlenecks.Security teams need to leverage solutions that eliminate these bottlenecks and promote greater efficiency by decreasing the number of scans and their duration, which in turn reduces network impact.The promise of vulnerability management 10.0 is that agentic systems can continuously sanitize the network, address “known bad” issues, enforce configuration baselines, rotate exposed credentials and more. Human experts can focus on the work that demands human judgment: migration initiatives, complex architectural challenges and strategic risk decisions. Threat actors have already made the jump to machine-speed attacks. The defenders who match that speed with appropriate human oversight will define resilience in the agentic era.Related:From Open Source to OpenAI: The Evolution of Third-Party Risk
The velocity gap between threat actors and defenders is also widening internally between DevOps and SecOps. AI-assisted development has shifted programming from syntax to design briefs. Tools like Cursor now generate nearly a billion lines of accepted code per day.This new dynamic recalls the same old refrain of productivity vs. security. Many cybersecurity professionals view vibe coding as nothing more than vulnerability-as-a-service. For example,Moltbook, a vibe-coded social network for OpenClaw agents, exposed 1.5 million APIs.The security gap is massive. According to SonarSource’s State of Code survey (PDF), 96% of developers do not fully trust AI-generated code, but only 48% verify it before committing it to production. A Q3 2025 report (PDF) from Armis finds that AI-generated code is embedding hardcoded secrets, misconfigured communication protocols and vulnerable software libraries at scale.The speed of AI adoption in 2026 means these metrics may quickly become as outdated as static vulnerability scanning, but the direction is clear: vibe coding generates vulnerabilities faster than human teams can audit them.If VibeOps is a new car, then vulnerability management is a horse-and-buggy.Resilience is Progress, Not PerfectionCybersecurity teams must practice resilience each day, so they are prepared when it matters most.The path forward requires building a knowledge graph for security operations, ingesting telemetry from IT, OT, IoT, cloud, identity and application layers.This security data fabric maps relationships between assets, correlates the likelihood of exploitation with the impact on business, and accounts for existing mitigating or compensating controls. 
This context enables prioritized remediation of business-critical risks rather than relying on generic severity.If organizations can achieve this level of continuous, comprehensive and contextualized visibility, they will be well on their way to vulnerability management 10.0.Ultimately, the goal is agentic remediation, but it is understandable that security operations may be skeptical. After all, aren’t AI agents responsible for this mess? To allay those concerns, agentic remediation should be adopted in phases.In phase one, agents will discover vulnerabilities, identify the correct fix and open change management tickets. Critically, the human remains in the loop. To continue the horse-and-buggy metaphor, phase one is like engaging autopilot in a car: the driver still maintains control.The primary barrier to adoption of agentic remediation is trust, which is built through demonstrated reliability with human oversight.In phase two, agents will begin to act directly in unambiguous scenarios where the correct response is deterministic. For example, if a developer commits a hardcoded secret to a public repository or a cloud storage bucket is misconfigured to allow public access, they may be automatically remediated. These are “known bad” conditions where the risk of inaction is immediate.A Better OutcomeWe are witnessing the emergence of AI-enabled persistent threats – from APTs to “AiPTs.” Nation-state cyberattacks have become more powerful than ever before, and the barrier to entry has never been lower. I will be discussing this further during myRSAC 2026 Keynote Address, “AI vs. AI: How to Reshape Defense Faster than Attackers Reshape Offense.”The shift toward vulnerability management in the agentic era must change how we measure success. If we want to measure mean time to remediation and verifiable risk reduction, then we must decouple discovery from remediation. 
Today, the same teams and workflows handle both discovery and remediation, creating bottlenecks.Security teams need to leverage solutions that eliminate these bottlenecks and promote greater efficiency by decreasing the number of scans and their duration, which in turn reduces network impact.The promise of vulnerability management 10.0 is that agentic systems can continuously sanitize the network, address “known bad” issues, enforce configuration baselines, rotate exposed credentials and more. Human experts can focus on the work that demands human judgment: migration initiatives, complex architectural challenges and strategic risk decisions. Threat actors have already made the jump to machine-speed attacks. The defenders who match that speed with appropriate human oversight will define resilience in the agentic era.Related:From Open Source to OpenAI: The Evolution of Third-Party Risk
This new dynamic recalls the same old refrain of productivity vs. security. Many cybersecurity professionals view vibe coding as nothing more than vulnerability-as-a-service. For example,Moltbook, a vibe-coded social network for OpenClaw agents, exposed 1.5 million APIs.The security gap is massive. According to SonarSource’s State of Code survey (PDF), 96% of developers do not fully trust AI-generated code, but only 48% verify it before committing it to production. A Q3 2025 report (PDF) from Armis finds that AI-generated code is embedding hardcoded secrets, misconfigured communication protocols and vulnerable software libraries at scale.The speed of AI adoption in 2026 means these metrics may quickly become as outdated as static vulnerability scanning, but the direction is clear: vibe coding generates vulnerabilities faster than human teams can audit them.If VibeOps is a new car, then vulnerability management is a horse-and-buggy.Resilience is Progress, Not PerfectionCybersecurity teams must practice resilience each day, so they are prepared when it matters most.The path forward requires building a knowledge graph for security operations, ingesting telemetry from IT, OT, IoT, cloud, identity and application layers.This security data fabric maps relationships between assets, correlates the likelihood of exploitation with the impact on business, and accounts for existing mitigating or compensating controls. This context enables prioritized remediation of business-critical risks rather than relying on generic severity.If organizations can achieve this level of continuous, comprehensive and contextualized visibility, they will be well on their way to vulnerability management 10.0.Ultimately, the goal is agentic remediation, but it is understandable that security operations may be skeptical. After all, aren’t AI agents responsible for this mess? 
To allay those concerns, agentic remediation should be adopted in phases.In phase one, agents will discover vulnerabilities, identify the correct fix and open change management tickets. Critically, the human remains in the loop. To continue the horse-and-buggy metaphor, phase one is like engaging autopilot in a car: the driver still maintains control.The primary barrier to adoption of agentic remediation is trust, which is built through demonstrated reliability with human oversight.In phase two, agents will begin to act directly in unambiguous scenarios where the correct response is deterministic. For example, if a developer commits a hardcoded secret to a public repository or a cloud storage bucket is misconfigured to allow public access, they may be automatically remediated. These are “known bad” conditions where the risk of inaction is immediate.A Better OutcomeWe are witnessing the emergence of AI-enabled persistent threats – from APTs to “AiPTs.” Nation-state cyberattacks have become more powerful than ever before, and the barrier to entry has never been lower. I will be discussing this further during myRSAC 2026 Keynote Address, “AI vs. AI: How to Reshape Defense Faster than Attackers Reshape Offense.”The shift toward vulnerability management in the agentic era must change how we measure success. If we want to measure mean time to remediation and verifiable risk reduction, then we must decouple discovery from remediation. Today, the same teams and workflows handle both discovery and remediation, creating bottlenecks.Security teams need to leverage solutions that eliminate these bottlenecks and promote greater efficiency by decreasing the number of scans and their duration, which in turn reduces network impact.The promise of vulnerability management 10.0 is that agentic systems can continuously sanitize the network, address “known bad” issues, enforce configuration baselines, rotate exposed credentials and more. 
Human experts can focus on the work that demands human judgment: migration initiatives, complex architectural challenges and strategic risk decisions. Threat actors have already made the jump to machine-speed attacks. The defenders who match that speed with appropriate human oversight will define resilience in the agentic era.
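Measuring success this way requires recording discovery and remediation timestamps independently. A minimal sketch of the mean-time-to-remediation metric argued for above, using hypothetical finding records:

```python
from datetime import datetime

def mean_time_to_remediation(findings):
    """MTTR in hours, computed only over findings that were actually
    fixed. Decoupling discovery from remediation means both
    timestamps are captured by independent workflows."""
    closed = [f for f in findings if f.get("remediated_at")]
    if not closed:
        return None  # nothing remediated yet; metric undefined
    total_seconds = sum(
        (f["remediated_at"] - f["discovered_at"]).total_seconds()
        for f in closed
    )
    return total_seconds / len(closed) / 3600

# Illustrative data: one finding fixed a day later, one still open.
findings = [
    {"discovered_at": datetime(2026, 1, 1, 9),
     "remediated_at": datetime(2026, 1, 2, 9)},
    {"discovered_at": datetime(2026, 1, 1, 9),
     "remediated_at": None},
]
print(mean_time_to_remediation(findings))  # 24.0
```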
Source: SecurityWeek