So how should organizations and teams respond? Ironically, by treating AI assistants like those junior developers: full of productive and creative potential, but in need of careful oversight. That oversight should serve as an indispensable component of an overall risk management strategy, one that blends observability, verified developer security skills and benchmarking through the following recommended practices.

Establishing rules. Guardrails benefit development teams as they seek to observe and identify patterns when reviewing, testing and reworking AI-assisted code for inconsistencies and errors. Team members must commit to standard rule sets and to thorough code review as a non-negotiable part of their jobs, with the understanding that their human expertise serves as the first line of defense. This will help them stay grounded while distinguishing AI's value (greater efficiency and capacity for breakthroughs) from its potential for harm (failure points and unnecessary risk).

Investing in continuous upskilling and learning. In the interest of optimal code review, with teams readily able to discover and fix flaws as they appear, organizations should support hands-on training opportunities that align with the Secure by Design initiative from the Cybersecurity and Infrastructure Security Agency (CISA). Simply stated, Secure by Design treats cyber defense as a core business requirement rather than a mere technical feature or, worse, an afterthought. The most useful training will include hands-on sessions built around real-life scenarios developers routinely encounter. From there, organizations can implement benchmarking to gauge individual members' security maturity and identify the gaps that must be addressed.

Redefining AI tool assessments. No two tools are the same. Many will crank out usable code quickly, but without the nuance needed to comply with specific cyber defense standards, conventions and policies. Because of this, developers should adjust assessments so every LLM is examined using quantitative metrics, real-world performance in pilot programs and alignment with their organization's unique requirements. Ideally, comprehensive assessments will lead to what we can call "trust scores," which combine the evaluation of tool usage, vulnerability data and secure coding skills to reveal how these products and teams are affecting SDLC risk.

In the SDLC, there should be no shortcuts. Developers must view AI as a collaborator to be closely monitored, rather than an autonomous entity to be unleashed. Without such a mindset, crippling tech debt is inevitable. That's why organizations have to work with teams to implement new rules, controls, metrics, assessments and upskilling. With this, they will best position themselves to minimize tech debt and mitigate risk, while taking advantage of all the benefits AI brings.
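To make the trust-score idea concrete, here is a minimal sketch of how the three ingredients named above (tool usage, vulnerability data, secure coding skills) might be blended into a single score. The signal names, normalization and weights are hypothetical assumptions for illustration, not a standard from the article; any real implementation would tune them to the organization's own data.

```python
# Hypothetical "trust score" sketch: blend normalized signals about an
# AI coding tool and the team using it into a single 0-100 score.
# Signal names and weights are illustrative assumptions, not a standard.

def trust_score(acceptance_rate: float,
                vulns_per_kloc: float,
                secure_coding_score: float,
                max_vulns_per_kloc: float = 10.0) -> float:
    """Combine tool usage, vulnerability data and secure-coding skill.

    acceptance_rate:      share of AI suggestions kept after review (0-1)
    vulns_per_kloc:       vulnerabilities found per 1,000 lines of AI code
    secure_coding_score:  team's secure-coding benchmark result (0-1)
    """
    # Normalize vulnerability findings so fewer findings score higher.
    vuln_signal = max(0.0, 1.0 - vulns_per_kloc / max_vulns_per_kloc)
    # Weighted blend; the weights are assumptions to be tuned per org.
    weights = {"usage": 0.3, "vulns": 0.4, "skills": 0.3}
    score = (weights["usage"] * acceptance_rate
             + weights["vulns"] * vuln_signal
             + weights["skills"] * secure_coding_score)
    return round(score * 100, 1)

print(trust_score(0.7, 2.5, 0.8))  # prints 75.0
```

The value of such a score is less the exact number than the trend: tracked per tool and per team over time, it surfaces whether AI-assisted output is raising or lowering SDLC risk.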

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations.

Source: SecurityWeek