
AI Is Changing Vulnerability Discovery, and Your Software Supply Chain Strategy Has to Change With It

Last updated on Apr 23, 2026

Wade Woolwine is Senior Director, Product Security at Rapid7.

The headlines around Glasswing have focused on how quickly AI can surface vulnerabilities, which has naturally caught the attention of security leaders. In my conversations with teams and customers, the more useful discussion has been about what that speed means in practice for business protection, especially across open source risk, dependency choices, and software supply chain resilience. The deeper issue for security leaders sits elsewhere. 

Software risk is becoming harder to manage across the full lifecycle, especially in open source dependencies, build pipelines, developer environments, and the operational processes that sit between disclosure and remediation. When vulnerabilities can be found faster and at greater depth, security teams need more than another source of findings. They need a stronger way to understand what they run, what they trust, what they can patch quickly, and where a single weak dependency can create disproportionate risk.

Faster discovery makes software supply chain resilience a more immediate leadership issue. CISOs need a clearer view of how dependencies are chosen, monitored, validated, and governed across production, build, and developer environments, especially as open source remains essential to modern software development.

Organizations already struggle to absorb vulnerability disclosures at the pace they arrive. When discovery gets faster, the operational gap widens between knowing there is a problem and being able to do something useful about it. That gap is especially serious in the software supply chain, where a single dependency can introduce risk into build systems, production workloads, developer endpoints, and the tools used to secure them.

This is why I would frame AI-driven vulnerability discovery risk as a lifecycle challenge. The pressure does not sit in one place, but across inventory, dependency decisions, threat intelligence, patching discipline, and validation – with people, process, and visibility shaping how well an organization can respond. Technology matters, but it cannot compensate for a weak operating model underneath it.

Open source still matters. Dependency choices matter more.

Open source remains essential to modern software development because it helps teams move faster and get products to market without rebuilding common functionality from scratch. Walking away from it is not a realistic answer; the better response is to be more deliberate about where and how third-party code enters the environment.

Open source has always involved a trade-off between speed, efficiency, flexibility, and inherited risk, and that trade-off becomes harder to manage as AI makes code review deeper and faster. More flaws and supply chain compromises will likely be found in packages that teams have trusted for years, including transitive dependencies most developers did not knowingly choose. One only needs to look back a few weeks: the widely used Axios package suffered a supply chain compromise that bundled a Remote Access Trojan (RAT) designed to steal secrets. That raises the value of understanding which dependencies are essential, which ones can be removed, which ones pull in large chains of transitives, and which ones are maintained by too few people to inspire confidence.

That work starts with a more disciplined question than “Is there a package that does this?” It starts with “Do we need this dependency, and do we understand the risk that comes with it?” The safest dependency is often the one that never enters the environment in the first place.

Why inventory has to go deeper than package lists

Supply chain resilience begins with knowing what you are actually running, which sounds straightforward until a critical disclosure lands in a package no one realized was in the environment three layers deep. Dependency graphs are deeper than most teams think, and transitive risk is where a lot of operational pain begins. A package chosen directly by a developer may bring in dozens of additional packages, each with its own maintainers, release cadence, security posture, and potential failure points.
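The fan-out described above is easy to make concrete with a short walk over a dependency graph. The package names and edges below are invented for illustration; in practice the graph would come from a lockfile or a tool such as `npm ls` or `pip-compile`:

```python
from collections import deque

# Hypothetical dependency graph: package -> packages it pulls in.
GRAPH = {
    "web-framework": ["http-core", "templating"],
    "http-core": ["socket-utils", "tls-shim"],
    "templating": ["sandbox", "socket-utils"],
    "socket-utils": [],
    "tls-shim": ["crypto-primitives"],
    "sandbox": [],
    "crypto-primitives": [],
}

def transitive_closure(direct: list[str]) -> set[str]:
    """Return every package reachable from the direct dependencies."""
    seen: set[str] = set()
    queue = deque(direct)
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(GRAPH.get(pkg, []))
    return seen

# One direct choice ("web-framework") silently brings in six more
# packages, each with its own maintainers and release cadence.
print(sorted(transitive_closure(["web-framework"])))
```

Even in this toy graph, a single deliberate choice carries six packages no developer explicitly picked, which is exactly where "three layers deep" surprises come from.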

A mature approach to inventory needs to move beyond a static package list, because CISOs need confidence in three views at once: what is declared in source, what is resolved and built, and what is actually running in production. Those views often drift apart over time, which means a package can be patched in source and still remain unpatched in a deployed container or runtime environment. An SBOM on its own will not close that gap; continuous, usable inventory will.
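The drift between those three views can be checked mechanically. The sketch below uses illustrative package data; real inputs would come from a manifest, an SBOM or lockfile, and a runtime scan respectively:

```python
def find_drift(declared: dict, built: dict, running: dict) -> list[str]:
    """Compare the three inventory views and report where they disagree.

    Each argument maps package name -> version for one view of the
    environment (source manifest, build output, runtime scan).
    """
    issues = []
    for pkg, version in declared.items():
        if built.get(pkg) != version:
            issues.append(f"{pkg}: declared {version}, built {built.get(pkg)}")
        if running.get(pkg) != version:
            issues.append(f"{pkg}: declared {version}, running {running.get(pkg)}")
    return issues

# A package patched in source (requests 2.32.0) but still old in the
# deployed container is exactly the drift described above.
declared = {"requests": "2.32.0", "urllib3": "2.2.1"}
built    = {"requests": "2.32.0", "urllib3": "2.2.1"}
running  = {"requests": "2.31.0", "urllib3": "2.2.1"}
print(find_drift(declared, built, running))
```

A check like this, run continuously rather than at audit time, is what turns an SBOM from a document into usable inventory.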

That inventory also needs clear ownership attached to it, because the moment a critical dependency is identified, someone has to decide what happens next, coordinate the change, and absorb the operational consequences. Security teams cannot do that well if responsibility is unclear, which is why ownership needs to be treated as part of resilience rather than an administrative detail.

Build pipelines and developer environments deserve the same scrutiny as production

Supply chain conversations still tend to start with production systems, even though recent incidents have shown how quickly compromise can move through the build layer, developer tooling, or the security tooling inside the pipeline itself. Those environments hold code, secrets, and trust relationships that attackers know how to exploit, while developer workstations often carry a rich mix of credentials and elevated privileges because speed matters to the business. Build systems are predictable and privileged, which makes them both valuable and vulnerable, but also easier to monitor.

Seeing those layers as part of the same attack surface means asking harder questions about how code enters the build, how package updates are governed, how actions and dependencies are pinned, what secrets exist in CI/CD, and what controls are in place on developer endpoints to detect anomalous behavior or stop high-risk package activity before it goes unnoticed.

You can gauge the maturity of the operating model with the answers to a few basic questions:

  • How tightly are dependencies controlled in CI?

  • How are package lifecycle scripts governed?

  • What secrets exist in CI/CD, and what protections surround them?

  • What visibility exists into anomalous behavior on developer endpoints?

  • How would the team detect or prevent high-risk package activity before it spreads?

If those answers are unclear, important parts of the model are still missing.
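The first of those questions, how tightly dependencies are controlled in CI, can be partially enforced with a simple gate. This is a minimal sketch of one such policy (exact version pins only) for Python-style requirements lines; the pattern and policy are illustrative, not a standard tool:

```python
import re

def unpinned(requirements: list[str]) -> list[str]:
    """Flag requirement lines that are not pinned to an exact version.

    A minimal CI gate: only `pkg==x.y.z` lines pass. Hash-pinning
    (`--hash=...`) is stricter still and worth layering on top.
    """
    pinned = re.compile(r"^[A-Za-z0-9_.\-]+==\d")
    return [line for line in requirements
            if line and not line.startswith("#") and not pinned.match(line)]

reqs = [
    "requests==2.32.0",   # exact pin: passes
    "urllib3>=2.0",       # version range: flagged
    "flask",              # no pin at all: flagged
]
print(unpinned(reqs))
```

Failing the build when this list is non-empty is a small change, but it forces dependency updates through a deliberate, reviewable step instead of drifting in silently.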

Why prioritization matters more as scanning accelerates

When software risk rises, the instinct is often to add another scanner because more visibility feels like progress. What matters more over time, though, is how well teams can prioritize the findings that follow, assign them to the right owner, choose the right mitigation, and prove that exposure actually went down. Broader scanning and faster discovery mostly add to the pile unless the operating model behind them is strong enough to turn findings into action. Feed more issues into a process that is already stretched and the backlog grows, priorities become harder to sort, and remediation slows in the places where speed matters most. The organizations that come through this period well will be the ones that treat supply chain resilience as a systems problem, with stronger intake, clearer governance, better intelligence, and faster paths from alert to action.

What stronger software supply chain resilience looks like in practice

A stronger response starts with a deeper inventory of dependencies across source, build, and runtime, so teams can see both direct and transitive packages and connect them back to real environments and real owners. Once that picture is in place, intelligence monitoring becomes far more useful when it runs continuously against credible signals on vulnerabilities, package risk, maintainer health, end-of-life software, and unusual changes in dependency behavior.

The same level of care needs to carry through into dependency governance, where better decisions depend on asking whether a new package is necessary, how much transitive risk it introduces, whether its maintenance model is healthy, and what policy governs its path into production. Build and developer controls belong in that same conversation, because version pinning, private registries, secret handling, script restrictions, immutable builds, ephemeral runners, and stronger endpoint monitoring all reduce the attack surface around the software supply chain.

Monitoring threat intelligence for new vulnerabilities and compromised packages, and having a well-defined, practiced process for scoping and remediating emerging threats, becomes critical. Your supply chain vulnerability and compromise response should be practiced, just like your incident response plan, through tabletop exercises and simulated threat events. You don’t want to wait until the house is on fire to learn how to execute an effective response.

Similarly, Engineering, DevOps, and Security teams should collaborate on establishing a trust and reputation scoring mechanism for supply chain dependencies. Being able to evaluate the speed of response, the transparency of communication and updates, and the ultimate resolution of a vulnerability or compromise speaks volumes about how much you can trust the maintainers of the software you depend on. The OpenSSF Scorecard project offers a great place to start evaluating the open source packages you’re already using.
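As a sketch of what such a scoring mechanism might look like, the signal names and weights below are invented for illustration (they are not a standard, and OpenSSF Scorecard’s vetted checks are a better production starting point):

```python
def trust_score(signals: dict) -> float:
    """Combine maintainer-health signals (each 0.0-1.0) into a 0-100 score.

    Illustrative weights only: teams should tune these to their own
    risk appetite, or adopt OpenSSF Scorecard's checks instead.
    """
    weights = {
        "maintainers": 0.25,        # more than one active maintainer
        "release_cadence": 0.20,    # regular releases in the last year
        "advisory_response": 0.30,  # past CVEs fixed promptly
        "transparency": 0.25,       # changelogs, signed releases
    }
    return 100 * sum(weights[k] * signals.get(k, 0.0) for k in weights)

# A package with one maintainer and slow advisory response scores low
# even if its release cadence and changelogs look healthy.
print(trust_score({"maintainers": 0.3, "release_cadence": 0.8,
                   "advisory_response": 0.4, "transparency": 0.9}))
```

The value of a score like this is less the number itself than the conversation it forces: each low signal is a concrete question to put to the maintainers or to your own governance process.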

Organizations should also have a fallback plan for when a security patch is not available. Options include adopting an alternative open source package with similar functionality, applying compensating mitigations such as application firewalling, or forking the project and contributing a security patch back to the community.

Validation closes the loop by showing whether the artifact came from where it was supposed to, whether the package has drifted in unexpected ways, and whether the mitigations applied are reducing live risk rather than simply documenting the process.
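A minimal version of that first validation check, confirming an artifact matches the digest recorded at build time, can be sketched in a few lines. The data here is hypothetical, and real pipelines layer signatures and attestations (for example, Sigstore) on top of plain hashes:

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Check a built artifact against the digest recorded at build time.

    A stand-in for provenance validation: a mismatch means the artifact
    drifted somewhere between build and deployment.
    """
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"example release tarball bytes"
recorded = hashlib.sha256(artifact).hexdigest()   # captured at build time

assert verify_artifact(artifact, recorded)             # unmodified: passes
assert not verify_artifact(artifact + b"!", recorded)  # drifted: fails
```

The same pattern, comparing what was produced against what was recorded, generalizes to container digests and signed provenance statements.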

How CISOs should think about the next 12 months

The strain on security teams is only growing, and the potential for AI to relieve some of that pressure is understandably compelling, especially when boards, CEOs, and CFOs are asking how the organization plans to adopt it. That makes this a leadership question as much as a technology one. CISOs need a clear point of view on where AI can genuinely improve resilience, where it still introduces too much uncertainty, and how to explain those choices in business terms.

If software engineering teams are already adopting AI-assisted development, security teams should be part of that conversation early, especially around dependency management. I have seen teams begin connecting AI coding agents to vulnerability management workflows so those agents can interpret vulnerabilities found in the code base, assess reachability with more context, help plan remediation, and validate updates much faster than traditional handoffs usually allow. Used well, that can reduce drag across the workflow and help teams move faster on classes of issues that are currently slowing them down.

Getting there safely still depends on the foundation underneath it. A more resilient path starts with a clearer picture of the environment and a more complete inventory of dependencies across source, build, and runtime. From there, ownership needs to be explicit, threat and vulnerability intelligence needs to be embedded into how the organization prioritizes, and dependency sprawl needs to be reduced with more discipline around what actually enters production. The same mindset should carry through to the build layer and developer endpoints, where tighter controls and better visibility help reduce unnecessary exposure, while faster and more repeatable paths from disclosure to action make it easier for teams to respond before risk compounds.

That foundation will matter regardless of which AI model or platform becomes dominant six or twelve months from now. It will also matter if the next wave of AI makes backlog reduction, lower-tier remediation, or patch validation more practical. Organizations that know what they run and how they operate will be in a much better position to adopt those capabilities with intent.

The shift security leaders should make now

Security in an AI-accelerated world needs to be managed as a systems challenge, with supply chain resilience shaped by how well organizations connect software composition, exposure visibility, dependency governance, threat intelligence, build integrity, endpoint controls, remediation workflows, and validation. When those layers are treated separately, gaps open quickly; when they are tied together through a stronger operating model, teams are in a much better position to absorb faster discovery without losing control of the response.

For CISOs, that means continuing to use open source with a more deliberate view of dependency risk, reducing unnecessary packages where possible, knowing what is running and who owns it, and monitoring threat and vulnerability intelligence with enough discipline to act before the queue overwhelms the team. It also means paying closer attention to the attack surface across production, build, and developer environments, while treating AI as something that will amplify both the strengths and the weaknesses already present in the program. Faster discovery is here, and the organizations that handle it best will be the ones that can respond with the same level of discipline.
