Anthropic’s Project Glasswing has sparked plenty of discussion about what AI might soon do for vulnerability discovery, but the more useful question for most security teams is how to prepare for what comes next, and, more importantly, how to seize the opportunity it creates.
As we wrote in our earlier blog, What Project Glasswing Means for Security Leaders, AI is becoming more capable of finding software flaws. The pressure that follows lands on the teams responsible for deciding what matters, validating risk, assigning ownership, and getting remediation moving across environments that were already hard to manage. We believe that the organizations that will benefit most from the next wave of AI will be the ones that understand their environment well enough to use these emerging AI models with intent, rather than layering them onto immature processes and hoping that speed alone will solve the backlog.
What this moment means for security teams
The number of publicly tracked software vulnerabilities has broken records almost every year over the last decade, while supply chain risk has continued to rise. Most teams were already feeling the strain of more findings than they could process cleanly. The Common Vulnerabilities and Exposures (CVE) program, the standard system for identifying and tracking known vulnerabilities, recorded 48,185 disclosures in 2025, a 20% increase over 2024, with roughly 40% of those disclosed vulnerabilities rated high or critical.
By early 2026, that pace was already working out to well over a hundred new CVEs per day. That tells you something important about the current environment: the challenge is not a lack of findings, but converting a growing stream of findings into measurable risk reduction.
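As a rough sanity check on that rate, the figures cited above can be worked through directly. This is a back-of-the-envelope sketch using only the numbers quoted in this post; the flat 365-day divisor is an approximation.

```python
# Back-of-the-envelope check on the CVE disclosure figures cited above.
cves_2025 = 48_185
cves_2024_implied = cves_2025 / 1.20    # implied by the ~20% year-over-year increase
high_or_critical = cves_2025 * 0.40     # ~40% rated high or critical

per_day = cves_2025 / 365
print(f"~{per_day:.0f} new CVEs per day")               # roughly 132 per day
print(f"~{high_or_critical:,.0f} rated high/critical")  # roughly 19,274 in 2025
print(f"~{cves_2024_implied:,.0f} implied 2024 total")  # roughly 40,154
```

Even at the conservative end, that is more than a hundred new disclosures every day that some team, somewhere, has to contextualize.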
The reality is that very few organizations are going to hand a model free rein over their most sensitive environments the minute those capabilities become more widely available. Trust will be built in stages: early adoption is much more likely to focus on backlog reduction, triage support, patch testing, and repetitive lower-tier remediation work that consumes time without carrying the same level of operational risk as the most critical systems in the business. That is a more realistic starting point, and it leads to a more useful question. Before teams apply AI more broadly, they need to understand their environment well enough to use it intentionally.
Establish the foundation before layering in AI
The promise from Project Glasswing and almost every other AI-powered security initiative is quite similar: leverage AI to identify patterns, summarize risk, suggest fixes, and speed up repetitive work. Regardless of technology, success still depends on how well an organization understands its environment, the context around each finding, and the process used to act on it.
A model can generate more output than a team ever could on its own, but that output becomes noise if the organization cannot answer basic questions about scope, ownership, criticality, and exposure. Teams need a clear, continuously updated picture of the environment before they can decide where AI should be applied, what should remain human-led, and which parts of the backlog are safe to push through more automated workflows.
The AI landscape is already shifting fast, and it will keep shifting, which is why this moment should prompt a more preemptive and resilient strategy rather than another round of tooling hype. Chasing each new capability as it arrives will inevitably force teams to keep reorganizing around the latest announcement. A stronger path is to get the foundation right first: understand the environment, the attack paths, and the assets that matter most, and, most importantly, establish the process and the people behind those decisions. Then use AI where it meaningfully improves speed, consistency, and focus.
Why Attack Surface Management should be part of that foundation
A strong foundation starts with visibility. Security teams need a live picture of what exists in the environment, what is exposed, how assets connect to one another, and which systems carry the greatest business impact if something goes wrong. That is where Attack Surface Management becomes central. Rapid7’s approach through Surface Command is built around a continuous view of the attack surface across the digital estate, which helps teams understand where exposures sit and how they relate to internet-facing, business-critical, or otherwise high-impact systems.
That matters for AI adoption just as much as it matters for day-to-day security operations. Teams cannot apply AI strategically if they are guessing about which parts of the environment are lower priority, which assets belong to which owners, or where a newly disclosed flaw could create real business risk. A better view of the attack surface gives organizations the context they need to segment the problem properly. That makes it far easier to start with the right use cases, whether that is backlog reduction in lower-impact systems, targeted prioritization of exposed assets, or faster triage where the risk picture is already well understood.
Ownership is part of that foundation too. Remediation slows down when no one can quickly identify who owns the affected application, environment, or workflow. Security teams already lose time there today, and AI will only make that bottleneck more visible if it starts surfacing issues faster than organizations can assign them. Attack Surface Management helps turn that ambiguity into something more actionable by tying exposure to environment context and likely ownership.
How Vulnerability and Exposure Management turns visibility into action
Once the environment is understood, teams still need a way to move from findings to outcomes. That is where Vulnerability and Exposure Management becomes the operating layer that keeps the work grounded.
The biggest value here is not simply collecting more vulnerability data. It is targeted prioritization and validation. When a disclosure lands, teams need to know whether the issue affects an exposed asset, whether there is evidence of exploitation or attacker interest, whether the impacted system is business-critical, and whether existing controls already reduce some of the risk. That is the kind of context that helps organizations decide what deserves immediate attention and what can be handled through a normal remediation cycle.
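The questions above can be read as a small decision procedure. The sketch below illustrates that kind of contextual triage; every field name, threshold, and routing label here is hypothetical, invented for illustration, and not any particular product's schema or scoring model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Hypothetical shape for one vulnerability finding plus its context."""
    cve_id: str
    internet_exposed: bool       # is the affected asset reachable from outside?
    exploited_in_wild: bool      # evidence of exploitation or attacker interest
    business_critical: bool      # does the asset carry high business impact?
    compensating_controls: bool  # do existing controls already reduce the risk?

def triage(f: Finding) -> str:
    """Map a finding's context to a handling lane, per the questions above."""
    if f.exploited_in_wild and f.internet_exposed:
        return "immediate"       # exposed and under active attacker interest
    if f.business_critical and not f.compensating_controls:
        return "expedited"       # high impact with nothing mitigating it yet
    return "normal-cycle"        # handle in the standard remediation cadence

# A finding on an exposed, actively exploited, business-critical asset:
print(triage(Finding("CVE-0000-0000", True, True, True, False)))  # immediate
```

The point is not the specific rules, which any real program would tune, but that each branch depends on context (exposure, exploitation evidence, criticality, controls) rather than on a raw severity score alone.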
This is where artificial intelligence can help move remediation forward faster. Instead of asking teams to manually connect exploit signals, asset criticality, and vulnerability intelligence on their own, AI can distill that context directly in the remediation workflow. That makes it easier to understand why an issue matters, what the likely impact is, and what to do next, which shortens the gap between discovery and a confident decision on how to respond.
We expect most organizations to use AI to assist with, or in some cases take over, lower-tier triage, backlog cleanup, summary generation, and patch support in areas where the workflow is already established and the blast radius is more manageable. Human experts still stay closest to the most critical business logic, the most sensitive environments, and the most complex remediation paths. That is a practical adoption model, and it only works when the organization already has enough structure in place to know where those boundaries are.
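The boundary described above can be sketched as a simple routing rule. This is a minimal illustration of the adoption model, not an implementation; the task names and tier labels are assumptions made up for the example.

```python
# Work categories the text describes as realistic early candidates for AI
# assistance; the set and its labels are hypothetical.
AI_ELIGIBLE_TASKS = {"lower-tier triage", "backlog cleanup",
                     "summary generation", "patch support"}

def assign(task: str, asset_tier: str) -> str:
    """Route work to AI assistance or a human analyst.

    Encodes the boundary from the adoption model above: AI absorbs
    established, lower-blast-radius work, while humans stay closest to
    critical systems and complex remediation paths.
    """
    if asset_tier == "critical":
        return "human"           # sensitive environments stay human-led
    if task in AI_ELIGIBLE_TASKS:
        return "ai-assisted"
    return "human"               # anything unrecognized defaults to a person

print(assign("backlog cleanup", "standard"))  # ai-assisted
print(assign("backlog cleanup", "critical"))  # human
```

Note that the routing only works if `asset_tier` is actually known, which is exactly the structural prerequisite the paragraph above describes.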
Curated vulnerability intelligence changes the quality of decisions
That kind of deliberate adoption only works when teams can make better decisions, faster. Security teams need more than severity scores and a long list of CVEs. They need enough context to understand what matters, what can wait, and where action will reduce real risk fastest. As Rapid7 outlined in The Power of Curated Vulnerability Intelligence, the goal is to identify the vulnerabilities that actually matter and give teams enough context to act with confidence.
That intelligence provides a form of validation that most teams need badly as disclosure volume rises. It helps answer whether a finding is tied to active attacker interest, whether proof-of-concept activity is public, whether the asset is exposed, and whether delaying a patch creates unacceptable risk. It also supports the decisions that happen in the gap between discovery and full remediation. When a patch is delayed because of change controls, testing constraints, or lack of a vendor fix, teams still need to reduce exposure. Curated intelligence helps them decide whether to use segmentation, access restrictions, configuration changes, added monitoring, or virtual patching while the longer-term fix is being worked through.
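The gap-filling decision described above can also be sketched. The function below is a hypothetical illustration of choosing interim mitigations while a patch is delayed; the inputs, thresholds, and returned options mirror the measures named in the text but are otherwise invented.

```python
def interim_mitigations(patch_available: bool, delay_days: int,
                        internet_exposed: bool) -> list[str]:
    """Pick stopgap measures for the gap between discovery and remediation.

    A hypothetical decision sketch: the candidate measures are the ones
    named above (segmentation, access restrictions, added monitoring,
    virtual patching); the thresholds are illustrative only.
    """
    actions: list[str] = []
    if not patch_available or delay_days > 0:
        if internet_exposed:
            # Exposed assets get exposure-reducing controls first.
            actions += ["access restrictions", "virtual patching"]
        actions += ["added monitoring"]
        if delay_days > 30:
            # Longer waits justify heavier structural measures.
            actions += ["segmentation"]
    return actions

# No vendor fix yet, two-week change window, internet-facing asset:
print(interim_mitigations(False, 14, True))
```

When a patch is available and can ship immediately, the function returns an empty list: no interim controls are needed because the long-term fix is not delayed.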
That is one of the clearest ways Rapid7 helps customers move from data to outcomes. Intelligence is fused into the workflow so teams can prioritize with more precision and validate their actions against real threat context, not just generalized scores.
How runtime and remediation fit into the broader AI story
There is another part of this story that matters as organizations think more seriously about AI-driven security operations. As AI shapes how teams handle exposures earlier in the lifecycle, the context of how applications actually behave at runtime matters more too.
To make that foundation complete, organizations need to look beyond static posture and bring runtime validation into the picture. When teams can identify which vulnerabilities and misconfigurations are actively exploitable in production, and map sensitive data and identity access to real-world attack paths, they get a much clearer view of actual risk. Security teams need to understand what is vulnerable, how systems behave when live, and where unusual activity may suggest a problem is moving toward exploitation. With that runtime context in place, teams can spend less time chasing theoretical vulnerabilities and more time focusing on the exposures that are actively creating risk in live environments.
That connection between exposure, intelligence, remediation, and runtime behavior is where AI starts to become genuinely useful rather than simply impressive. It supports a more intentional model of security decision-making, one that narrows the gap between what is found, what matters, and what happens next.
What security leaders should do now
This is a good time for security leaders to step back and ask a more disciplined set of questions.
Do we understand our environment well enough to direct AI toward the right problems?
Can we clearly separate higher-risk, higher-impact assets from the parts of the backlog that are mostly operational drag?
Is threat intelligence embedded in how we interpret findings, or are we still depending too heavily on raw severity?
Can we identify ownership fast enough for AI-assisted triage to result in meaningful action?
Are compensating controls part of the plan when remediation cannot happen immediately?
Those questions shape the quality of everything that follows.
Glasswing creates a real opportunity for security teams that are ready to use AI with more intention. AI can move work forward faster, reduce manual drag, and absorb classes of issues that currently consume time without improving outcomes. The teams that benefit most will not be the ones that rush to apply new models everywhere. They will be the ones that understand their environment, have a clear view of their attack surface, have mature enough workflows to apply AI where it makes sense, and can measure whether the actions taken actually reduced exposure.
Rapid7’s approach to building resilience is grounded in those same needs. Attack Surface Management provides the environmental foundation, Vulnerability and Exposure Management drives prioritization and action, curated vulnerability intelligence strengthens validation and decision-making, AI-generated remediation insights compress the time from discovery to the next step, and runtime security adds context where live behavior matters. Together, those pieces help customers build a security program that is ready for AI rather than constantly reacting to it.