Incremental improvement is great. Nothing, especially in the world of software, is perfect when first released to the market, so iterative improvement is an expectation every customer must have. But problems begin to arise for users when incremental improvement becomes the accepted norm for long periods of time. Many experts in the vulnerability management market believe that is what's happened in the industry: vendors continuously spit out minimal, albeit important, updates such as a new report format or rebranding a scanner as an "agent". Unfortunately for all of us, when this happens over several years, security teams are slowed - one might say trapped - by their existing solutions, forced to get creative to work around them. As a big part of the vulnerability management community, we wanted to take the time to talk through these trappings and why it's NOW time to stop accepting them.
It's often hard to tell what's happening right now
In most organizations, the vulnerability management program involves a combination of two or more teams and a bevy of activities in multiple stages between asset discovery and remediation. When this process was first implemented, no one imagined it would reach this level of complexity, but as is the case with any cross-departmental system, each moving part adapted, as necessary, to account for the massive growth in workload. And yet, the technology these teams rely on hasn't changed dramatically to accommodate the way a modern, effective program operates.
When you're not the person handling one of the many tasks along this process, it can feel like you're the only one doing anything:
- If you're on the security team, actively prioritizing what was found in the latest scan and seeing vulns that you swear were sent to remediation last week, you wonder if it'll ever get done. When you ask about it, you always hear something that sounds (to your biased ears) a lot like what Val said in Tremors: “We plan ahead, that way we don't do anything right now…”
- On the other side of this process, you have a great deal of work outside of security-related patching and configuration changes, so you learn to tune out the frequent notifications because each claims to be the most critical action you could take all week. You take each ticket you're assigned and plan it appropriately alongside everything else. You know it'll get done and you don't have time to give constant updates on every item.
This feeling of “being in it alone” only worsens when you're provided outdated information and lose thirty minutes chasing down the facts. It's one thing to be handed a list of new assets for remediation every week, but when that list is inaccurate and you have to reconstruct the real one, you're hardly thrilled to start that work instead of more concrete activities. Keep in mind that frustrations around outdated information aren't limited to one party here: when the security team opens new tickets or escalates outstanding ones only to find out the patch was applied the day after a scan, they realize they've lost some of their coworkers' trust. And when you're that frustrated, you don't care that the technology was at fault.
Not knowing if you're vulnerable to an attack makes waiting for the results excruciating
A major reason outdated information is so often used is the regular “cascade of waiting for results.” The most extreme version of this waiting window plays out during the current trend of announcing 0-days with a cool logo, a marketing-approved name, and an immediate Twitter storm, which triggers the following sequence of waiting events:
You read about it on Twitter --> check your vendor's blogs to see what they've said --> wait for the email update to arrive --> wait until your next scan window --> wait until the scan completes
It isn't until the end of this InfoSec version of the Jupiter Ascending scene - the one where Advocate Bob goes from room to room to confirm Jupiter's gene sequence - that you get the chance to review the results; even he, a being designed for bureaucracy, is visibly frustrated by the end. Then, the right member of the team pulls the report and writes the necessary details into a ticket before the waiting starts once again. You wait until the next scan completes to see if this new headline-grabbing vuln has been eradicated before the executive team next meets, since the one security question splashed across every newspaper is sure to be raised.
Small new injections can lead to immediate confusion and tearing up the plan
It's this stage between scan results and a confirmed remediation that has had the least support from technology to date. It's bad enough that teams have to track progress for thousands of actions with spreadsheets, but that only covers the ideal scenario, where newly discovered exposures can be resolved after those already assigned. Your team probably operates more often in a world where some new vulnerabilities take precedence over what you knew the week before. After all, what good is a live view of your exposure surface area if the owner of the master remediation spreadsheet is constantly rewriting the plan until he wants to tear it all down like everyone's favorite Burger Shack employee in Harold & Kumar Go To White Castle?
There is so much activity between the moment a vulnerability is discovered and the moment it's effectively mitigated that security professionals typically have to list Microsoft Excel skills on their resumes to qualify for a job. That may have been a “good enough” solution for the first few years, but spreadsheets just don't suffice for a workflow in which injections are the norm. You wouldn't expect your software development team to track every task this way, so don't accept it for the security team, whose plan changes far more often.
If new risks are typical, your technology needs to take them in stride
Why can vulnerability management programs be as painful as described above? There are multiple reasons, but most of them come from one root cause: the process was built around limitations in the technology of yesterday. Let's go through a quick list:
- Passive and continuous scanning consumed too much bandwidth, so scan windows were set for times when they wouldn't impact productivity
- Agents evoked visions of management nightmares and frozen endpoints from the antivirus era, so new approaches to agents were largely ignored
- Present-day processing and analytics technologies couldn't be added to legacy solutions without demanding more hardware, so static reports were the only way to explore results
- The results were written in the language of CVEs, exploits, and CIS benchmarks, so everything had to be habitually translated for the IT department's tickets and workflow
Security teams need to push for better. Better technologies. Better approaches. Better support for today's reality.
Nexpose Now is the culmination of years of conversations with our customers, ranging from on-site interviews about their daily annoyances to clickable prototypes and the longest, most iterative beta programs in Rapid7's history. It started when we launched Adaptive Security to take you from discovering systems to being informed of their exposure as soon as they come online. It now extends to watching live dashboards update the moment a remote laptop across the globe installs vulnerable software (with the agent technology we first released with InsightIDR, now in Limited Availability for Nexpose) and tracking remediation along its entire path, from assignment through fix, with our new Remediation Workflow (Beta).
While you're here, go check out what we've done with Nexpose Now.