Last updated at Tue, 08 Jan 2019 01:56:14 GMT
Happy National Cybersecurity Awareness month, everyone! This festive blog post was drafted entirely in a pumpkin patch. I am lost here and it is cold.
One of the major contributors to increasing and improving cybersecurity awareness is research that identifies vulnerabilities in technology and discloses them to the technology manufacturers and users so they can understand and mitigate the risk. This process is called coordinated vulnerability disclosure and handling ("CVD processes" for short), and is something Rapid7 has commented on many-a-time. If you're unfamiliar with CVD processes and why they're important for both organizational security and researchers, please see this previous post.
In this post, we aim to distinguish between three broad flavors of CVD processes based on authorization, incentives, and resources required. We also urge wider adoption of foundational processes before moving to more advanced and resource-intensive processes. Here are three general categories of CVD processes:
1. Unsolicited: The organization's CVD process includes a channel for receiving unsolicited vulnerability disclosures and resources to respond to the disclosures, but the organization does not authorize or incentivize researchers to look for security vulnerabilities;
2. Authorized: The organization's CVD process does authorize researchers to look for security vulnerabilities, but does not offer rewards to researchers; and
3. Incentivized: The organization's CVD process authorizes and rewards researchers to look for and disclose vulnerabilities—"bug bounties."
What prompts this post?
First, I am waiting to be rescued from this nightmarish labyrinth of pumpkin vines and corn stalks before hypothermia sets in. Second, CVD processes keep coming up in policy discussions without recognition or awareness of the basics, which may be an impediment to wider adoption of CVD processes.
Policymakers are pushing agencies to run before they walk when it comes to CVD processes. Following the success of "Hack the Pentagon," legislators now want to "Hack DHS" and "Hack the State Dept."—requiring, by law, that particular government agencies invite researchers to find vulnerabilities in their systems. While this might ultimately be a useful exercise, it also risks rushing individual agencies to more complex and resource-intensive processes while failing to ensure the fundamentals are in place for all agencies. The mighty Katie Moussouris has spoken out repeatedly on this very issue, and her work on the levels of maturity for CVD processes has heavily influenced our views.
Two recent government reports on CVD demonstrate strong understanding of the issues. First, the US Dept. of Justice published its helpful Framework for a Vulnerability Disclosure Program for Online Systems, a detailed resource for organizations establishing a CVD process that authorizes research into certain systems. Second, the US House Energy & Commerce Committee issued The Criticality of Coordinated Disclosure In Modern Cybersecurity, a great report that explicitly notes the crucial distinction between authorized disclosures and incentivized disclosures (i.e., bug bounties). However, neither report delves into the foundational CVD processes and resource requirement issues highlighted in this post.
Foundational CVD process: Communication and assessment
At its most foundational, an organization's CVD process can be 1) a public point of contact for vulnerability disclosures, such as a dedicated security@ email address, and 2) a process and resources for reviewing and responding to disclosures, including mitigation and communicating with external stakeholders such as the vulnerability reporter. There is no explicit authorization for researchers to probe the organization, and therefore no guarantee of legal liability protection for researchers that discover vulnerabilities. This is how Rapid7's own CVD process is structured, and we believe this type of process should be a regular feature of organizations' cybersecurity programs.
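One lightweight way to advertise such a point of contact is the proposed security.txt convention: a plain-text file served at a well-known path on the organization's website that tells researchers where to send reports. The sketch below is illustrative only; the domain, addresses, and URLs are placeholders, not a recommendation of specific values:

```
# Served at https://example.com/.well-known/security.txt
# (placeholder domain, address, and URLs for illustration)
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
```

The Policy field can point to the organization's published CVD process, so a researcher who finds the file also finds the expectations for reporting.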
The vast majority of organizations (even among large global companies) and government agencies do not have a public-facing means for external parties to report security vulnerabilities. So there is clearly a gap in the adoption of CVD processes into organizations' cybersecurity programs, even at this fundamental level. And organizations with limited security resources and expertise—which is most organizations—may simply not be prepared for a heavier volume of vulnerability disclosures.
This foundational CVD setup keeps an organization's overhead manageable while still providing a clear channel for vulnerability disclosures to the right internal staff. One benefit for security researchers, despite not having clear liability protection, is that using the channel demonstrates they are acting in good faith, and communicating with personnel tasked with handling security issues should help minimize misunderstandings, conflict, and ignored reports.
One might consider this flavor of CVD process to be "basic" since the lack of incentives likely results in fewer total disclosures, but we should not underestimate the resources required to safely receive, assess, mitigate, and communicate about unsolicited security vulnerability disclosures. If broad adoption of CVD processes is the goal, resource constraints must be considered.
Next level: Authorized research
A more advanced step for organizations is to overtly authorize researchers to look for vulnerabilities in specified assets. This is the model of CVD process that DHS would be required to establish agency-wide under legislation (H.R. 6735) that recently passed the US House of Representatives, and now awaits passage in the Senate. This is also the CVD flavor thoroughly described in the Dept. of Justice’s report. The big benefit of authorization for researchers is clearer legal protection from anti-hacking laws like the Computer Fraud and Abuse Act, so long as the researcher stays within the bounds of the authorization.
This CVD approach also requires more resources from the organization to establish the program and deal with disclosures. Authorizing research likely boosts the number of researchers probing the organization's assets, thereby boosting the number of vulnerability disclosures to the organization and requiring more resources to assess and address them. In addition, as the Dept. of Justice's CVD report details, organizations must carefully identify which specific assets researchers are authorized to probe, specify which types of techniques (e.g., phishing, DDoS) are or are not permitted, and clearly outline the responsibilities and expectations for researcher conduct.
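To make that scoping concrete, the scope section of an authorization policy might look something like the following sketch. This is a hypothetical excerpt under assumed asset names (example.com and its app are placeholders), not a template drawn from the DOJ framework:

```
In scope:
  - *.example.com web applications
  - The Example mobile app (iOS and Android)

Out of scope:
  - Third-party hosted services (e.g., the support portal)
  - Denial-of-service testing of any kind
  - Social engineering or phishing of employees

Researcher expectations:
  - Report findings to security@example.com
  - Do not access, modify, or exfiltrate user data
  - Allow a reasonable remediation window before public disclosure
```

Spelling out both the in-scope assets and the prohibited techniques is what gives researchers a meaningful boundary for the legal protection authorization provides.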
And what about those assets that the organization does not identify as authorized for security research? The organization should still establish the more foundational model above and prepare to receive unsolicited vulnerability disclosures that apply to the organization’s other assets.
For organizations with sufficient familiarity and resources, this more intensive authorized CVD process may be a good fit and uncover more hidden issues that the organization can resolve. For organizations that are not sufficiently prepared, the volume of disclosures and degree of planning required might be overwhelming. As Moussouris said, "If they can't handle known vulnerabilities, how are they going to fare when the focus of all these hackers is going to pile on them?"
Yes, the hackers will close in like rural darkness in late October.
Final boss: Incentivizing research
A third flavor goes beyond mere authorization and motivates research with some type of prize for finding and disclosing vulnerabilities: hoodies, shout-outs in security bulletins, or cash money. "Bug bounties" fall under this category. Some legislation, such as the "Hack DHS Act," would require DHS to adopt a bug bounty on a pilot basis, though this legislation has not made as much progress as H.R. 6735, which uses the authorization model described above.
This model adds a reward system to the complexities and resource requirements of authorization. As you might anticipate, an even greater volume of disclosures is likely as researchers compete for those sweet hoodies, those fine shout-outs. The greater the volume of disclosures, the more resources required to evaluate and respond to them—and if an organization incentivizes research but fails to follow through on the rewards, the organization runs a greater risk of reputational harm than not having an incentives program in the first place.
Many incentives programs end after a designated period. Like the authorized CVD process above, organizations will still need a foundational CVD process in place to receive unsolicited disclosures about remaining vulnerabilities. Just as a bug bounty is not a replacement for a comprehensive organizational cybersecurity program, a bug bounty does not replace the need for a foundational CVD process for systems and assets outside the scope of the bug bounty.
Driving more adoption of fundamental processes
Rapid7 (and evidently the House Energy & Commerce Committee) believes a CVD process should be a standard component of security programs in companies and government agencies. A great place to start would be establishing foundational CVD processes in federal government agencies. Federal agencies must consider it anyway: foundational CVD processes are now part of the NIST Cybersecurity Framework, and the White House has directed agencies to use the NIST Framework to manage their cyber risk. While more advanced CVD processes can certainly be useful for organizations prepared for the workload, gaining widespread adoption of foundational processes should be a higher policy priority than spotty adoption of more advanced CVD processes.
And now I must go. Wispy light is quietly filtering through the trees and corn stalks of this lonely corner of pumpkin patch. Search party or apparition, I do not know, but I am compelled to unite with it. Happy Hallow-Cyber-Awareness-Ween...