This is a guest post from Art Manion, Technical Manager of the Vulnerability Analysis Team at the CERT Coordination Center (CERT/CC). CERT/CC is part of the Software Engineering Institute at Carnegie Mellon University.
October is National Cyber Security Awareness month and Rapid7 is taking this time to celebrate security research. This year, NCSAM coincides with new legal protections for security research under the DMCA and the 30th anniversary of the CFAA - a problematic law that hinders beneficial security research. Throughout the month, we will be sharing content that enhances understanding of what independent security research is, how it benefits the digital ecosystem, and the challenges that researchers face.
The CERT/CC has been doing coordinated vulnerability disclosure (CVD) since 1988. From the position of coordinator, we see the good, bad, and ugly from vendors, security researchers, and other stakeholders involved in CVD. In this post, I'm eventually going to give some advice to security researchers. But first, some background discussion about sociotechnical systems, the internet of things, and chilling effects.
While there are obvious technological aspects of the creation, discovery, and defense of vulnerabilities, I think of cybersecurity and CVD as sociotechnical systems. Measurable improvements will depend as much on effective social institutions ("stable, valued, recurring patterns of [human] behavior") as they will on technological advances. The basic CVD process itself -- discover, report, wait, publish -- is an institution concerned with socially optimal [PDF] protective behavior. This means humans making decisions, individually and in groups, with different information, incentives, beliefs, and norms. Varying opinions about "optimal" explain why, despite three decades of debate and both offensive and defensive technological advances, vulnerability disclosure remains a controversial topic.
To add further complication, the rapid expansion of the internet of things has changed the dynamics of CVD and cybersecurity in general. Too many "things" have been designed with the same disregard for security associated with internet-connected systems of the 1980s, combined with the real potential to cause physical harm. The Mirai botnet, which set DDoS bandwidth records in 2016, used an order of magnitude fewer username and password guesses than the Morris worm did in 1988. Remote control attacks on cars and implantable medical devices have been demonstrated. The stakes involved in software security and CVD are no longer limited to theft of credit cards, account credentials, personal information, and trade or national secrets.
In pursuit of socially optimal CVD and with consideration for the new dynamics of IoT, I've become involved in two policy areas: defending beneficial security research and defining CVD process maturity. These two areas intersect when researchers choose CVD as part of their work, and that choice is not without risk to the researcher.
The security research community has valid and serious concerns about the chilling effects, real and perceived, of legal barriers and other disincentives [PDF] to performing research and disclosing results. On the other hand, there is a public policy desire to differentiate legitimate and beneficial security research from criminal activity. The confluence of these two forces leads to the following conundrum: If rules for security researchers are codified -- even with broad agreement from researchers, whose opinions differ -- any research activity that falls out of bounds could be considered unethical or even criminal.
Codified rules could reduce the grey area created by "exceeds authorized access" and the steady supply of new vulnerabilities (often discovered by accessing things in interesting and unexpected ways). But in CVD, as in most complex interactions, the exception is the rule, and CVD is full of grey. Honest mistakes, competing interests, language, time zone, and cultural barriers, disagreements, and other forms of miscommunication are commonplace. There's still too little information and too many moving parts to codify CVD rules in a fair and effective way.
Nonetheless, I see value in improving the quality of vulnerability reports and CVD as an institution, so here is some guidance for security researchers who choose the CVD option. Most of this advice is based on my experience at the CERT/CC for the last 15 years, which is bound to include some personal opinion, so caveat lector.
- Be aware. In three decades of debate, a lot has been written about vulnerability disclosure. Read up, talk to your peers. If you're subject to U.S. law and read only one reference, it should be the EFF's Coders' Rights Project Vulnerability Reporting FAQ.
- Be humble. Your vulnerability is not likely the most important out of 14,000 public disclosures this year. Please think carefully before registering a domain for your vulnerability. You might fully understand the system you're researching, or you might not, particularly if the system has significant, non-traditional-compute context, like implantable medical devices.
- Be confident. If your vulnerability is legit, then you'll be able to demonstrate it, and others will be able to reproduce it. You're also allowed to develop your reputation and brand, just not at a disproportionate cost to others.
- Be responsible. CVD is full of externalities. Admit when you're wrong, make clarifications and corrections.
- Be concise. A long, rambling report or advisory costs readers time and mental effort and is usually an indication of a weak or invalid report. If you've got a real vulnerability, demonstrate it clearly and concisely.
- PoC||GTFO. This one goes hand in hand with being concise. Actual PoC may not be necessary, but provide clear evidence of the vulnerable behavior you're reporting and steps for others to reproduce. Videos might help, but they don't cut it by themselves.
- Be clear. Both in your intentions and in your communications. You won't reach agreement with everyone, particularly about when to publish, but try to avoid surprises. Use ISO 8601 date/time formats (see the short sketch after this list). Use simple phrasing and avoid idiom.
- Be professional. Professionals balance humility, confidence, candor, and caution. Professionals don't need to brag. Professionals get results. Let your work speak for itself, don't exaggerate.
- Be empathetic. Vendors and the others you're dealing with have their own perspectives and constraints. Take some extra care with those who are new to CVD.
- Minimize harm. Public disclosure is harmful. It increases risk to affected users (in the short term at least) and costs vendors and defenders time and money. The theory behind CVD is that in the long run, public disclosure is universally better than no disclosure. I'm not generally a fan of analogies in cybersecurity, but harm reduction in public health is one I find useful (informed by, but different from, this take). If your research reveals sensitive information, stop at the earliest point of proving the vulnerability, don't share the information, and don't keep it. Use dummy accounts when possible.
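To make the ISO 8601 point concrete, here is a minimal Python sketch of unambiguous date/time handling for a report or advisory. The dates and the `planned_disclosure` value are purely illustrative, not drawn from any real coordination:

```python
from datetime import datetime, timezone

# Current time in UTC, rendered as ISO 8601, e.g. "2016-10-14T15:30:00+00:00".
# No guessing whether "03/04" means March 4 or April 3.
now_utc = datetime.now(timezone.utc).isoformat(timespec="seconds")
print(now_utc)

# A hypothetical publication date agreed with the vendor; the trailing "Z"
# marks UTC, so everyone on the thread reads the same deadline.
planned_disclosure = "2016-11-28T17:00:00Z"
print(datetime.strptime(planned_disclosure, "%Y-%m-%dT%H:%M:%SZ"))
```

The same format works just as well typed by hand in an email or advisory; the point is the notation, not the tooling.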
In a society increasingly dependent on complex systems, security research is important work, and the behavior of researchers matters. At the CERT/CC, much of our CVD effort is focused on helping others build capability and on improving institutions, hence the advice above. We do offer a public CVD service, so if you've reached an impasse, we may be able to help.
Art Manion is the Technical Manager of the Vulnerability Analysis team in the CERT Coordination Center (CERT/CC), part of the Software Engineering Institute at Carnegie Mellon University. He has studied vulnerabilities and coordinated responsible disclosure efforts since joining CERT in 2001. After gaining mild notoriety for saying "Don't use Internet Explorer" in a conference presentation, Manion now focuses on policy, advocacy, and rational tinkering approaches to software security, including standards development in ISO/IEC JTC 1 SC 27 Security techniques. Follow Art at @zmanion and CERT at @CERT_Division.