Last week, President Obama proposed a number of bills to protect consumers and the economy from the growing threat of cybercrime and cyberattacks. Unfortunately, in their current form, it's not clear that they will make us more secure. In fact, they may have the potential to make us more INsecure due to the chilling effect on security research. To explain why, I've run through each proposed bill in turn below, with my usual disclaimer that I'm not a lawyer.
Before we get into the details, I want to start by addressing the security community's anger and worry over the proposal, particularly the Law Enforcement Provisions piece. The community is right to be concerned about the proposals in their current form, but there is some good news here, and an important opportunity for both the Government and the security community.
Firstly, it's a positive sign that both the President and Congress are prioritizing cybersecurity and we're seeing scrutiny and discussion of the issues. There seems to be alignment between the Government and the security community on a few things too: for example, I think we agree that there needs to be more collaboration and transparency, and a stronger focus on preventing and detecting cyberattacks. Creating consistency for data breach notification is also a sensible measure.
Lastly, I'm excited to see the Computer Fraud and Abuse Act (CFAA) being opened for updating. Yes, the current proposal raises a number of serious concerns, but so does the version that is actively being prosecuted and litigated today. The security research community has wanted to see updates to the CFAA for a long time, and this is our opportunity to engage in that process and influence legislators to make changes that really WILL make us more secure.
The Critical Role of Research
One thing I want to applaud in the President's position is the focus on prevention. Specifically, the Administration is advocating sharing information on threat indicators to create best practices and help organizations mount a defense. This is certainly important. Understanding attackers and their methods is something we talk about a great deal at Rapid7, and we definitely agree it's a critical part of a company's security program.
If we want to prevent attacks though, we need to do and know more. Opportunities for attackers abound across the internet within the technology itself, making effective defense an almost impossible task. Addressing this requires a shift in focus so we are building security into the very fabric of our systems and processes. We need flaws, misconfigurations, and vulnerabilities to be identified, analyzed, discussed, and addressed as quickly as possible. This is the only way we can meaningfully reduce the opportunity for attackers and increase cybersecurity.
Yet, at present, we do not encourage or support researchers and enable them to be effective. Rather, legislation like the CFAA creates confusion and fear, discouraging research efforts. Too often we see companies use this and other legislation as a stick to threaten (or beat) researchers with.
(One thing to note here is that when I use the term “researcher,” I am referring variously to security professionals, enthusiasts, tinkerers, and even Joe Internet User, who accidentally stumbles on a vulnerability or misconfiguration. It's not easy to define what a security researcher is, which is one of the challenges with building a legislative carve-out for them.)
The defensive position described above is generally driven by conscientious business concerns for stability, revenue, reputation, and corporate liability. Though understandable, in the long term this approach only increases the risk exposure for the business and its customers. We need to change this status quo and create more collaboration between security experts and businesses if we want to prevent and effectively respond to cyberattacks.
Updating the Computer Fraud and Abuse Act
There's a lot to discuss in the CFAA proposal, much of which raises concerns, so I'm just going to go through it all in turn as it appears in the proposal.
SEC 101 - Prosecuting Organized Crime Groups That Utilize Cyber Attacks
This is actually an amendment to the Racketeer Influenced and Corrupt Organizations Act (RICO), which basically allows the leaders of a criminal organization to be tried for the crimes undertaken by others within that enterprise. The proposed amendment adds violations of the CFAA (18 U.S.C. § 1030) to the list of acts that can be subject to RICO. The concern with this is that the definition of “enterprise” is incredibly broad:
“Enterprise” includes any individual, partnership, corporation, association, or other legal entity, and any union or group of individuals associated in fact although not a legal entity;
The security industry is built on interaction and information sharing in online communities. We help each other tackle difficult technical challenges and make sense of data on a regular basis. If this work can be interpreted as an act of conspiracy, it will undermine our ability to effectively collaborate and communicate.
For a more specific example, let's consider Metasploit, an open source penetration testing framework designed to enable organizations to test their security against attacks they may experience in the wild. Rapid7 runs Metasploit, so if a Metasploit module is used in a crime, would that make the leadership of Rapid7 a party to that crime? Would other Metasploit contributors also be implicated? This concern is just as valid for any other open source security tool.
SEC 103 – Modernizing the Computer Fraud and Abuse Act
(a)(2)(B) – In response to requests from the legal and academic communities, and circuit splits over prosecutions, this amendment aims to clarify that a violation of Terms of Service IS a crime under the CFAA if the actor meets certain qualifying conditions, such as obtaining information valued at more than $5,000 or accessing a government system.
This is a big concern. Firstly, there is a general sense of disquiet over the idea of businesses being able to set and amend law as easily as they set and amend Terms of Service.
From a security research point of view, though, there is more to why this is concerning, and it is highlighted in the definitions section: access becomes criminal whenever the actor knows the organization has forbidden it. This essentially means any research activity a business does not like becomes illegal, provided the researcher knows it has been banned. While that does create a burden on the organization to state the prohibition (in its Terms of Service), it effectively means the end of internet-wide scanning efforts, which can be hugely valuable in identifying threats and understanding the reach and impact of issues.
The qualifiers are supposed to address this point, but do little to help. An organization can easily claim that the value of information uncovered in a research effort was more than $5,000. And government systems will be, and should be, included in any internet-wide research effort.
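To make concrete what would be lost, here is a minimal sketch of the kind of benign measurement an internet-wide scan performs: connecting to a port and recording the banner the service volunteers. The target addresses, port, and timeout are illustrative assumptions; a real survey would iterate over large address ranges.

```python
# Minimal banner-grab sketch: the kind of benign measurement an
# internet-wide scan performs. Targets here are placeholders; a real
# survey would iterate over large address ranges.
import socket
from typing import Optional

TARGETS = ["192.0.2.10", "192.0.2.11"]  # TEST-NET addresses, illustrative only
PORT = 22          # e.g., SSH, which announces a version banner on connect
TIMEOUT = 3.0      # seconds; keep probes short and non-intrusive

def grab_banner(host: str, port: int) -> Optional[str]:
    """Connect, read whatever the service announces, and disconnect."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT) as sock:
            sock.settimeout(TIMEOUT)
            banner = sock.recv(256)
        return banner.decode("utf-8", errors="replace").strip()
    except OSError:
        return None  # host down, port closed, or filtered

for target in TARGETS:
    print(target, grab_banner(target, PORT))
```

Researchers run probes like this at internet scale to measure, for example, how widely a vulnerable software version is deployed. Under the proposed language, a clause in any organization's Terms of Service could turn each such connection into a potential CFAA violation.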
(a)(6) – This is another area of serious concern: the provision targets anyone who willfully traffics in passwords or any other means of access, knowing or having reason to know that a protected computer would be accessed or damaged without authorization. There are four key parts to this:
1) “Willfully” – Interestingly, this word seems to be the Administration's attempt to introduce intent as a means of drawing a line between criminal acts and those that might appear the same but are actually bona fide. In other words, this piece might be key to separating research from criminal activity.
The problem lies in the definition of “willfully,” and the concept of “prosecutorial discretion”. The amendment defines “willfully” as follows:
"Intentionally to undertake an act that the person knows to be wrongful”
Unfortunately, this definition begs for another definition: what does “wrongful” mean? A company embarrassed by a research disclosure could argue that the researcher intentionally caused injury to its reputation and customer confidence, and that the disclosure was therefore wrongful.
Regarding “prosecutorial discretion” – there are a lot of prosecutors, and they vary greatly in their level of technical and security understanding, and their reasons for pursuing cases. It's the anomaly cases – the occasional extremes of questionable prosecutions – that drive the most press coverage, giving a somewhat distorted view of the idea of prosecutorial discretion in the security community. As a result, there is little trust between prosecutors and security researchers, and it only takes the POSSIBILITY of prosecution for research to be chilled. In addition, the CFAA is both criminal and civil legislation, and we have seen motivated organizations take a more aggressive approach with their application of this law.
2) “Traffics” – this word is incredibly broadly defined. For example, in his blog post on the CFAA proposal, security researcher Rob Graham imagines a scenario in which an individual could be prosecuted for retweeting a tweet containing a link to a data dump. Coupled with the lack of clarity around having an intent to do wrong, there is a concern that the security community will be penalized purely for being inherently snarky, highly active on the internet, and interested in security information.
3) “Any other means of access” – In some cases this could refer to the public disclosure of a vulnerability, as it could be argued that disclosure provides a “means of access.” If a researcher provides a proof of concept exploit to highlight how a vulnerability works, this would very likely be considered a means of access, and likewise with exploit code provided for penetration testing purposes (see the sketch after this list). This language makes public research disclosure effectively illegal.
4) “Knowing or having reason to know that a protected computer would be accessed or damaged without authorization” – You can make an argument that anyone in the security community always has this knowledge. We know that there are cybercriminals and that they are attacking systems across the internet. If we disclose research findings, we know that some criminals may try to use them to their advantage, but we believe many are ALREADY using the vulnerability to their advantage without security professionals having a chance to defend against them. This is why disclosure is so important – it enables (sometimes forces) issues to be addressed so organizations can protect themselves and their customers.
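As referenced in point 3 above, here is what a disclosure-style proof of concept often looks like in practice. This is a minimal, entirely hypothetical sketch: the target host, the Server header parsing, and the “fixed in 2.4.10” threshold are all invented for illustration, and the probe only reads a response the server already gives to any visitor.

```python
# Hypothetical proof-of-concept probe for a made-up advisory: it checks
# whether a server reports a version known to be vulnerable. It reads a
# public response and does not exploit anything. The host and version
# threshold are illustrative assumptions.
from urllib.request import urlopen

TARGET = "http://example.com/"        # placeholder host, illustrative only
VULNERABLE_BEFORE = (2, 4, 10)        # made-up "fixed in 2.4.10" threshold

def parse_version(server_header: str):
    """Pull a dotted version like 'ExampleHTTPd/2.4.7' into an int tuple."""
    try:
        return tuple(int(p) for p in server_header.split("/")[1].split("."))
    except (IndexError, ValueError):
        return None  # header missing or not in the expected form

with urlopen(TARGET, timeout=5) as resp:
    version = parse_version(resp.headers.get("Server", ""))

if version and version < VULNERABLE_BEFORE:
    print(f"Server reports version {version}: likely affected, patch advised.")
else:
    print("Version not reported, or not in the affected range.")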
(c) – Penalties. The amendment increases the penalties for CFAA violations. For researchers concerned about the issues above, the harsher penalties increase the risk and further discourage them from undertaking research or disclosing findings.
So all in all, there are some serious concerns with the CFAA proposal, which can be summarized like this: it could chill security research to such a degree that it would seriously undermine US businesses and the internet as a whole. That sounds melodramatic, and perhaps it is, but I've heard from a great many researchers that the proposal would stop them from conducting research altogether. The risk level would simply be too great.
One challenge with finding the right approach for the CFAA is that research often looks much like nefarious activity and it's hard to create law that allows for one and criminalizes the other. This is a challenge we MUST address.
The Personal Data Notification and Protection Act
We see a similar problem emerge with the Personal Data Notification and Protection Act, which aims to create consistency and clarity around breach notification requirements across the entire country (currently there are different laws on this in 47 individual states). The challenge here again lies in the fact that research may look like a breach.
If a researcher accesses sensitive personally identifiable information (SPII) in the course of finding and investigating a vulnerability, then once the issue is disclosed to the business in question, the company will need to go through the customer notification process. To protect themselves from this, organizations could discourage researchers from testing their systems by taking a hard-line stance that they will be prosecuted.
Apart from this concern, I'm generally supportive of creating consistency for breach notification requirements. This has been needed for a while and it should help both businesses and consumers to better understand their rights and what is expected of them in the case of a security incident.
The specifics of the bill look reasonable: 30 days for notification from discovery of the incident is not optimal, but is OK. The terms laid out for exemptions and the parameters on the kind of data that represents SPII seem fair. I do agree, though, with the EFF's assertion that it would be better if this law were: “A ‘floor,’ not a ‘ceiling,’ allowing states like California to be more privacy protective and not depriving state attorneys general from being able to take meaningful action.”
Information Sharing
The emphasis on transparency in the breach notification piece also runs through the Information Sharing proposal, the latest in a long line of bills focused on encouraging the private sector to share security information with the Government. This proposal specifically asks for “cyber threat indicator” information to be shared via the National Cybersecurity and Communications Integration Center (NCCIC), and offers limited liability as an inducement.
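For readers unfamiliar with the term, a “cyber threat indicator” is typically a small, structured observation about attacker activity. Below is a simplified, hypothetical record sketched in Python; every field name and value is an illustrative assumption, not any official NCCIC schema.

```python
# Simplified, hypothetical threat-indicator record of the kind the bill
# contemplates sharing. Field names and values are illustrative only;
# they do not follow any official NCCIC schema.
import json
from datetime import datetime, timezone

indicator = {
    "type": "ip-address",                    # what kind of observable this is
    "value": "198.51.100.7",                 # TEST-NET address, illustrative
    "observed_at": datetime.now(timezone.utc).isoformat(),
    "context": "repeated SSH brute-force attempts against perimeter hosts",
    "confidence": "medium",                  # how sure the reporter is
    "suggested_action": "block at firewall and watch for related ranges",
}

# Serialized form a sharing program might exchange between organizations.
print(json.dumps(indicator, indent=2))
```

Records like this can help other organizations spot the same attacker faster; the open question is whether routing them through a single Government hub is the most effective way to get them into defenders' hands.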
This feels like the least impactful of the three bills to me. It's a voluntary program and offers no hugely compelling incentive for participation, other than liability limitation. This supposes that the only barrier to information sharing currently is fear of liability, and I'm not convinced that's accurate. For example, it's often the case that organizations don't know what's going on in their environment from a security point of view, and don't know what to look for or where to start. A shortage of security skills exacerbates this problem.
In addition, while I do think sharing this kind of information is potentially very valuable, I'm not convinced that doing so through the Government is the most efficient or effective way to realize this value, and I definitely don't think it promotes collaboration between security and business leaders. In fact, I'm concerned that it could create a privileged class of information that is tightly controlled, rather than open and accessible to all.
One thing this proposal does raise for me though is a question around liability. At present, in the relationship between a vendor and researcher, all the liability rests on the shoulders of the researcher (and this weight increases under the new CFAA proposal). They carry all the risk and one false move or poorly worded communication and they can be facing a criminal or civil action. The proposed information sharing bill doesn't address that, but it got me thinking… if the Government is prepared to offer limited liability to organizations reporting cybersecurity information, perhaps it could do something similar for researchers disclosing vulnerabilities….
Will the President's Cybersecurity Proposal Make Us More Secure?
If you've stuck with me this far, thank you and well done. As I said at the start of this piece, it's good to see cybersecurity being prioritized and discussed by the Government. I hope something good will come of it, and certainly I think data breach notification and information sharing are important positive steps if handled correctly.
But as I've stated numerous times through this piece, I believe our best and only real chance for addressing the security challenge is identifying and fixing our vulnerabilities. The bottom line is that we can't do that if researchers don't want to conduct or disclose research for fear of ending up in prison or losing their home.
Again, it's important to remember that these are only initial proposals, and we're now entering a period of consultation during which various House and Senate committees will look at the goals and evaluate the language to see what will work and what won't.
It's critical that the security community participates in this process in a constructive way. We need to remember that we are more immersed in security than most, and share our expertise to ensure the right approach is taken. We share a common goal with the Government: improving cybersecurity. We can only do that by working together.
Let me know what you think and how you're getting involved.