Last updated at Wed, 07 Apr 2021 18:33:03 GMT
This post is the ninth in the series, "The 12 Days of HaXmas."
2015 was a big year for cybersecurity policy and legislation; thanks to the Sony breach at the end of 2014, we kicked the new year off with a renewed focus on cybersecurity in the US Government. The White House issued three legislative proposals, held a cybersecurity summit, and signed a new Executive Order, all before the end of February. The OPM breach and a series of other high-profile cybersecurity stories continued to drive a huge amount of debate on cybersecurity across the Government sphere throughout the year. Pretty much every office and agency was building a position on this topic, and Congress introduced more than 100 bills referencing “cybersecurity” during the year.
So where has the security community netted out at the end of the year in terms of policy and legislation? Let's recap some of the biggest cybersecurity policy developments…
Cybersecurity Information Sharing
This was Congress' top priority for cybersecurity legislation in 2015 and the TL;DR is that an info sharing bill was passed right before the end of the year. The idea of an agreed legal framework for cybersecurity information sharing has merit; however, the bill has drawn a great deal of fire over privacy concerns, particularly with regard to how intelligence and law enforcement agencies will use and share information.
The final bill was the result of more than five years of debate over various legislative proposals for information sharing, including three separate bills that went through votes in 2015 (two in the House and one in the Senate). In the end, the final bill was agreed through a conference process between the House and Senate, and included in the Omnibus Appropriations Act.
Despite this being the big priority for cybersecurity legislation, a common view in the security community seems to be that this is unlikely to have much impact in the near term. This is partly because organizations with the resources and ability to share cybersecurity information, such as large financial or retail organizations, are already doing so. The liability limitation granted in the bill means they are able to continue to do this with more confidence. It's unlikely to draw new organizations into the practice as participation has traditionally centered more on whether the business has the requisite in-house expertise, and a risk profile that makes security a headline priority for the business, rather than questions of liability. For many organizations that strongly advocated for legislation, a key goal was to get the government to improve its processes for sharing information with the private sector. It remains to be seen whether the legislation will actually help with this.
Right to Research
For those that have read any of my other posts on legislation, you probably know that protecting and promoting research is the primary purpose of Rapid7's (and my) engagement in the legislative arena. This year was an interesting year in terms of the discussion around the right to research…
The DMCA Rulemaking
The Digital Millennium Copyright Act (DMCA) prohibits the circumvention of technical measures that control access to copyrighted works, and thus it has traditionally been at odds with security research. Every three years, there is a “rulemaking” process for the DMCA whereby exemptions to the prohibition can be requested and debated through a multi-phase public process. All granted exemptions reset at this point, so even if your exemption has been passed before, it needs to be re-requested every three years. The idea of this is to help the law keep up with the changing technological landscape, which is sensible, but the reality is a pretty painful, protracted process that doesn't really achieve the goal.
In 2015, several requests for security research exemptions were submitted: two general ones, one for medical devices, and one for vehicles. The Library of Congress, which oversees the rulemaking process, rolled these exemption requests together, and at the end of October it announced approval of an exemption for good-faith security research on consumer-centric devices, vehicles, and implantable medical devices. Hooray!
Don't get too excited though – the language in the announcement read somewhat as though the Library of Congress was approving the exemption against its own better judgment, and with some heavy caveats, most notably that it won't come into effect for a year (except for voting machines, which you can start researching now). More on that here.
Despite that, this is a positive step in terms of acceptance for security research, and demonstrates increased support and understanding of its value in the government sector.
The CFAA
The Computer Fraud and Abuse Act (CFAA) is the main anti-hacking law in the US, and it is all kinds of problematic. The basic issues as they relate to security research can be summarized as follows:
- It's out of date – it was first passed in 1986, and despite some “updates” since then, it feels woefully out of date. One of the clearest examples of this is that the law talks about access to “protected computers,” which back in '86 probably meant a giant machine with an actual fence and a guard watching over it. Today it means pretty much any device you use that is more technically advanced than a slide rule.
- It's ambiguous – the law hinges on the notion of “authorization” (you're either accessing something without authorization, or exceeding authorized access), yet this term is not defined anywhere, and hence there is no clear line of what is or is not permissible. Call me old fashioned, but I think people should be able to understand what is covered by a law that applies to them…
- It contains both civil and criminal causes of action. Sadly, most researchers I know have received legal threats at some point. The vast majority have come from technology providers rather than law enforcement; the civil causes of action in the CFAA provide a handy stick for technology providers to wield against researchers when they are concerned about negative consequences of a disclosure.
The CFAA is hugely controversial, with many voices (and dollars spent) on all sides of the debate, and as such efforts to update it to address these issues have not yet been successful.
In 2015 though, we saw the Administration looking to extend the law enforcement authorities and penalties of the CFAA as a means of tackling cybercrime. This focus found resonance on the Hill, resulting in the development of the International Cybercrime Prevention Act, which was then abridged and turned into an amendment that its sponsors hoped to attach to the cybersecurity information sharing legislation. Ultimately, the amendment was not included with the bill that went to the vote, which was the right outcome in my opinion.
The interesting and positive thing about the process though was the diligence of staff in seeking out and listening to feedback from the security research community. The language was revised several times to address security research concerns. To those who feel that the government doesn't care about security research and doesn't listen, I want to highlight that the care and consideration shown by the White House, Department of Justice, and various Congressional offices through discussions around CFAA reform this year suggests that is not universally the case. Some people definitely get it, and they are prepared to invest the time to listen to our concerns and get the approach right.
It's Not All Positive News
Despite my comments above, it certainly isn't all plain sailing, and there are those in the Government who fear that researchers may do more harm than good. We saw this particularly clearly with a vehicle safety bill proposal in the second half of the year, which would make car research illegal. Unfortunately, the momentum for this was fed by fears over the way certain high-profile car research was handled this year.
The good news is that there were plenty of voices on the other side pointing out the value of research as the bill was debated in two Congressional hearings. As yet, this bill has not been formally introduced, and it's unlikely to be without a serious rewrite. Still, it behooves the security research community to consider how its actions may be viewed by those on the outside – are we really showing our good intentions in the best light? I have increasingly heard questions arise in the government about regulating research or licensing researchers. If we want to be able to address that kind of thinking in a constructive way to reach the best outcome, we have to demonstrate an ability to engage productively and responsibly.
Vulnerability Disclosure
Following high-profile vulnerability disclosures in 2014 (namely, Heartbleed and Shellshock), and much talk around bug bounties, challenges with multi-party coordinated disclosure, and best practices for so-called “safety industries” – where problems with technology can adversely impact human health and safety, e.g. medical devices, transportation, power grids, etc. – the topic of vulnerability disclosure was once again on the agenda. This time, it was the Obama Administration taking an interest, led by the National Telecommunications and Information Administration (NTIA, part of the Department of Commerce), which convened a public multi-stakeholder process to tackle the thorny and already much-debated topic of vulnerability disclosure. The project is still in relatively early stages, and could probably do with a few more researcher voices, so get involved! One of the inspiring things for me is the number of vendors that are new to thinking about these things and are participating. Hopefully we will see them adopting best practices and leading the way for others in their industries.
At this stage, participants have split into four groups to tackle multiple challenges: awareness and adoption of best practices; multi-party coordinated disclosure; best practices for safety industries; and economic incentives. I co-lead the awareness and adoption group with the amazing Amanda Craig from Microsoft, and we're hopeful that the group will come up with some practical measures to tackle this challenge. If you're interested in more information on this issue specifically, you can email us.
Export Controls
Thanks to the Wassenaar Arrangement, export controls became a hot topic in the security industry in 2015, probably for the first time since the Encryption Wars (Part I).
The Wassenaar Arrangement is an export control agreement amongst 41 nation states with a particular focus on national security issues – hence it pertains to military and dual-use technologies. In 2013, the members decided the control list should include both intrusion and surveillance technologies (as two separate categories). From what I've seen, the surveillance category seems largely uncontested; however, the intrusion category has caused a great deal of concern across the security and other business communities.
This is a multi-layered concern – the core language that all 41 states agreed to raises concerns, and the US proposed rule for implementing the control raises additional concerns. The good news is that the Bureau of Industry and Security (BIS) – the folks at the Department of Commerce that implement the control – and various other parts of the Administration, have been highly engaged with the security and business communities on the challenges, and have committed to redrafting the proposed rule, implementing a number of exemptions to make it livable in the US, and opening a second public comment period in the new year. All of which is actually kind of unheard of, and is a strong indication of their desire to get this right. Thank you to them!
Unfortunately, the bad news is that this doesn't tackle the underlying issues in the core language. The problem here is that the definition of what's covered is overly broad and limits sharing information on exploitation. This has serious implications for security researchers, who often build on each other's work, collaborate to reach better outcomes, and help each other learn and grow (which is also critical given the skills shortage we face in the security industry). If researchers around the world are not able to share cybersecurity information freely, we all become poorer and more vulnerable to attack.
There is additional bad news: more than 30 of the member states have already implemented the rule and seem to have fewer concerns over it, and this means the State Department, which represents the US at the Wassenaar discussions, is not enthusiastic about revisiting the topic and requesting that the language be edited or overturned. The Wassenaar Arrangement is based on all members arriving at consensus – all must vote and agree when a new category is added – meaning the US agreed to the language and agreed to implement the rule. From State's point of view, we missed our window to raise objections of this nature and it's now our responsibility to find a way to live with the rule. Dissenters ask why the security industry wasn't consulted BEFORE the category was added.
The bottom line is that while the US Government can come up with enough exemptions in its implementation to make the rule toothless and not worth the paper it's written on, US companies will still be exposed to greater risk if the core language is not addressed.
As I mentioned, we've seen excellent engagement from the Administration on this issue and I'm hopeful we'll find a solution through collaboration. Recently, we've also seen Congress start to pay close attention to this issue, which is also likely to help move the discussion forward:
- In December, 125 members of the House of Representatives signed a letter addressed to the White House asking them to step into the discussion around the intrusion category. That's a lot of signatories and will hopefully encourage the White House to get involved in an official capacity. It also indicates that Wassenaar is potentially going to be a hot topic for Congress in 2016.
- Reflecting that, the House Committee on Homeland Security, and the House Committee on Oversight and Government Reform are joining forces for a joint hearing on this topic in January.
The challenges with the intrusion technology category of the Wassenaar Arrangement highlight a hugely complex problem: how do we reap the benefits of a global economy, while clinging to regionalized nation state approaches to governing that economy? How do you apply nation state laws to a borderless domain like the internet? There are no easy answers to these questions, and we'll see the challenges continue to arise in many areas of legislation and policy this year.
Breach Notification
In the US, there was talk of a federal law to set one standard for breach notification. The US currently has 47 distinct state laws setting requirements for breach notification. For any business operating in multiple states, this creates confusion and administrative overhead. The goal for those that want a federal breach notification law is to simplify this by having one standard that applies across the entire country. In principle this sounds very sensible and reasonable. The problem is that the federal legislative process does not move quickly, and there is concern that by making this a federal law, it will not be able to keep up with changes in the security or information landscape, and thus consumers will end up worse off than they are today. To address this concern, consumer protection advocates urge that the federal law not pre-empt state laws that set a higher standard for notification. However, this does not alleviate the core problem any breach notification bill is trying to get at – it just adds yet another layer of confusion for businesses. So I suspect it's unlikely we'll see a federal breach notification bill pass any time soon, but I wouldn't be surprised if we see this topic come up again in cybersecurity legislative proposals this year.
Across the pond, there was an interesting development on this topic at the end of the year – the European Union issued the Network and Information Security Directive, which, amongst other things, requires that operators of critical national infrastructure report breaches (interestingly, this is somewhat at odds with the US approach, where there is a law protecting critical infrastructure information from public disclosure). The EU directive is not a law – member states will now have to develop their own in-country laws to put it into practice. This will take some time, so we won't see a change straight away. My hope is that many of the member states will not limit their breach notification requirements to only organizations operating critical infrastructure – consumers should be informed if they are put at risk, regardless of the industry. Over time, this could mark a significant shift in the dialogue and awareness of security issues in Europe; today there seems to be a feeling that European companies are not being targeted as much as US ones, which seems hard to believe. It seems likely to me that part of the reason we don't hear about it so much is that the breach notification requirement is not there, and many victims of attacks keep it confidential.
Cyber Hygiene
This was a term I heard a great deal in government circles this year as policy makers tried to come up with ways of encouraging organizations to put basic security best practices in place. The kinds of things on the list here would be patching, using encryption, and so on. It's almost impossible to legislate for this in any meaningful way, partly because the requirements would likely already be out of date by the time a bill passed, and partly because you can't take a one-size-fits-all approach to security. It's more productive for Governments to take a collaborative, educational approach and provide a baseline framework that can be adapted to an organization's needs. This is the approach the US takes with the NIST Framework (which is due for an update in 2016), and similarly CESG in the UK provides excellent non-mandated guidance.
There was some discussion around incentivizing adoption of security practices – we see this applied with liability limitation in the information sharing law. Similarly, there was an attempt at using this carrot to incentivize adoption of security technologies: the Department of Homeland Security (DHS) awarded FireEye certification under the SAFETY Act. This law is designed to encourage the use of anti-terrorism technologies by limiting liability in the event of a terrorist attack. So let's say you run a football stadium and you deploy body scanners for everyone coming onto the grounds, but someone still manages to smuggle in and set off an incendiary device; you could be protected from liability because you were using the scanners and taking reasonable measures to stop an attack. In order for organizations to receive the liability limitation, the technology they deploy must be certified by DHS.
Now when you're talking about terrorist attacks, you're talking about some very extreme circumstances, with very extreme outcomes, and something that statistically is rare (tragically not as rare as it should be). By contrast, cybercrime is extremely common, and can range vastly in its impact, so this is basically like comparing apples to flamethrowers. On top of that, using a specific cybersecurity technology may be less effective than an approach that layers a number of security practices together, e.g. setting appropriate internal policies, educating employees, patching, air gapping etc. Yet, if an organization has liability limitation because it is deploying a security technology, it may feel these other measures are unnecessarily costly and resource hungry. So there is a pretty reasonable concern that applying the SAFETY Act to cybersecurity may be counter-productive and actually encourage organizations to take security less seriously than they might without liability limitation.
Nonetheless, there was a suggestion that the SAFETY Act be amended to cover cybersecurity. Following a Congressional hearing on this, the topic has not reared its head again, but it may reappear in 2016.
Unless you've been living under a rock for the past several months (which might not be a terrible choice all things considered), you'll already be aware of the intense debate raging around mandating backdoors for encryption. I won't rehash it here, but have included it because it's likely to be The Big Topic for 2016. I doubt we'll see any real resolution, but expect to see much debate on this, both in the US and internationally.