
The UK Home Office recently ran a Call for Information (CFI) to investigate the Computer Misuse Act 1990 (CMA). The CMA is the UK’s anti-hacking law, and as Rapid7 is active in the UK and highly engaged in public policy efforts to advance security, we provided feedback on the issues we see with the legislation.

We have some concerns with the CMA in its current form, as well as recommendations for how to address them. Additionally, because Rapid7 has addressed similar issues relating to U.S. laws — particularly as it relates to the U.S. equivalent of the CMA, the Computer Fraud and Abuse Act (CFAA) — for each section below, we’ve included a short comparison with U.S. law for those who are interested.

Restrictions on security testing tools and proof-of-concept code

One of the most concerning issues with the CMA is that it imperils dual-use open-source security testing tools and the sharing of proof-of-concept code.

Section 3A(2) of the CMA states:

(2) A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence under section 1, 3 or 3ZA.

Security professionals rely on open source and other widely available security testing tools that let them emulate the activity of attackers, and proof-of-concept exploit code helps organizations test whether their assets are vulnerable. These highly valued parts of robust security testing enable organizations to build defenses and understand the impacts of attacks.

Making these tools open source helps keep them up to date with the latest attacker methodologies and ensures a broad range of organizations (not just well-resourced ones) have access to tools to defend themselves. However, because they’re open source and widely available, these defensive tools can also be abused by malicious actors.

The same issue applies to proof-of-concept exploit code. While the code is developed and shared with defensive intent, there’s always a risk that malicious actors will get hold of it. But this makes the wide availability of testing tools all the more important, so organizations can identify and mitigate their exposure.
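To make the dual-use point concrete, below is a minimal, hypothetical proof-of-concept check of the kind defenders routinely run. It exploits nothing: it grabs a service banner and compares the advertised version against a known-vulnerable one. The host, port, and version string are invented placeholders, not references to any real service or advisory.

```python
# Hypothetical proof-of-concept vulnerability check (illustrative sketch only).
# The target, port, and "vulnerable" banner are invented placeholders; they do
# not correspond to any real service or advisory.
import socket

TARGET_HOST = "ftp.example.com"           # placeholder: a host you are authorized to test
TARGET_PORT = 21                          # placeholder: a service that announces a banner
VULNERABLE_BANNER = b"ExampleFTPd 1.3.5"  # placeholder: a version with a known flaw

def banner_is_vulnerable(host: str, port: int) -> bool:
    """Connect, read the greeting banner, and compare it to the vulnerable version."""
    with socket.create_connection((host, port), timeout=5) as sock:
        banner = sock.recv(1024)  # many legacy services announce themselves on connect
    return VULNERABLE_BANNER in banner

if __name__ == "__main__":
    if banner_is_vulnerable(TARGET_HOST, TARGET_PORT):
        print("Banner matches a known-vulnerable version: patch or mitigate.")
    else:
        print("Banner does not match the known-vulnerable version.")
```

The same dozen lines serve an attacker compiling a target list just as well as a defender verifying a patch, which is why a supply-side offense keyed to what an article is “likely to be used” for sweeps both in.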

Rapid7’s recommendation

Interestingly, this is not an unknown issue — the Crown Prosecution Service (CPS) acknowledges it on their website. We’ve drawn from their guidance, as well as their Fraud Act guidelines, in drafting our recommended response, proposing that the Home Office consider modifying section 3A(2) of the CMA to exempt “articles” that are:

  • Capable of being used for legitimate purposes; and
  • Intended by the creator or supplier of the article to be used for a legitimate purpose; and
  • Widely available; unless
  • The article is deliberately developed or supplied for the sole purpose of committing a CMA offense.

If you’re concerned about creating a loophole in the law that can be exploited by malicious actors, rest assured the CMA would still retain 3A(1) as a means to prosecute those who supply articles with intent to commit CMA offenses.

Comparison with the CFAA

This issue doesn't arise in the CFAA; however, the U.S. is subject to various export control rules that also restrict the sharing of dual-use security testing tools and proof-of-concept code.

Chilling security research

This is a topic Rapid7 has commented on many times in reference to the CFAA and the Digital Millennium Copyright Act, which is the U.S. equivalent of the UK’s Copyright, Designs and Patents Act 1988.

Independent security research aims to reveal vulnerabilities in technical systems so organizations can deploy better defenses and mitigations. This offers a significant benefit to society, but the CMA makes no provision for legitimate, good-faith testing. Section 1(1) requires intent to access the computer without authorization, but it does not require that the motive be malicious, only that the actor intended to gain access without authorization. The CMA states:

(1) A person is guilty of an offence if—

(a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer, or to enable any such access to be secured;

(b) the access he intends to secure, or to enable to be secured, is unauthorised; and

(c) he knows at the time when he causes the computer to perform the function that that is the case.

Many types of independent security research, including port scanning and vulnerability investigations, could meet that description. As frequently noted in the context of the CFAA, it’s often not clear what qualifies as authorization to access assets connected to the internet, and independent security researchers often aren’t given explicit authorization to access a system.
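As a sketch of how mundane that activity can be, consider a bare-bones TCP connect scan (the hostname and port range below are placeholders). Under a literal reading of Section 1, every connection attempt causes a remote computer to perform a function on a machine the researcher was never explicitly authorized to touch:

```python
# Minimal TCP connect scan: an illustrative sketch, not a hardened tool.
# The host and port range are placeholders; scan only assets you have
# permission to test, since authorization is exactly what is at issue here.
import socket

HOST = "scanme.example.com"  # placeholder target
PORTS = range(20, 1025)      # well-known service ports

def scan(host: str, ports: range) -> list[int]:
    """Return the ports that accepted a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # A successful connect means the remote machine performed a
            # function at our request: the behavior Section 1(1)(a) describes.
            with socket.create_connection((host, port), timeout=0.5):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out: nothing listening for us
    return open_ports

if __name__ == "__main__":
    print(f"Open ports on {HOST}: {scan(HOST, PORTS)}")
```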

It’s worth noting that neither the National Crime Agency (NCA) nor the CPS seems to be recklessly pursuing frivolous investigations or prosecutions of good-faith security research. Nonetheless, the current legal language does expose researchers to legal risk and uncertainty, and it would be good to see some clarity on the topic.

Rapid7’s recommendation

Creating effective legal protections for good-faith, legitimate security research is challenging. We must avoid inadvertently creating a backdoor in the law that provides a defense for malicious actors or permits activities that can create unintended harm. As legislators consider options on this, we strongly recommend considering the following questions:

  • How do you determine whether research is legitimate and justified? Some considerations include whether sensitive information was accessed, and if so, how much (is there a threshold for what might be acceptable?). Was any damage or disruption caused by the action? Did the researcher demand financial compensation from the technology manufacturer or operator?

For example, in our work on the CFAA, Rapid7 has proposed the following legal language to indicate what is understood by “good-faith security research.”

The term "good faith security research" means good faith testing or investigation to detect one or more security flaws or vulnerabilities in software, hardware, or firmware of a protected computer for the purpose of promoting the security or safety of the software, hardware, or firmware.

(A) The person carrying out such activity shall

(i) carry out such activity in a manner reasonably designed to minimize and avoid unnecessary damage or loss to property or persons;

(ii) take reasonable steps, with regard to any information obtained without authorization, to minimize the information the person obtains, retains, and discloses to only that information which the person reasonably believes is directly necessary to test, investigate, or mitigate a security flaw or vulnerability;

(iii) take reasonable steps to disclose any security vulnerability derived from such activity to the owner of the protected computer or the Cybersecurity and Infrastructure Security Agency prior to disclosure to any other party;

(iv) wait a reasonable amount of time before publicly disclosing any security flaw or vulnerability derived from such activity, taking into consideration the following:

(I) the severity of the vulnerability,

(II) the difficulty of mitigating the vulnerability,

(III) industry best practices, and

(IV) the willingness and ability of the owner of the protected computer to mitigate the vulnerability;

(v) not publicly disclose information obtained without authorization that is

(I) a trade secret without the permission of the owner of the trade secret; or

(II) the personally identifiable information of another individual, without the permission of that individual; and

(vi) not use a nonpublic security flaw or vulnerability derived from such activity for any primarily commercial purpose prior to disclosing the flaw or vulnerability to the owner of the protected computer or the [government vulnerability coordination body].

(B) For purposes of subsection (A), it is not a public disclosure to disclose a vulnerability or other information derived from good faith security research to the [government vulnerability coordination body].
  • What happens if a researcher does not find anything to report? Some proposals for reforming the CMA have suggested requiring coordinated disclosure as a predicate for a research carve-out. This only works if the researcher actually finds something worth reporting. What happens if they do not? Is the research then not defensible?
  • Are we balancing the rights and safety of others with the need for security? For example, easing restrictions for threat intel investigators and security researchers may create a misalignment with existing privacy legislation. This may require balancing controls to protect the rights and safety of others.

The line between legitimate research and hack back

In discussions on CMA reform, we often hear the chilling effect on security research lumped in with arguments for expanding authorities for threat intelligence gathering and operations. The latter sounds alarmingly like a request for private-sector hack back (despite assertions otherwise). We believe it is critical that policymakers understand the distinction between acknowledging the importance of good-faith security research on the one hand and authorizing private-sector hack back on the other.

We understand private-sector hack back to mean an organization taking intrusive action against a cyber-attacker on technical assets or systems not owned or leased by the entity taking action or their client. While threat intel campaigners may disclaim hack back, in asking for authorization to take intrusive action on third-party systems — whether to better understand attacks, disrupt them, or even recapture lost data — they're certainly satisfying the description of hack back and raising a number of concerns.

Rapid7 is strongly opposed to private-sector hack back. While we view both independent, good-faith security research and threat intelligence investigations as critical for security, we believe the two categories of activity need separate and distinct legal restrictions.

Good-faith security research is typically performed independently of manufacturers and operators in order to identify flaws or exposures in systems that provide opportunities for attackers. The goal is to remediate or mitigate these issues so that we reduce opportunities for attackers and decrease the risk for technology users. These activities often need to be undertaken without authorization to avoid blowback from manufacturers or operators that prioritize their reputation or profit above the security of their customers.

This activity is about protecting the safety and privacy of the many, and while researchers may take actions without authorization, they do so only on the technology of those ultimately responsible for both creating and mitigating the exposure. If the technology provider never becomes aware of the issue, it and its users remain exposed to risk.

In contrast, threat intel activities that involve interrogating or interacting with third-party assets prioritize the interests of a specific entity over those of other potential victims, whose compromised assets may have been leveraged in the attack. While threat intelligence can be very valuable in helping us understand how attackers behave — which can help others identify or prepare for attacks — data gathering and operations should be limited to assessing threats to assets that are owned or operated by the authorizing entity, or to non-invasive activities such as port scanning. More invasive activities can result in unintended consequences, including escalation of aggression, disruption or destruction for innocent third parties, and a quagmire of legal liability.

Because cyber attacks are criminal activity, if more investigation is needed, it should be undertaken with appropriate law enforcement involvement and oversight. We see no practical way to provide appropriate oversight or standards for the private sector to engage in this kind of activity.

Comparison to the CFAA

This issue also arises in the CFAA. In fact, it’s exacerbated by the CFAA enabling private entities to pursue civil causes of action, which means technology manufacturers and operators can seek to apply the CFAA in private cases against researchers. This is often done to protect corporate reputations, likely at the expense of technology users who are being exposed to risk. These private civil actions chill security research and account for the vast majority of CFAA cases and lawsuit threats focused on research. One of Rapid7’s recommendations to the UK Home Office was that the CMA should not be updated to include civil liability.

Washington State has helped protect good-faith security research in its Cybercrime Act (Chapter 9A.90 RCW), which both addresses the issue of defining authorization and exempts white-hat security research.

It’s also worth noting that the U.S. has an exemption for security research in Section 1201 of the Digital Millennium Copyright Act (DMCA). It would be good to see the UK government consider something similar for the Copyright, Designs and Patents Act 1988.

Clarifying authorization

At its core, the CMA effectively operates as a law prohibiting digital trespass and hinges on the concept of authorization. Four of the five classes of offenses laid out in the CMA involve “unauthorized” activities:

1. Unauthorised access to computer material.

2. Unauthorised access with intent to commit or facilitate commission of further offences.

3. Unauthorised acts with intent to impair, or with recklessness as to impairing, operation of computer, etc.

3ZA. Unauthorised acts causing, or creating risk of, serious damage.

Unfortunately, the CMA does not define authorization (or the lack thereof), nor detail what authorization should look like. As a result, it can be hard to know with certainty where the legal line is truly being drawn in the context of the internet, where many users don’t read or understand lengthy terms of service, and data and services are often publicly accessible for a wide variety of novel uses.

Many people take the view that if something is accessible in public spaces on the internet, authorization to access it is inherently granted. In this view, the responsibility lies with the owner or operator to ensure that if they don’t want to grant access to something, they don’t make it publicly available.

That being the case, the question becomes how system owners and operators can indicate a lack of authorization for accessing systems or information in a way that scales, while still enabling broad access and innovative use of online services. In the physical world, we have an expectation that both public and private spaces exist. If a space is private and the owners don’t want others to access it, they can indicate this through signage or physical barriers (walls, fences, or gates). Currently, there is no accepted, standard way for owners and operators to post a “No Trespassing” sign for publicly accessible data or systems on the internet that truly serves the intended purpose.
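To illustrate the gap, imagine a machine-readable equivalent of signage, in the spirit of the long-standing robots.txt convention for web crawlers. The sketch below invents a hypothetical “/authorization.txt” policy file and a scanner-side check; no such standard exists today, which is precisely the problem:

```python
# Purely hypothetical sketch of a machine-readable "No Trespassing" sign.
# "authorization.txt" and its single directive are invented for illustration;
# there is no such standard, and honoring it would be voluntary regardless.
import urllib.request

def access_permitted(host: str) -> bool:
    """Fetch a hypothetical /authorization.txt and look for a deny directive."""
    try:
        with urllib.request.urlopen(f"https://{host}/authorization.txt", timeout=5) as resp:
            policy = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return True  # no sign posted: treat the host as publicly accessible
    # Invented directive: "public-access: deny" marks the property line.
    return not any(
        line.strip().lower() == "public-access: deny"
        for line in policy.splitlines()
    )

if __name__ == "__main__":
    print(access_permitted("example.com"))  # placeholder host
```

Even if such a convention existed, it would carry no more inherent legal weight than TOS do, so the clarity ultimately has to come from the statute itself.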

Rapid7’s recommendation

While a website’s Terms of Service (TOS) can be legally enforceable in some contexts, in our opinion the Home Office should not take the position that violations of TOS alone qualify as “unauthorized acts.” TOS are almost always ignored by the vast majority of internet users, and ordinary internet behavior may routinely violate TOS (such as using a pseudonym where a real name is required).

Reading TOS also does not scale for internet-wide scanning, as in the case of automated port scanning and other services that analyze the status of millions of publicly accessible websites and online assets. In addition, if TOS count as “authorization” for the purposes of the CMA, the author of the TOS gains the power to define what is and isn’t a hacking crime under CMA section 1.

To address this lack of clarity, the CMA needs a clearer explanation of what constitutes authorization for accessing technical systems or information through the internet and other forms of connected communications.

Comparison with the CFAA

This issue absolutely exists with the CFAA and is at the core of many of the criticisms of the law. Multiple U.S. cases have rejected the notion that TOS violations alone qualify as “exceeding authorization” under the CFAA, creating a split in the courts. The U.S. Supreme Court’s recent decision in Van Buren v. United States confirmed TOS is an insufficient standard, noting that if TOS violations alone qualify as unauthorized acts for computer crime purposes, “then millions of otherwise law-abiding citizens are criminals.”

Next steps

We hope the Home Office will take these concerns into consideration, both in terms of ensuring the necessary support for security testing tools and security research, and also in being cautious not to go so far with authorities that we open the door to abuses. We’ll continue to engage on these topics wherever possible to help policymakers navigate the nuances and keep advancing security.

You can read Rapid7’s full response to the Home Office’s CFI or our detailed CMA position.
