Rapid7 views independent cybersecurity research and the security community as important drivers of advancing cybersecurity for all, a core Rapid7 value. One way we act on this value is by supporting protection for security researchers acting in good faith. We have spoken out consistently for several years on this issue in the context of the Computer Fraud and Abuse Act (CFAA), as well as Section 1201 of the Digital Millennium Copyright Act (DMCA), the Wassenaar Arrangement export controls, and coordinated vulnerability disclosure adoption, among others. Throughout this advocacy, we have endeavored to balance meaningful protection for independent security research with commonsense security needs and the boundaries of responsible behavior.
This post continues that advocacy, and that balancing act, as we try our hand at drafting protection for security research under the CFAA. Our proposed CFAA security research defense, which is subject to change based on feedback, is below and also available here.
The proposed security research protection would limit the exposure of security researchers to civil lawsuits if the researchers do no harm, act in accordance with best practices, and are motivated to strengthen cybersecurity. This affirmative defense is directed at security research only, and is not intended to be a solution to the CFAA’s other problems, such as the precise limits of contract-based authorization to access computers, or protection against insider threats after the Supreme Court’s recent Van Buren ruling.
We are also clear-eyed about the challenges in placing any limit, delicate or not, on an anti-cybercrime law in an era of very serious and growing cybercrime. Nonetheless, while Rapid7 supports anti-cybercrime laws as necessary to deter and punish malicious actions, we also believe such laws should provide for more certainty and less risk of liability for innocuous or beneficial actions.
Why it isn’t simple to just exempt “good” hackers
Security practitioners must think like attackers. Assets must be protected against known workarounds used by those attempting to defeat defenses with malicious intent, and layered defenses avoid creating a single point of failure. Believe it or not, policymakers must think like attackers as well. Laws and policies with too many loopholes will be ineffective, so clarifications and caveats are often necessary to prevent gaming the system. Both regulations and cybersecurity safeguards can be complicated for the same reasons!
The idea of legal protection for good faith security research under the CFAA suffers from this problem. Cybercriminals will claim to have good intent to confound prosecution or civil lawsuits, and CFAA liability protection should not shield security researchers who act recklessly and cause harm.
Adding to the challenge is the important issue of appropriate boundaries for interacting with another person’s property. The CFAA is almost exclusively aimed at protecting computers that the researcher does not own. This calls for a much narrower security research exemption than we would support under DMCA Sec. 1201, which restricts research on computers the researcher does own.
Striking the balance
To account for these considerations, we dove deep into the bowels of the CFAA and targeted the provisions most at risk of being used in questionable ways against legitimate good faith security researchers: 18 USC 1030(g) and (c)(4)(A)(i)(I). This combo forms the basis for private lawsuits under the CFAA for any “loss” exceeding $5,000. In our estimation, the threat of private lawsuits against legitimate security researchers is more common than criminal prosecution. However, civil lawsuits could still proceed under several circumstances outlined in the CFAA, such as cases involving physical injury or threats to public health or safety, namely, the other factors listed in 1030(c)(4)(A)(i).
Then, taking some cues from the DMCA Sec. 1201 protections for researchers, we define “good faith security research.” This includes steps that legitimate researchers would take (as distinguished from attackers or reckless actors), such as minimizing any damage and making an effort to disclose any discovered vulnerabilities to the computer owner. We also prohibited activities that a researcher motivated solely by profit, rather than improving security, would likely take, such as using any vulnerability derived from the research for commercial activity prior to disclosure.
The result is… admittedly complex! But it also includes safeguards to curb disingenuous misuse of the defense while providing appropriately scoped protection from federal anti-hacking laws for researchers acting responsibly to detect and disclose security vulnerabilities. Please feel free to let us know what you think. The proposed language is below and also available here as a more reader-friendly PDF.
Proposed security researcher protection under CFAA
Annotated with [endnotes]. Proposed language is in italics.
18 USC 1030 – Fraud and related activity in connection with computers
(g) Any person who suffers damage or loss by reason of a violation of this section may maintain a civil action against the violator to obtain compensatory damages and injunctive relief or other equitable relief. A civil action for a violation of this section may be brought only if the conduct involves 1 of the factors set forth in subclauses (I), (II), (III), (IV), or (V) of subsection (c)(4)(A)(i). Damages for a violation involving only conduct described in subsection (c)(4)(A)(i)(I) are limited to economic damages. No action may be brought under this subsection unless such action is begun within 2 years of the date of the act complained of or the date of the discovery of the damage. No action may be brought under this subsection for the negligent design or manufacture of computer hardware, computer software, or firmware.
- (1) Affirmative defense – It shall be an affirmative defense to a civil action under this subsection for conduct that involves subclause (c)(4)(A)(i)(I) that the defendant acted solely for the purpose of good faith security research.
(e) As used in this section –
(13) The term "good faith security research" means good faith testing or investigation to detect one or more security vulnerabilities in software, hardware, or firmware of a protected computer for the purpose of promoting the security or safety of the software, hardware, or firmware.
(A) The person carrying out such activity shall
(i) carry out such activity in a manner reasonably designed to minimize and avoid unnecessary damage or loss to property or persons;
(ii) take reasonable steps, with regard to any information obtained without authorization, to minimize the information the person obtains, retains, and discloses to only that information which the person reasonably believes is directly necessary to test, investigate, or mitigate a security vulnerability;
(iii) take reasonable steps to disclose any security vulnerability derived from such activity to the owner of the protected computer or the Cybersecurity and Infrastructure Security Agency prior to disclosure to any other party;
(iv) wait a reasonable amount of time before publicly disclosing any security vulnerability derived from such activity, taking into consideration the following:
- (I) the severity of the vulnerability,
- (II) the difficulty of mitigating the vulnerability,
- (III) the willingness and ability of the owner of the protected computer to mitigate the vulnerability; and
- (IV) international standards and industry best practices;
(v) not publicly disclose information obtained without authorization that is
- (I) a trade secret as defined under 18 USC 1839(3), without the permission of the owner of the trade secret; or
- (II) the personally identifiable information of another individual, without the permission of that individual; and
(vi) not use a nonpublic security vulnerability derived from such activity for any primarily commercial purpose prior to disclosing the vulnerability to the owner of the protected computer or the Cybersecurity and Infrastructure Security Agency.
(B) Nothing in subsection (e)(13) shall be construed to prohibit or require public disclosure of security vulnerabilities derived from good faith security research.
(C) For purposes of subsection (e)(13), it is not a public disclosure to disclose a vulnerability or other information derived from good faith security research to the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency.
(D) For purposes of subsection (e)(13), the term "security vulnerability" shall have the meaning given the term in 6 USC 1501(17).
 (g): The CFAA provides for both civil and criminal liability. Under existing law, subsection (g) applies only to civil actions, not criminal prosecution. The proposed researcher defense, in italics, would be under (g), so the defense would only apply to civil actions.
 (g): Under existing law, to bring a civil action under subsection (g) for a CFAA violation, the violation must involve one of these five factors listed in 1030(c)(4)(A)(i):
- (I) loss to 1 or more persons during any 1-year period (and, for purposes of an investigation, prosecution, or other proceeding brought by the United States only, loss resulting from a related course of conduct affecting 1 or more other protected computers) aggregating at least $5,000 in value;
- (II) the modification or impairment, or potential modification or impairment, of the medical examination, diagnosis, treatment, or care of 1 or more individuals;
- (III) physical injury to any person;
- (IV) a threat to public health or safety;
- (V) damage affecting a computer used by or for an entity of the United States Government in furtherance of the administration of justice, national defense, or national security.
 (g)(1): This applies the researcher protection to civil suits involving the first factor listed above: economic loss to one or more persons aggregating at least $5,000 over one year. A civil suit may still proceed if the violation involves any of the other four factors: potential modification or impairment of medical care, physical injury, threat to public safety, or damage affecting certain US government computers.
 (g)(1): The researcher protection would apply solely to good faith security research, but not if the researcher has other purposes, such as hacktivism, sale of vulnerabilities, stock shorts, malicious intent, etc. The phrase "solely for the purpose of good faith security research" is included in the Library of Congress' 2018 exemption for security research to Sec. 1201 of the Digital Millennium Copyright Act (DMCA). See 37 CFR 201.40(b)(11)(i).
 (e)(13): The 2018 exemption for security research to DMCA Sec. 1201 uses similar language: "for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, [...] and where the information derived from the activity is used primarily to promote the security or safety of the class of devices or machines on which the computer program operates, or those who use such devices or machines[...]" See 37 CFR 201.40(b)(11)(ii).
 (e)(13)(A)(i): To be eligible for the defense, researchers would need to take reasonable steps to ensure they do not cause any harm while accessing or damaging computers without authorization. The phrase "reasonably designed" signifies that the researcher should take objectively reasonable steps to prevent harm, but also recognizes that the researcher may not be responsible for unreasonably improbable risks. This provision also draws from the phrase "carried out in an environment designed to avoid any harm to individuals or the public" in the 2018 security research exemption to DMCA Sec. 1201. See 37 CFR 201.40(b)(11)(ii).
 (e)(13)(A)(ii): Researchers would need to take care to avoid collecting or disclosing data in excess of what is needed for security research. For example, public disclosure of an entire database is not necessary to correct a security flaw in the means of access to the database. This applies only to information obtained without authorization under CFAA during the course of the research, and not other types of information.
 (e)(13)(A)(iii): An act of security research may not detect a vulnerability, and the researcher defense should apply even in the absence of a finding of a vulnerability. However, if a security vulnerability is detected or derived from the research, the researcher must endeavor to first disclose it to the computer owner or CISA. This applies only to vulnerabilities derived from research subject to the CFAA security research defense, not vulnerabilities derived from other research.
 (e)(13)(A)(iv): While the provision does not prohibit public disclosure, it requires the researcher to wait a reasonable amount of time before doing so. This provision includes factors to consider in determining what is reasonable. Similar language is included in the Vulnerability Disclosure Policy of the Dept. of Defense. See HackerOne, DoD Vulnerability Disclosure Policy, Nov. 21, 2016, https://hackerone.com/deptofdefense.
 (e)(13)(A)(v): This is to prevent public disclosures of sensitive personal or business information. Private disclosure is not prohibited, but note that (e)(13) and (A)(ii) require that the researcher use this information solely for security and reasonably believe this information is necessary to strengthen security. Like (A)(ii), this applies only to information obtained without authorization under CFAA.
 (e)(13)(A)(vi): This is to prevent use of the security research defense to gain a commercial advantage in cybersecurity products or services before the results of that research are disclosed. The goal is to avoid a scenario whereby a researcher finds a vulnerability and privately incorporates it into a commercial security product to gain a competitive edge, rather than disclose the vulnerability to strengthen security more broadly. Like (A)(iii), this applies only to vulnerabilities derived from research subject to the CFAA security research defense.
 (e)(13)(D): As defined by the Cybersecurity Information Sharing Act of 2015, the term “security vulnerability” means any attribute of hardware, software, process, or procedure that could enable or facilitate the defeat of a security control.