
On our latest episode of Security Nation, we caught up with Casey Ellis, founder and CTO at Bugcrowd. Joining us during the 2020 RSA Conference, he took the time to discuss normalizing vulnerability disclosure, the safe harbor debate, and the legal implications of crowdsourced security testing.

Rethinking penetration testing

Prior to founding Bugcrowd in 2012, Casey spent seven years in penetration testing, eventually launching his own pen test startup in Australia. With Bugcrowd, he shifted focus to private crowdsourced testing, creating a platform for vulnerability disclosure and bug bounty programs.

Harnessing the collective creative power of the white-hat community, crowdsourced security testing can meet changing security threats in real time. Monitoring researcher behavior on the Bugcrowd platform, much as an HR team would study its workforce, helps deliver more tailored private crowdsourced testing engagements. But Casey notes it also deepens security research by identifying who is hacking and for what purpose. Where are they located? Are they novices or professionals? Are they trustworthy?

Part of Casey’s mission is to use hacker, or researcher, behavior on Bugcrowd’s public platforms to improve private security. This, in turn, can help normalize vuln disclosure in the private sphere, promoting the idea that hackers aren’t just thieves; they’re guardians. In what Casey describes as a “milestone,” Bugcrowd received a letter from the Department of Justice in support of this research.

Don’t neglect the fine print

Increased contribution, collaboration, and adoption hinge on the meta-goal of normalizing disclosure. Casey aims to teach typical users to view crowdsourced security testing as a “neighborhood watch” for the internet, one capable of outpacing and informing legal measures in the still-murky realm of cybersecurity regulation.

Simplifying security communications, such as Terms of Service (TOS), is key. Casey stresses that dismissing TOS and user agreements as lengthy legalese no one reads endangers compliance. A better approach is to treat TOS as a valuable opportunity for companies to engage directly with users and to align goals across the organization. How can we articulate a consistent position on evolving security threats? What’s the best way to ensure non-native English speakers understand what they’re agreeing to?

While security, legal, and marketing teams work in tandem to draft TOS, their guiding concerns often don’t align neatly. Security and marketing tend to favor plain language, keeping content digestible and transparent while increasing consumer engagement. Legal teams, on the other hand, seek to minimize corporate risk. They gravitate toward wording that’s nuanced with regard to case history, but far less intelligible to the average reader.

One solution is to develop boilerplate TOS language that is shareable between IT and legal teams. Bugcrowd offers a general template for U.S. clients that acknowledges and clarifies hacking laws, circumvention laws, the Digital Millennium Copyright Act (DMCA), and their global equivalents. This reduces the typical back-and-forth that comes with hashing out the fine print.
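
Alongside the legal boilerplate itself, a machine-readable pointer can make a disclosure policy easy for researchers to find. Here is a minimal sketch using the security.txt convention (RFC 9116), served from /.well-known/security.txt; every URL and address below is a placeholder:

```
# https://example.com/.well-known/security.txt (placeholder values)
Contact: mailto:security@example.com
Expires: 2025-12-31T23:59:59.000Z
Policy: https://example.com/disclosure-policy
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
```

Contact and Expires are the two fields the convention requires; Policy links out to the full terms a researcher is agreeing to.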

Another legal issue involves the distinction (or lack thereof) between hacker and researcher. Casey notes that the Computer Fraud and Abuse Act (CFAA) assumes “bad faith,” or criminal intent, on the part of anyone who attempts unauthorized access by default, which essentially erases the white-hat community.

This is because the CFAA doesn’t define consent. When it comes to the internet, the concept of consent quickly becomes tricky. If you publish something online and make it publicly accessible, isn’t consent implied? How should we distinguish between exceeding authorized access and accessing without authorization?

With the growing variety of bug bounty and vuln disclosure programs, it’s unclear how long a definition presupposing bad faith can remain legally tenable. Still, Casey doubts that the CFAA will change in the near future. The issue touches on mens rea, or criminal intent, which is notoriously difficult to prove.

But leveraging the Bugcrowd platform can help. Establishing patterns in researcher behavior provides tangible data that informs the distinction between hackers and researchers. The idea is to keep researchers safe from unjust legal repercussions and let them keep doing what they do best: testing systems for vulnerabilities so they can be fixed.
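
As a purely hypothetical illustration of the kind of pattern-building described here (not Bugcrowd’s actual model, and with made-up signal names), a platform might roll behavioral history into a rough good-faith indicator like this:

```python
from dataclasses import dataclass

@dataclass
class ResearcherHistory:
    """Hypothetical per-researcher activity summary."""
    reports_submitted: int      # total submissions
    reports_valid: int          # submissions triaged as real issues
    scope_violations: int       # findings reported outside program scope
    avg_disclosure_days: float  # mean time from finding to report

def good_faith_score(h: ResearcherHistory) -> float:
    """Toy heuristic combining signals into a 0..1 good-faith indicator.

    A real platform would draw on far richer data; this only sketches
    how a behavioral track record can inform the hacker-vs.-researcher
    distinction.
    """
    if h.reports_submitted == 0:
        return 0.0  # no track record yet
    validity = h.reports_valid / h.reports_submitted
    scope_penalty = min(1.0, h.scope_violations / h.reports_submitted)
    promptness = 1.0 if h.avg_disclosure_days <= 30 else 30 / h.avg_disclosure_days
    return max(0.0, validity * promptness - scope_penalty)

# A researcher with mostly valid, in-scope, promptly reported findings
print(good_faith_score(ResearcherHistory(40, 32, 1, 12.0)))  # ~0.78
```

The exact weights are arbitrary; the point is that a documented track record gives defenders, and potentially courts, something more concrete than intent alone to reason about.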

Finding a safe harbor

The debate continues about the trade-offs involved in offering hackers safe harbor. If the demand from hackers is complete access and freedom to do what they want, with immunity from prosecution, they can expect pushback from many a corporate lawyer. Companies want to know they’re protected, too, and that they aren’t creating legal loopholes or backdoor entry points. When tech vendors relax their permissions excessively, it’s easy for anyone to pose as a researcher and create chaos. Casey points out that the EFF and other digital rights groups acknowledge it’s difficult to guarantee ethical hackers full safe harbor.

Partial safe harbor is more moderate, if less well defined. Casey jokingly likens full and partial safe harbor to driving in California, where speed limits might be interpreted as “speed recommendations.”

Shifting or unclearly communicated parameters for acceptable activities can lead to confusion. If a policy includes a list of permitted activities, for instance, is anything outside those activities necessarily off-limits? Existing models can differ, sometimes wildly: what’s cool in Bounty Land could be shady under a framework like disclose.io. Negotiating a middle ground that leaves room for good-faith hacking protections in company policy remains a work in progress.

Listen to the full interview

We’d like to thank Casey for sharing his insights with the Rapid7 community. Check out the full interview and remember to subscribe so you don’t miss out on future episodes of Security Nation.