
We've been reading Dan Auerbach's post with interest here at Rapid7. With the activity on Capitol Hill surrounding cybersecurity legislation, I thought it'd be worthwhile to share his post and discuss:

Security for the 99%:

“Fundamentally, it's very simple: fewer software vulnerabilities means more security. Once a vulnerability is patched and an upgraded version of software is available and in use, that increases safety for all of us. Ensuring that the right mechanisms are in place to maximize this baseline security should be a major focus area of any organized effort to secure our critical and other Internet infrastructure. This means encouraging the disclosure of vulnerabilities when they are found so that they can be fixed, and no longer exploited.”

It's hard to argue with his basic point that companies should implement effective vulnerability management as a core tenet of their security program, and that if companies did this broadly it would enhance security both individually and collectively. I agree, though I'd put a few other things alongside it as high-priority baseline activities, such as configuration assessment and management, broad hardening including enabling DEP (Data Execution Prevention) for high-frequency applications, security awareness training, and identity management (see Critical Threats and Incredible Hype: Reflections on Security in 2011 and What Matters in 2012). Most security practitioners will probably agree that these are a core part of any best-practices baseline.
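As a concrete illustration of the hardening item above, here is a minimal sketch of how one might check the system-wide DEP policy on a Windows host. It assumes Python with ctypes and uses the Win32 GetSystemDEPPolicy call; the script and its output formatting are illustrative, not part of Dan's post or any Rapid7 tooling.

```python
import ctypes

# Illustrative sketch (Windows-only): query the system-wide Data Execution
# Prevention (DEP) policy via the Win32 GetSystemDEPPolicy API.
DEP_POLICIES = {
    0: "AlwaysOff",  # DEP disabled for all processes
    1: "AlwaysOn",   # DEP enforced for all processes
    2: "OptIn",      # DEP only for system binaries and opted-in applications
    3: "OptOut",     # DEP for everything except explicitly exempted applications
}

policy = ctypes.windll.kernel32.GetSystemDEPPolicy()
print("System DEP policy: %s" % DEP_POLICIES.get(policy, "Unknown (%d)" % policy))
```

On a hardened host you'd generally want to see OptOut or AlwaysOn rather than the default OptIn.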

I'm less convinced by some of Dan's other points, though. He categorically opposes information sharing on the grounds of potential privacy intrusions if it is misused. While I agree with the potential for abuse, I think the pertinent issue is one of specificity and bounds, not rejecting the concept of sharing completely. I believe that specific information sharing is clearly in the public interest and can lead to better security for everyone. For example, sharing detailed information about actual intrusions and intrusion attempts between private and public entities can broadly enhance security: what methods were used, which addresses the attacks came from, which addresses are used for command-and-control (C&C) traffic, what we can infer about the actors, and so on. I'm sure it won't be easy to get the right bounds and protections in place, but there's clearly value in figuring it out. On the other hand, if the “information sharing” concept is left vague, there's plenty of room for potential abuse, as Dan points out.
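To make the specificity-and-bounds point concrete, here is a hypothetical sketch of the kind of narrowly scoped intrusion record that could be shared. The IntrusionReport shape and its field names are my own illustration (not from Dan's post or any particular standard); the idea is that only technical indicators travel, with no user or customer data.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record shape: technical indicators only, no personal data.
@dataclass
class IntrusionReport:
    method: str                                                 # how the attack worked
    source_addresses: List[str] = field(default_factory=list)   # addresses the attack came from
    c2_addresses: List[str] = field(default_factory=list)       # command-and-control endpoints
    actor_notes: str = ""                                       # analyst inference about the actors

report = IntrusionReport(
    method="spear-phishing with a malicious PDF attachment",
    source_addresses=["203.0.113.7"],    # documentation-range IP, illustrative only
    c2_addresses=["198.51.100.23"],      # documentation-range IP, illustrative only
    actor_notes="tooling overlaps with a previously observed campaign",
)
print(report)
```

Anything outside those fields (account names, message contents, and so on) simply has no place to go, which is one way to build the bounds into the format itself.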

Interestingly, while opposing most information sharing, Dan proposes a specific type of information sharing that I suspect won't be palatable to the affected government agencies: mandatory sharing of vulnerabilities with the vendor whose product is affected. It's hard to deny that this would serve the public good, since it would reduce the number of security flaws in commercially shipping products, but because it interferes with some of the classified agencies' offensive missions, I don't hold out much hope of seeing it in the legislation. In today's environment, where “cyber” capabilities are increasingly cast as the “weapons” of the future (in contrast to traditional defense capabilities like tanks and planes as the weapons of the past), I expect a number of interests in the government will fight hard to keep that provision out.

Take a moment to give it a read, and, as always, I welcome your thoughts in the comments.