Last updated at Wed, 27 Sep 2017 18:38:28 GMT
This is a guest post from Shay Chen, an information security researcher, analyst, tool author, and speaker, and the guy behind the TECAPI, WAVSEP, and WAFEP benchmarks.
Are social attacks that much easier to use, or is it the technology gap of exploitation engines that make social attacks more appealing?
While reading through the latest Verizon Data Breach Investigations Report, I naturally took note of the Web App Hacking section and noticed the diversity of attacks presented under that category. One of the most notable findings was how prominent the use of stolen credentials, and social vectors in general, turned out to be in comparison to "traditional" web attacks. Even SQL Injection (SQLi) - probably the most widely known (by humans) and most widely supported (by tools) attack vector - falls far behind, and numerous application-level attack vectors are not represented in the charts at all.
Although it's obvious that in 2016 there are many additional attack vectors that can have a dire impact, attacks tied to the social element are still far more prominent, and the "traditional" web attacks being used all seem to be ones supported out-of-the-box by the various scan engines out there.
It might be interesting to investigate a theory around the subject: are the attackers limited to attacks supported by commonly available tools? Are they further limited by the engines not catching up with the recent technology complexity?
With the recent advancements and changes in web technologies - single-page applications, applications referencing multiple domains, exotic and complicated input vectors, and scan barriers such as anti-CSRF mechanisms and CAPTCHA variations - even enterprise-scale scanners have a hard time scanning modern applications in a point-and-shoot scenario, and the typical single-page application may require scan policy optimization just to get the scan to work properly, let alone get the most out of it.
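To make one of those barriers concrete: an anti-CSRF token means a tool cannot simply replay a recorded POST request - it must first re-fetch the form, extract the fresh per-session token, and merge it into every payload it sends. A minimal Python sketch of that extra step follows; the `csrf_token` field name and the HTML shape are illustrative assumptions, not any particular framework's convention.

```python
import re
from typing import Optional

def extract_csrf_token(html: str, field_name: str = "csrf_token") -> Optional[str]:
    # Look for a hidden input like <input name="csrf_token" value="..."> in the
    # freshly fetched page. A scanner must repeat this before every probe,
    # because the token is typically tied to the session or even the request.
    pattern = (
        r'<input[^>]*name=["\']' + re.escape(field_name) +
        r'["\'][^>]*value=["\']([^"\']+)["\']'
    )
    match = re.search(pattern, html, re.IGNORECASE)
    return match.group(1) if match else None

def build_probe(form_fields: dict, token: str, field_name: str = "csrf_token") -> dict:
    # Merge the fresh token into the attack payload so the request passes the
    # anti-CSRF check before the injected values are ever evaluated.
    return {**form_fields, field_name: token}
```

Each probe now costs an extra page fetch plus parsing, and that is the *simple* case - a single-page application may only render the form (and the token) after executing client-side JavaScript, which is exactly where point-and-shoot engines start to fall over.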
Running phishing campaigns still requires a level of investment/effort from the attacker, at least as much as the configuration and use of capable, automated exploitation tools. Attackers appear to be choosing the former and that's a signal that presently there is a better ROI for these types of attacks.
If the exploitation engines that attackers are using face the same challenges as vulnerability scanner vendors - catching up with technology - then perhaps the technology complexity for automated exploitation engines is the real barrier that makes the social elements more appealing, and not only the availability of credentials and the success ratio of social attacks.
How about testing it for yourself?
If you have a modern single-page application in your organization (Angular, React, etc.), and some method of monitoring attacks (a WAF, logs, etc.), take note of the following:
- Which attacks are being executed on your apps?
- Which pages/methods and parameters are getting attacked on a regular basis, and which pages/methods are not?
- Are the pages being spared technologically complex to crawl, activate, or identify?
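One rough way to start answering those questions from plain access logs is to tally which paths attract injection-looking payloads and which never do. The sketch below assumes Apache/Nginx-style log lines and uses a deliberately tiny SQLi heuristic - the regexes are illustrative assumptions, not a detection rule set.

```python
import re
from collections import Counter
from urllib.parse import unquote_plus

# A few crude SQLi indicators, purely for illustration.
SQLI_HINTS = re.compile(r"(union\s+select|'\s*or\s*'1'='1|sleep\()", re.IGNORECASE)

def tally_attacked_endpoints(log_lines):
    # Count, per path, how many requests carried an SQLi-looking query string.
    hits = Counter()
    for line in log_lines:
        m = re.search(r'"(?:GET|POST)\s+(\S+)', line)
        if not m:
            continue
        path, _, query = m.group(1).partition("?")
        if SQLI_HINTS.search(unquote_plus(query)):
            hits[path] += 1
    return hits
```

Comparing this tally against your full route list shows which endpoints attackers' tools reach - and, more interestingly, which complex-to-crawl pages they never touch.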
Maybe complexity isn't the enemy of security after all.