
The first post in this series kicked off our history of the development of web application firewalls with a discussion of what the earliest technology was capable of. Early WAFs were based on pattern recognition. That made them fast, but it also made it easy for attackers to sidestep the rigid patterns that were the building blocks of first-gen WAFs.

If the problem is that stone-age WAFs have stateless rules, then the obvious next step is to add state to WAF processing. Instead of simple input and output filters, the front-end protection on the app becomes a stateful filter that maintains context across multiple requests. The stateful filter can combine information from the request with information from the application’s response. This approach continues to treat the app as a black box, and security can be provided independently of the underlying technology. As long as the underlying application speaks the same protocol (and with the prevalence of web services, most applications are built on HTTP transports), the filter can provide security coverage across a wide range of technologies.
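To make that concrete, here is a minimal sketch in Python of what such a stateful filter might look like. The class name, hooks, and thresholds are hypothetical rather than drawn from any particular product; the point is only that per-client context persists across request/response pairs instead of each message being judged in isolation.

```python
import time
from collections import defaultdict, deque

class StatefulFilter:
    """Illustrative only: keeps per-client context across request/response pairs."""

    def __init__(self, error_limit=5, window_seconds=60):
        self.error_limit = error_limit    # server errors tolerated before a client is cut off
        self.window = window_seconds      # length of the sliding window, in seconds
        self.errors = defaultdict(deque)  # client IP -> timestamps of recent error responses
        self.blocked = set()              # clients the filter has decided to block

    def allow_request(self, client_ip):
        """Consulted before a request is forwarded to the application."""
        return client_ip not in self.blocked

    def observe_response(self, client_ip, status_code):
        """Consulted after the application responds; feeds response data back into state."""
        if status_code < 500:
            return
        now = time.time()
        history = self.errors[client_ip]
        history.append(now)
        # Drop error timestamps that have fallen out of the window.
        while history and now - history[0] > self.window:
            history.popleft()
        # A burst of server errors from one client is a common sign of probing,
        # something a stateless per-request rule cannot see.
        if len(history) >= self.error_limit:
            self.blocked.add(client_ip)
```

A stateless first-gen filter sees each error response on its own; the stateful version remembers enough history to notice the pattern.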

Stateful second-generation WAFs were an important advance. Rather than treating requests and responses as fixed items, these products could modify incoming requests and outgoing responses. By maintaining state across requests, WAF designers could implement controls to limit data exfiltration, impose rate limits on requests, and cap the size of outgoing responses.
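A rough sketch of those two controls, again with hypothetical names and thresholds, might look like the following: requests are counted per client in one-minute buckets, and responses above a size ceiling are refused.

```python
import time
from collections import defaultdict

class TrafficLimiter:
    """Illustrative only: a per-client request rate limit plus a response-size cap."""

    def __init__(self, max_requests_per_minute=120, max_response_bytes=1_000_000):
        self.max_rpm = max_requests_per_minute
        self.max_bytes = max_response_bytes
        self.counts = defaultdict(int)  # (client IP, minute bucket) -> request count

    def allow_request(self, client_ip):
        """Count the request against the client's current one-minute bucket."""
        bucket = (client_ip, int(time.time() // 60))
        self.counts[bucket] += 1
        # A production implementation would also prune old buckets.
        return self.counts[bucket] <= self.max_rpm

    def allow_response(self, body):
        """Refuse unusually large responses as a crude brake on data exfiltration."""
        return len(body) <= self.max_bytes
```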

Maintaining state across multiple requests also assisted with anomalous behavior detection. For a simple example, consider a web application whose clients typically send requests infrequently: an address that makes hundreds of requests in a minute can be labeled an anomaly. In practice, though, early stateful WAFs generated anomaly alerts so frequently that it was not possible to build a process that investigated every single alert in detail.
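As a toy version of that kind of check (the function name and threshold are illustrative, not taken from any real WAF), an address could be flagged when its request count over the last minute sits far above what the rest of the client population is doing:

```python
import statistics
from collections import Counter

def find_anomalous_ips(recent_client_ips, threshold_sigmas=3.0):
    """recent_client_ips: one entry per request observed in the last minute."""
    counts = Counter(recent_client_ips)
    if len(counts) < 2:
        return []
    mean = statistics.mean(counts.values())
    spread = statistics.pstdev(counts.values()) or 1.0
    # Flag any address whose request rate is far above the population baseline.
    return [ip for ip, n in counts.items() if n > mean + threshold_sigmas * spread]
```

A crude baseline like this fires on every traffic spike, which is exactly how early stateful WAFs ended up producing more anomaly alerts than anyone could investigate.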

The disadvantage of maintaining state in the WAF is that the WAF’s state has to model the application’s state to provide deep protection. Or, more formally, deep protection requires that the WAF correctly model application execution in order to predict its behavior. In practice, changes to the application to add new features change the way it holds and processes internal state, putting security teams on a treadmill of tweaking the WAF’s model of the application in response to ongoing development. Yesterday’s safe requests can be today’s exploits, and yesterday’s exploits can be today’s hot new feature. Securing the entire application stack of hypervisors, VMs, host operating systems, and databases (arguably, even the client browser) from injection, cross-site scripting, cross-site request forgery, insecure deserialization, and anything else that might be dreamed up is hard. In fact, predicting the behavior of an arbitrary program amounts, in the general case, to solving the halting problem, one of the fundamental problems of study in computer science. (It’s also why the Internet was built using the end-to-end principle and largely avoided keeping complex state in the network.)

To find out how we escaped from trying to model black-box behavior, stay tuned for the next installment!