4 Ways Legacy WAF Fails to Protect Your Apps
The web application firewall (WAF) is a long-standing technology that has been handed down from generation to generation, from datacenter to cloud to serverless — yet it’s rarely effective and largely disliked. In the land of web application security, there are a few not-so-well-kept secrets, arguably none bigger than this: the legacy WAF has survived not by being excellent, but by being mandated.
In this post, we’ll explore the history of the WAF, including why it’s ubiquitous, even though it often doesn’t fully serve its purpose — and what you can do about it.
The legacy WAF: an antiquated technology
Traditional, perimeter-based legacy WAFs are built on antiquated technology created to help stem the rise of application security vulnerabilities, which were being discovered faster than organizations could keep up with. Static and dynamic analysis tools dumped huge quantities of bugs onto development teams that had no possible way to fix so many code issues.
Meant as a stopgap measure to address this problem, the legacy WAF made it possible to filter out the rampant SQLi, command execution, and XSS attacks that were threatening to consume all of the development and security teams’ available resources. The idea behind the legacy WAF was that the application bugs and security flaws would be triaged for now, then eventually fixed in the code at the root of the problem.
At the time, a drop-in web application security filter seemed like a good idea. Sure, it sometimes led to blocking legitimate traffic, but it provided at least some level of protection at the application layer — a place where compliance regimes were desperate for solutions. Then PCI (Payment Card Industry) regulations got involved, and the whole landscape changed.
PCI requirement 6.6 states that you have to either have a WAF in place or do a thorough code review on every change to the application. Given the unappealing nature of the second option, most organizations read this as a mandate to get a WAF. In other words, security stakeholders weren’t installing WAFs due to their security value — they just wanted to pass their mandatory PCI certification. It’s fair to say that PCI grew the legacy WAF market from an interesting idea to the behemoth that it is today.
And the legacy WAF continues to hang around, an outdated technology propped up by legalese rather than actual utility, providing a false sense of security without doing much to actually deliver it. If that isn’t enough for you to show your legacy WAF the door, here are four more reasons why legacy WAFs should be replaced.
1. The minimum WAF rules approach is broken
A common side effect of a legacy WAF implementation is the blocking of legitimate traffic. In the application security business, we call these false positives. Don’t be fooled by this innocuous phrasing — what it means is that customers aren’t able to buy things they want, upload their latest vacation photos, or generally use the functionality of your applications. It’s the kind of experience that can quickly turn current customers into former customers.
To combat false positives from legacy WAFs, most companies run the bare minimum number of rules to get by. This means that only the most obvious and most egregious attacks are caught, while everything else just sails right past the filter. Simple rules mean easy-to-bypass rules, leaving you with an ineffective WAF.
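To make this concrete, here is a minimal Python sketch of a “bare minimum” filter: a single regex rule of the kind that survives the false-positive purge. The rule and payloads are illustrative, not any vendor’s actual ruleset, but the evasion shown (an inline SQL comment standing in for whitespace) is a classic real-world bypass technique.

```python
import re

# A hypothetical "bare minimum" rule: block any request whose query
# string matches an obvious SQL-injection pattern.
NAIVE_SQLI_RULE = re.compile(r"union\s+select", re.IGNORECASE)

def waf_allows(query_string: str) -> bool:
    """Return True if the request passes the filter."""
    return NAIVE_SQLI_RULE.search(query_string) is None

# The most egregious attack is caught...
assert not waf_allows("id=1 UNION SELECT password FROM users")

# ...but a trivially obfuscated variant sails right past the filter,
# because /**/ is not whitespace as far as the regex is concerned.
assert waf_allows("id=1 UNION/**/SELECT password FROM users")
```

Because SQL engines treat inline comments like `/**/` as token separators, the obfuscated payload still executes on the backend even though the filter never fires. Every evasion variant demands another rule, and every added rule risks another false positive.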
2. Learning mode is broken at speed
One way legacy WAFs try to cut down on false positives and avoid breaking valid traffic is through a “learning mode.” In learning mode, the legacy WAF learns what normal traffic looks like versus what malicious traffic looks like and protects accordingly. A lot can be said about how tough it is to get learning mode right, even under perfect conditions, but let’s skip to the real problem we see in production environments.
Learning mode takes time. Legacy WAFs that need to learn to recognize “normal behavior” require a certain amount of traffic review before they can actively block everything else. It may take a few hours to learn safe application traffic patterns.
When you move from waterfall to agile and DevOps, deployment speed gets faster and faster. The application code changes weekly, daily or even hourly. If you deploy using anything close to a modern cadence, putting the legacy WAF in learning mode on every code change means you are always in learning mode. Essentially, any kind of learning mode, when applied to modern application development techniques, just can’t keep up with the pace of production.
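The mismatch can be sketched in a few lines of Python. This hypothetical learner (the class, threshold, and method names are all invented for illustration) must observe a minimum volume of traffic per endpoint before it trusts its baseline enough to enforce, and every deploy resets that baseline.

```python
from collections import defaultdict

# Illustrative threshold: requests to observe before enforcement begins.
LEARNING_THRESHOLD = 10_000

class LearningWaf:
    """Toy model of a learning-mode WAF (not any vendor's design)."""

    def __init__(self):
        self.observed = defaultdict(int)

    def on_request(self, endpoint: str) -> str:
        self.observed[endpoint] += 1
        if self.observed[endpoint] < LEARNING_THRESHOLD:
            return "learn"    # still profiling: everything is allowed through
        return "enforce"

    def on_deploy(self):
        # A code change invalidates the learned baseline, so the counters
        # reset -- and so does the unprotected window.
        self.observed.clear()

waf = LearningWaf()
for _ in range(LEARNING_THRESHOLD):
    waf.on_request("/checkout")
assert waf.on_request("/checkout") == "enforce"

# Daily (or hourly) deploys wipe the baseline before it is ever used:
# the WAF is permanently stuck in learning mode.
waf.on_deploy()
assert waf.on_request("/checkout") == "learn"
```

The numbers here are made up, but the structural problem is not: if deploys arrive faster than the learning window closes, the enforcement state is never reached.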
3. Monitoring mode is not the same as compliance
The fact that legacy WAFs have survived due to legal mandates rather than effectiveness isn’t the only open secret in the business. You also won’t be shocked to hear that most companies never put their legacy WAF into active blocking mode. After all, blocking mode has high false positive rates and breaks your legitimate traffic, so turning it on is a good idea only if you’re hoping to break your application.
Instead, the legacy WAF is run in monitoring mode, where it watches traffic and logs any event as an attack. When it’s time for audit, the legacy WAF gets flipped on for a short bit, then back off again. In reality, of course, some auditors don’t even care if you have it in active blocking mode. Just having a WAF in place is good enough for them.
Running your WAF in monitoring mode is not an effective control. All it does is add a false sense of security and additional overhead to the security team to evaluate logged events. In this scenario, you’re spending money where it doesn’t need to be spent while adding close to zero defensive measures.
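A rough Python sketch shows what monitoring mode actually buys you. The mode names and the detection stub here are generic illustrations, not any vendor’s configuration syntax; the point is the single branch that separates a real control from a log line.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("waf")

def handle_request(request: str, mode: str, looks_malicious) -> bool:
    """Return True if the request reaches the application.

    `looks_malicious` stands in for whatever detection logic the WAF runs.
    """
    if looks_malicious(request):
        log.warning("attack detected: %r", request)
        if mode == "blocking":
            return False   # the request is actually stopped
    return True            # monitoring mode: logged, then waved through

# Hypothetical detector for one obvious SQLi payload.
is_sqli = lambda r: "' OR 1=1" in r

# In monitoring mode the "control" produces a log entry and nothing else:
assert handle_request("q=' OR 1=1 --", mode="monitoring", looks_malicious=is_sqli) is True
assert handle_request("q=' OR 1=1 --", mode="blocking", looks_malicious=is_sqli) is False
```

Every one of those log entries still has to be triaged by a human, which is exactly the overhead described above: spend without defense.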
4. Traditional WAFs require costly rules tuning
Another given when operating a traditional perimeter-based WAF: get used to rules tuning to eliminate false positives — and lots of them. Most traditional WAF vendors have a way of glossing over this necessary maintenance in an effort to make the process acceptable to customers who know they need to protect their production apps and APIs. They’ll sell you a “services package,” where tuning is marketed and sold as a white-glove experience — that sounds good, but you will pay a premium for offloading the work. Other security vendors actually tell customers that tuning out false positives should be an expected part of WAF ownership!
Whether outsourced or handled internally, rules tuning a legacy WAF can incur additional cost that organizations often forget to take into account when evaluating vendors. Professional services can be hit or miss based on quality of staff and complexity of tuning — and costs can add up quickly when based on an hourly rate. Taking tuning in-house can be a viable option at first, if you have knowledgeable staff to do it, but it’s not scalable as an application’s codebase becomes more complex and the attack surface grows.
WAFs with antiquated management user interfaces built a decade or more ago can be difficult to learn and use. Additionally, WAF tuning eventually grows from a team member’s part-time responsibility into a full-time role as more apps and APIs are deployed to fuel an organization’s growth and competitiveness in the market. These additional staff are hired and trained specifically to manage the WAF, consuming headcount, department funding, and resources.
The solution: a next-gen WAF designed to protect the modern web
Don’t settle for compliance alone. There are new options for adding defense at the web application layer and achieving compliance at the same time — for example, using a Next-Gen WAF (NGWAF) to provide application-aware defensive coverage. Instead of just checking a box for PCI, you can actively defend against the OWASP Top 10, stop account takeovers, instrument business logic protections, and fight off bots, all without interfering with legitimate customer activity.