Web Application Firewalls and the Future of Website Security

Web application firewalls have been around for over 20 years, but recent advances in how they block malicious traffic and how they are managed by development teams prompted us to look back at the history of firewalls and WAFs, and ahead at where website security is heading.

What is a Firewall

Computing firewalls were first developed while the Internet was still in its infancy in the 1980s. Their purpose was to act as a virtual shield between internal networks and servers and external networks (such as the Internet), so that traffic between the two could be monitored and blocked if it was deemed to be suspicious based on preset rules.

The first generation of firewalls examined individual data packets to determine where each packet came from and whether it matched rules allowing it to pass into the network. Later firewalls inspected packets based on their state within a connection (i.e., whether a packet is part of an ongoing stream of data, marks the start or end of a stream, or does not relate to other packets at all), and subsequent generations moved up into the application layer.

In contrast to network firewalls, Web Application Firewalls, or WAFs, inspect HTTP traffic going to specific web applications rather than traffic between networks. WAFs were first deployed in data centers, but are now often deployed in the cloud as a reverse proxy. This means the WAF sits between a website’s origin server and a visitor’s browser, acting as a proxy for the origin server so that it can inspect traffic and either block it or pass it through to the origin.


Types of Firewalls and Web Application Firewalls

The first WAFs were developed in the 1990s; the open-source WAF ModSecurity was first released in 2002 and is still widely used today. In addition to being a popular tool that teams can set up and deploy themselves, ModSecurity serves as the backbone for many of the WAFs offered by Content Delivery Networks, including CloudFlare and Akamai.

WAFs aim to protect against web application-specific attacks, including Cross-Site Scripting, SQL injection, cookie poisoning, known platform vulnerabilities, and more. They prevent websites and apps from unknowingly letting attackers into their systems or exposing user data.

The majority of WAFs do this by employing a set of rules and using those rules to inspect traffic before it is let through to the website origin. ModSecurity and many other WAFs base their initial rulesets on the Open Web Application Security Project (OWASP) Top 10, a list of the most critical web application attacks that OWASP has published since 2003. The current list can be viewed on the OWASP website. Below we go into how these rules-based WAFs work and what other solutions have arisen in the WAF marketplace in recent years.

How Firewall Rules Block Threats

Rules-based WAFs deployed as reverse proxies check all traffic that attempts to connect to a website’s origin server against a list of rules and either block the traffic or let it through. Whether these rules block or allow traffic by default depends on whether the WAF is set up with a negative model or a positive model.

Negative and Positive Security Models

Negative security models, which have traditionally been the default WAF configuration, allow all traffic except requests that match rules identifying known threats. This configuration protects legitimate traffic from being incorrectly labelled as an attack, but it also requires a large database of rules and attack signatures to scan against. It is a good solution for those looking to block known attack types and system vulnerabilities with minimal setup, as many WAFs come with automatic deployment of the OWASP Top 10 ruleset along with other rulesets.

A positive security model takes a different approach by blocking all incoming traffic unless it meets requirements showing it is not malicious - for example, requirements based on location or browser type. This approach needs fewer rules because it blocks traffic by default, but it also requires websites to have an intimate knowledge of their visitor profile so that legitimate users are not blocked.
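As a rough sketch of the difference, here is how the two models might be expressed as ModSecurity rules, shown side by side for comparison (a real deployment would pick one approach). The rule IDs, paths, and patterns below are illustrative, not taken from any published ruleset.

```
# Negative model: allow everything, deny requests matching known attack
# signatures. This simplified pattern looks for a classic SQL injection
# fragment in any request argument; real rulesets such as the OWASP CRS
# use far more thorough patterns.
SecRule ARGS "@rx (?i:union\s+select)" \
    "id:100001,phase:2,deny,status:403,msg:'Possible SQL injection'"

# Positive model: explicitly allow requests that match the expected
# visitor profile (here, GET or POST requests to paths under /shop/),
# then deny everything else with a final catch-all rule.
SecRule REQUEST_METHOD "@rx ^(GET|POST)$" \
    "chain,id:100002,phase:1,allow,nolog"
    SecRule REQUEST_URI "@beginsWith /shop/"
SecAction "id:100003,phase:1,deny,status:403,msg:'Request outside allowed profile'"
```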

Negative security rulesets operate on the assumption that the majority of attackers are using known vulnerabilities to exploit websites that do not have protection. While this may be the case, these WAFs require constant maintenance to ensure the new attack types discovered each day are included in that WAF instance’s ruleset. If a WAF is not regularly updated with new attack types and vulnerability patches, the website it protects becomes just as vulnerable to new attacks as it would be without a WAF.

Positive security models more strictly limit the routes an attacker can take to gain access to a website, and because of this they block both known and unknown attack types and vulnerabilities. When first deployed, a positive model may block a good amount of legitimate traffic in so-called “false positives.” However, by whitelisting visitor characteristics over a few rounds of rule editing, a correctly configured positive model will allow real visitors in while blocking a wider range of malicious activity than a negative model.

Managing Rules for ModSecurity and other WAFs

Both negative and positive model WAFs rely on rules to either allow or deny visitors entry to a website. WAFs have operated on this rules-based approach since their inception, but newer solutions have recently started to question this method because of its downsides.

Rules-based WAFs can be inexpensive, relatively easy to install, and, if updated and monitored regularly, will block most attacks. Companies such as CloudFlare offer WAF services with protection against the OWASP Top 10 attacks starting at just $20/month, with additional rulesets available for an extra charge. These basic WAFs are an attractive option for those looking for protection against some attacks at a low cost, but the true cost comes in the developer time it takes to maintain the rules that form the core of the protection.

Taking ModSecurity as an example, here is how a typical deployment goes (a minimal configuration sketch follows the list):

  1. ModSecurity is first used in “detect” mode, which tracks traffic and creates log entries but does not block threats.
  2. A developer will then review the logs from the detection stage to see if rules need to be added or adjusted.
  3. The WAF is then switched to blocking mode to actively block traffic that matches the rulesets.
  4. As new vulnerabilities with platforms including WordPress, Magento and Drupal are discovered, a developer or team needs to update rules to capture these vulnerabilities.
  5. The security team will also need to create custom rules based on their specific threats - for example, ecommerce sites may see a large number of price scraping bots which need to be blocked, and some sites will see attacks from specific countries with which they are not doing business.
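To make the workflow concrete, a stripped-down ModSecurity configuration covering these steps might look something like the sketch below. The file paths, rule ID, and plugin path are illustrative and will differ between installations.

```
# Step 1: run the engine in detection-only mode - requests are inspected
# and logged, but disruptive actions such as deny are not enforced.
SecRuleEngine DetectionOnly

# Step 2: write rule matches to an audit log so a developer can review
# them and tune the rules (log path is illustrative).
SecAuditEngine RelevantOnly
SecAuditLog /var/log/modsec_audit.log

# Load a base ruleset such as the OWASP Core Rule Set (install paths vary).
Include /etc/modsecurity/crs/crs-setup.conf
Include /etc/modsecurity/crs/rules/*.conf

# Steps 4 and 5: add site-specific rules as new vulnerabilities and
# threats appear - for example, blocking requests to a plugin path the
# site does not use (path and rule ID are illustrative).
SecRule REQUEST_URI "@beginsWith /wp-content/plugins/old-plugin/" \
    "id:200001,phase:1,deny,status:403,msg:'Known vulnerable plugin path'"

# Step 3: once the logs look clean, switch the engine to blocking mode.
# SecRuleEngine On
```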

Over time, a WAF can accumulate thousands of rules that need to be constantly managed. This creates cost for businesses, both through solutions that charge for adding rules and through significant labor. These tools also have a history of being deployed in detect mode and never moving to active blocking, because accurately identifying attacks based on rules is difficult and teams fear impacting legitimate traffic and hurting revenue as a result.

Next Generation Firewalls

The number of security threats is growing each year - in 2015, the number of incidents reported was 48% higher than in 2014, as reported by PwC’s global survey. Security threats are becoming more advanced, taking multiple routes into a website and specifically avoiding the routes hackers know are likely to be protected by a rules-based WAF. In addition, as the number and variety of devices accessing the Internet grows, attackers are given more pathways into a network. As ReadWrite puts it, hackers are now “calculated criminals focused on acquiring information in a data-laden marketplace.”

All of this has led to a need for more sophisticated methods of website security and an uptick in intelligent, context-based security solutions. These WAFs, firewalls, and bot-blocking tools do not rely on rulesets to block attacks; instead they use more complex systems to identify threats based on the combined actions those threats take against a website. By removing the rulesets that traditional solutions use, modern systems can stay one step ahead of attackers, who can no longer probe for and work around a known set of rules.

These “intelligent” or “learning” security solutions use contextual information such as location, device, time, and on-site behavior to build a complete profile of each website visitor and decide whether to block them or let them in. These advanced techniques allow WAFs and bot-blocking tools to catch the attacks that rules-based solutions miss while still protecting legitimate traffic.
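The underlying shift is from single-rule, yes/no verdicts to accumulated evidence. Even within a rules engine this idea can be approximated with anomaly scoring, where individual signals raise a per-request score and only the total triggers a block. The sketch below uses ModSecurity syntax with made-up variable names, weights, and thresholds, loosely modelled on the OWASP Core Rule Set’s anomaly-scoring approach rather than taken from it.

```
# Individual signals add to a score instead of blocking outright.
SecRule REQUEST_HEADERS:User-Agent "@rx (?i:curl|python-requests)" \
    "id:300001,phase:1,pass,setvar:'tx.suspicion=+2',msg:'Scripted client'"
SecRule ARGS "@rx (?i:<script)" \
    "id:300002,phase:2,pass,setvar:'tx.suspicion=+3',msg:'Possible XSS probe'"

# Only the accumulated score triggers a block, so a single borderline
# signal from a legitimate visitor is not enough on its own.
SecRule TX:SUSPICION "@ge 5" \
    "id:300003,phase:2,deny,status:403,msg:'Suspicion score exceeded'"
```

Fully learning-based solutions go further, building such scores from behavioral and contextual signals gathered over time rather than from static patterns.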

An example of this advanced approach in bot blocking is Perimeter X. Bot blocking is especially crucial for ecommerce websites, which often fall victim to price-scraping bots and bots that hold up inventory in checkout carts. Perimeter X blocks bots while protecting real shoppers by giving each visitor a “Risk Score.” This score is based on behavioral analysis that includes factors such as mouse and click movement and timing, unusual web application requests, and hidden clicks. These techniques can defend against even the most sophisticated bots, which use real browsers to take over accounts and can slip past older security methods.

On the Web Application Firewall side, solutions including SignalSciences and Threat X look at the combined activity of potentially damaging traffic to determine whether an IP address is in the early stages of an attack or is quietly gathering information before one begins.

Threat X tracks an attacker across seven stages of attack to determine when and where hackers need to be stopped. While some activity may immediately set off blocking triggers, other activity is logged and watched in case it progresses. Using this system in place of a purely rules-based one means that hackers are stopped while real visitors who may initially look like, or share attributes with, attackers are not impacted.


One reason the security industry is moving away from making simple yes/no decisions on traffic is the increasing complexity of real user behavior that might at first look malicious: SignalSciences notes that a user entering a name such as O’Toole in a form may be blocked by a traditional WAF because the single-quote character is characteristic of a SQL injection attack. The SignalSciences solution aims to eliminate an attacker’s ability to use scripting to gain access to a website while reducing false positives by taking into account where the input originated (for example, a form fill versus a browser script) along with other factors.
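To see how easily a naive rule produces this kind of false positive, consider a simplified ModSecurity rule (the rule ID and pattern are illustrative) that treats any single quote in a request argument as an injection attempt:

```
# A naive signature: any single quote in a request argument is treated
# as SQL injection. A form submission containing the surname O'Toole
# matches this rule and is blocked, even though it is perfectly
# legitimate traffic.
SecRule ARGS "@contains '" \
    "id:400001,phase:2,deny,status:403,msg:'Quote character in input'"
```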

Despite these advancements, it hasn’t all been smooth sailing for new security solutions: early examples of learning-based security, such as a WAF built by CloudFlare, didn’t perform as expected because their results differed from those produced by traditional rules-based approaches. Intelligent solutions still need to gather contextual data about what “normal” traffic for a website looks like before they can identify out-of-the-ordinary requests.

However, the benefit of these solutions as opposed to rules-based systems is that data is examined and stored by a machine, so developers do not need to manually inspect traffic and decide what is expected or unexpected behavior on their specific website.

Integrating Agile and DevOps methods into Security

It’s clear the website security landscape is expanding as new techniques to combat malicious actors are regularly created. At the same time, modern development practices such as agile and DevOps are on the rise, and security companies are increasingly thinking about how to integrate these methods into their offerings. One key component of a DevOps workflow is the use of in-depth metrics and logs to continually assess and tune website configurations - and this is no different in the security space. Logs are essential for developers to understand the traffic coming through their site, what actions users are taking, and how security rules or learning systems are affecting that traffic.


SignalSciences is one solution dedicated to improving this feedback loop by giving developers a clear dashboard with threat information in real time. Section is also committed to integrating with agile and DevOps principles by providing a local developer environment so engineers can see how their WAF and caching setup will impact the production website before going live.

Section also gives all users ELK stack logs for inspecting traffic in detail and Graphite metrics visualized in Grafana for customizable graphs. Threat X believes that both developers and business people should be able to easily view threats and the actions taken against them, which is why its dashboards are designed to be consumable even by non-technical personnel.

When next-generation solutions are combined with DevOps workflows, the security of websites and the ability for developers to immediately address threats increase dramatically. To learn more about the security solutions Section offers, contact us.
