For this series focused on Quick Wins with Website Protection Services, the key is getting your sites protected quickly without breaking too much application functionality. Your public website is highly visible to both customers and staff. Most public sites capture private information, so site integrity is important. Lastly, your organization spends a ton of money getting the latest and greatest functionality onto the site, so they don't take kindly to being told their shiny objects aren't supported by security. All this adds up to a tightrope act: protect the website while maintaining performance, availability, and functionality. Navigating these tradeoffs is what makes security a tough job.

Planning the Deployment

The first step is to sign up with your website protection service (WPS). If you are just dealing with a handful of sites and your requirements are straightforward, you can probably do this yourself. You won't have much pricing leverage, so you won't get much attention from a dedicated account team. Obviously if you do have enterprise-class requirements (and budget), you go through the sales fandango with the vendor. This involves a proof of concept, milking their technical sales resources to help set things up, and then playing one WPS provider against another for the best price, just like with everything else.

Before you are ready to move your site over (even in test mode) you have some decisions to make. Start at the beginning: decide which sites need to be protected. The optimal answer is all of them, but we live in an imperfect world – you may not even know the full extent of your website properties. Once you have your list of high-priority sites which must be protected, you need to understand which pages & areas are appropriate for the public and search spiders to see, and which are not. It is quite possible that everything is fair game for everybody, but you cannot afford to assume so.

Speaking of search engines and automated crawlers, you will need to figure out how to handle those inhuman visitors. One key feature described in the last post is the ability to control which bots are allowed to visit and which are not. While you are thinking about which IP ranges can visit your site, you need to decide whether to restrict inbound network connections to only the WPS. This keeps attackers from hitting your site directly, but to take advantage of this option you will need to work with the network security team to lock it down on your firewall. These are some of the decisions you need to make before you start routing traffic to the WPS.
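If you do decide to restrict inbound connections to only the WPS, you will need the provider's published edge IP ranges. Here is a minimal sketch of what that sanity check might look like; the CIDR blocks (RFC 5737 documentation ranges) and sample addresses are placeholders, not any particular provider's list:

```python
# Minimal sketch: confirm inbound source IPs fall within the WPS provider's
# published edge ranges, both before the network team locks down the firewall
# and afterward as a spot check. The CIDR blocks below are placeholders --
# substitute the list your provider actually publishes.
import ipaddress

WPS_EDGE_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder documentation range
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder documentation range
]

def allowed_source(ip: str) -> bool:
    """Return True if the connection comes from a WPS edge node."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in WPS_EDGE_RANGES)

# Example: audit a sample of web server log source IPs after the cutover.
for src in ("203.0.113.45", "192.0.2.7"):
    print(src, "OK" if allowed_source(src) else "bypassing the WPS -- investigate")
```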

A level of abstraction above bots and IP addresses is users and identities. Will you restrict visitors by geography, user agent (some sites don't allow IE6 to connect, for example), or anything else? WPS providers use big data analytics (just ask them) to track details about certain IP addresses and speculate on the likely intent of visitors. Using that information you could conceivably block unwanted users from connecting, in an attempt to prevent malicious activity. Kind of like Minority Report for your website. That's all well and good, but as we learned during the early IPS days, blocking big customers causes major headaches for the security team – so be careful before pulling the trigger on this kind of control.
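To make that tradeoff concrete, here is an entirely hypothetical visitor policy. The real controls live in the WPS console and vary by provider, but the decision logic tends to look something like this:

```python
# Illustrative visitor policy, not any provider's actual API: the real knobs
# live in the WPS console, but the decision logic resembles this sketch.
BLOCKED_COUNTRIES = {"XX"}                     # placeholder country codes
BLOCKED_AGENT_SUBSTRINGS = ("MSIE 6.0",)       # e.g. refusing IE6, per the example above
REPUTATION_THRESHOLD = 20                      # hypothetical 0-100 risk score from the provider

def admit_visitor(country: str, user_agent: str, reputation: int) -> str:
    """Return 'allow', 'challenge', or 'block' for an incoming visitor."""
    if country in BLOCKED_COUNTRIES:
        return "block"
    if any(s in user_agent for s in BLOCKED_AGENT_SUBSTRINGS):
        return "block"
    if reputation < REPUTATION_THRESHOLD:
        return "challenge"                     # CAPTCHA or similar, not a hard block
    return "allow"

print(admit_visitor("US", "Mozilla/5.0 (compatible; MSIE 6.0)", 80))  # -> block
```

Returning a challenge (a CAPTCHA, for example) rather than a hard block for merely suspicious visitors is one way to hedge against blocking a big customer by mistake.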

That’s why we are still in the planning phase here. Once we get to testing you will be able to thoroughly understand the impact of your policies on your site.

Finally, you need to determine which of your administrators will have access to the WPS console and be able to (re)configure the service. Like any other cloud-based service, unauthorized access to the management console is usually game over, so it is essential to make sure authorizations and entitlements are properly defined and enforced. Another management decision involves who is alerted to WPS issues such as downtime and attacks – the same process you follow for your own devices. Defining handoffs and accountabilities between your team and the WPS provider before you move traffic is essential.

Test (or Suffer the Consequences)

Now that you have planned out the deployment sufficiently, you need to work through testing to figure out what will break when you go live. Many WPS providers claim you can be up and running in less than an hour, and that is indeed possible. But getting a site running is not the same as getting it running with full functionality and security. So we always recommend a test to understand the impact of front-ending your website with a WPS. You may decide any issues are more than outweighed by the security improvement from the WPS, or perhaps not. Either way, you should be able to have an educated discussion with senior management about the trade-offs before you flip the switch.

How can you test these services? Ideally you already have a staging site where you test functionality before it goes live, so you can run a full battery of QA tests through the WPS. Of course that might require the network team to temporarily add firewall rules to allow traffic to flow properly to a protected staging environment. You might also use DNS hocus-pocus to route a tightly controlled slice of traffic through the WPS for testing, while the general public still connects directly to your site. Much of the testing mechanics depends on your internal web architecture, and WPS providers should be able to help you map out a testing plan.
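As a concrete example of that kind of test, the sketch below assumes two hypothetical staging hostnames, one resolving directly to the origin and one routed through the WPS, and flags any path that behaves differently through the service:

```python
# Minimal QA smoke test, assuming two hypothetical hostnames: one that hits
# the staging origin directly and one that routes through the WPS. Flag any
# path that behaves differently through the service.
import requests

DIRECT = "https://staging-direct.example.com"
VIA_WPS = "https://staging.example.com"       # DNS points this one at the WPS
PATHS = ["/", "/login", "/search?q=test", "/api/health"]

for path in PATHS:
    d = requests.get(DIRECT + path, timeout=10)
    w = requests.get(VIA_WPS + path, timeout=10)
    if d.status_code != w.status_code:
        print(f"{path}: direct={d.status_code} via WPS={w.status_code}  <- investigate")
    else:
        print(f"{path}: OK ({d.status_code})")
```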

Then you get to configure the WAF rules. Some WPS offerings have 'learning' capabilities, whereby they monitor site traffic during a burn-in period and then suggest rules to protect your applications. That can get you going quickly, and this is a Quick Wins initiative, so we can't complain much. But automatically generated rules may not provide sufficient security. We favor an incremental approach: start with the most secure settings you can, see what breaks through the WPS, and then tune accordingly.

Obviously some functions of your applications must not be impacted, so you will need to iteratively loosen WAF rules until you reach a point where critical functionality works, with the maximum security possible. If you are deploying a WPS for a Quick Win, you shouldn't spend too much time on tuning and burn-in – try to quickly find a simple ruleset that balances security and functionality. Over time you can optimize your WAF rules, and eventually application developers should factor the WPS into their development process. But that doesn't happen overnight, and in the meantime you will need to trade off some security against functionality.
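Rule syntax differs across providers, so treat the following as a sketch of the incremental approach rather than anyone's real ruleset: keep the blocking rules strict, and record only the narrowest exceptions testing forces on you, along with why each one exists.

```python
# Sketch of an incremental tuning record, not a real provider ruleset: start
# with everything in blocking mode, then carve out the narrowest exceptions
# that testing forces on you. Rule names and paths are illustrative.
RULESET = {
    "sql_injection":   "block",
    "xss":             "block",
    "remote_file_inc": "block",
}

# Path-scoped exceptions added during burn-in, each with a documented reason.
EXCEPTIONS = [
    {"rule": "xss", "path": "/cms/editor", "reason": "rich-text editor posts HTML"},
]

def action_for(rule: str, path: str) -> str:
    """Return the effective action for a rule on a given path."""
    for exc in EXCEPTIONS:
        if exc["rule"] == rule and path.startswith(exc["path"]):
            return "log_only"          # watch it, but do not block
    return RULESET.get(rule, "log_only")

print(action_for("xss", "/cms/editor/new"))   # -> log_only
print(action_for("xss", "/search"))           # -> block
```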

Keep in mind that many WPS providers include a caching capability to improve website performance. If your site is dynamic in any way, caching is likely to break something. So tuning is not only an issue for your security rules, but also for caching and performance. Of course you would like the greatest performance boost from the WPS, but you may need to disable some caching to preserve full functionality.
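One common way to keep dynamic pages working, assuming your WPS honors origin Cache-Control headers (most caching edges do), is to mark cacheability explicitly at the origin. A minimal sketch with hypothetical routes:

```python
# Sketch, assuming the WPS respects origin Cache-Control headers: let static
# pages be cached aggressively at the edge, and mark dynamic responses so the
# edge never serves a stale copy. Routes and content are illustrative.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/pricing")
def pricing():
    resp = make_response("<html>static marketing page</html>")
    resp.headers["Cache-Control"] = "public, max-age=3600"   # safe to cache at the edge
    return resp

@app.route("/cart")
def cart():
    resp = make_response("<html>per-user shopping cart</html>")
    resp.headers["Cache-Control"] = "private, no-store"      # never cache at the edge
    return resp

if __name__ == "__main__":
    app.run()
```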

In case we haven’t made it clear enough, the success of your testing process is the difference between a Quick Win and the Misery of Defeat.

Abort the Mission

Speaking of defeat, there is a chance the WPS will fundamentally break your site. This doesn't happen often, but in case it does you need a rollback plan. So spend time with the WPS provider on non-disruptive testing, which lets you evaluate the service before you commit and switch over. Make sure everything is sorted before you start funneling real traffic through the WPS.
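Because most WPS deployments work by re-pointing DNS at the provider, the practical rollback lever is DNS. Keeping a short TTL on the relevant record (which record depends on how your cutover is implemented) keeps the rollback window small. A quick pre-cutover check, using dnspython with a placeholder hostname and threshold:

```python
# Minimal pre-cutover check, assuming the site moves onto the WPS via a DNS
# change: verify the record's TTL is short enough that a rollback (re-pointing
# back at the origin) propagates quickly. Hostname and threshold are placeholders.
import dns.resolver   # pip install dnspython

HOSTNAME = "www.example.com"
MAX_TTL_SECONDS = 300            # a short cutover-window TTL; pick your own

answer = dns.resolver.resolve(HOSTNAME, "A")
ttl = answer.rrset.ttl
if ttl > MAX_TTL_SECONDS:
    print(f"TTL is {ttl}s -- lower it before cutover, or rollback will be slow")
else:
    print(f"TTL is {ttl}s -- rollback window is acceptable")
```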

Ongoing Management

For ongoing management you are likely to interact mostly with the WPS reporting functions, which provide a fairly granular look at security and traffic dynamics. Trend reports can show traffic spikes and what proportion of requests are automated (bots). You may be able to identify abnormal activities or patterns which warrant further investigation.

Many WPS providers can also generate compliance reports, which provide artifacts to satisfy regulatory criteria regarding web application firewalls, network firewalls, and change control. These are critical, especially if you handle protected information. A successful audit requires convincing the assessor that you are in control, demonstrated through both detailed (firewall) policy reports and documentation of a strong underlying security program. Using a WPS and generating reports quickly – perhaps even walking the assessor through the interface – may help convince the assessor you know what you are doing. A little dog & pony show never hurt anyone, did it?

Over time your QA processes should factor the WPS into development and testing. That is the ultimate goal but the details vary, depending on how you develop and deploy web applications. If you take a continuous deployment approach you don’t need to do anything differently – the WPS will be in-stream, and all testing will take it into account. But if you use a staging environment and a structured update/upgrade model, you will need to run tests through the WPS before updates go live.
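One practical pattern, and this is our assumption rather than anything prescribed above, is to parameterize the QA suite's base URL so the same tests run directly against the origin during development and through the WPS-fronted staging host before an update goes live:

```python
# One way to fold the WPS into an existing QA suite (a sketch, not a mandate):
# read the base URL from the environment so the same tests cover both the
# direct origin and the WPS-fronted staging host. Hostnames are placeholders.
import os
import pytest
import requests

BASE_URL = os.environ.get("QA_BASE_URL", "https://staging.example.com")

@pytest.mark.parametrize("path", ["/", "/login", "/api/health"])
def test_page_is_reachable(path):
    resp = requests.get(BASE_URL + path, timeout=10)
    assert resp.status_code == 200, f"{path} returned {resp.status_code} through {BASE_URL}"
```

Point QA_BASE_URL at the direct staging host during development, and at the WPS-fronted host in the release pipeline.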

We shouldn't neglect the ongoing operational functions you no longer need to handle with a WPS. First of all, you aren't responsible for the hygiene of the WPS computing platform – the provider is responsible for updating, patching, and managing all their equipment and software. That is all a lot of fun, so try to contain your disappointment that it's not your problem.

The WPS provider also needs a security research capability to track common web attacks (including the OWASP Top 10 we mentioned earlier) and keep the WAF ruleset current. The provider is also on the hook for adding bandwidth to keep pace with escalating attack volumes – in the age of increasingly frequent DDoS attacks, that is an expensive proposition. All of this is taken care of by the service, so you don't have to worry about it.

That wraps up our Quick Wins with Website Protection Services series. We told you it would be quick. We will assemble the paper over the next week or so, and will post it to the Research Library when it’s ready. There is still time to provide feedback and commentary, and we appreciate the feedback we get on all our work.
