
DDoS: It’s FUD-eriffic!

FUD can be your friend when you are trying to get security projects funded, but it needs to be used wisely – you only have one bullet in the proverbial chamber. The folks at Prolexic just rolled out a new white paper on using FUD to make the internal case about DDoS. The paper requires registration, so I didn’t bother. I know all about the FUD involved in DDoS – I don’t need these guys educating me about that. So here are some really FUD-elicious reasons why business folks need to be worried about DDoS. The damage from a DDoS attack actually goes far beyond IT and can impact:

  • Stock price and investor confidence
  • Sales revenues and profitability
  • Brand reputation
  • Customer service
  • Employee morale
  • Search engine rankings, and more

How’s that for some Chicken Little action? I think a DDoS may clog your toilets as well, so bring the plungers. And make sure you have psychologists on call – employee morale will be in the dumpers with every incremental 10Gbps of DDoS traffic hammering your systems. And their scrubbing center can make it all better. Just ask them.

Yes, I’m being a bit facetious. OK, very facetious. I can imagine investors have no faith in the Fortune 10 banks that get hammered by DDoS every day. Man, I could go on all day… Anyhow, this one was just too juicy to let pass. Now I’ll get back to doing something productive…


Network-based Malware Detection 2.0: The Network’s Place in the Malware Lifecycle

As we resume our Network-based Malware Detection (NBMD) 2.0 series, we need to dig into the malware detection/analysis lifecycle, to provide some context on where network-based malware analysis fits in and what an NBMD device needs to integrate with to protect against advanced threats. We have already exhaustively researched the malware analysis process – the process diagram below was built as part of Malware Analysis Quant. Looking at the process, NBMD provides the Analyze Malware Activity phase – including building the testbed, static analysis, various dynamic analysis tests, and finally packaging everything up into a malware profile. All these functions occur either on the device or in a cloud-based sandbox for analyzing malware files. That is why scalability is so important, as we discussed last time: you basically need to analyze every file that comes through, because you cannot wait for an employee’s device to be compromised before starting the analysis. Some other aspects of this lifecycle bear mentioning:

  • Ingress analysis is not enough: Detecting and blocking malware on the perimeter is a central pillar of the strategy, but no NBMD capability can be 100% accurate and catch everything. You need other controls on endpoints, supplemented with aggressive egress filtering.
  • Intelligence drives accuracy: Malware and tactics evolve so quickly that on-device analysis techniques must evolve as well. This requires a significant and sustained investment in threat research and intelligence sharing.

Before we dig into these two points, some other relevant research provides additional context. The Securosis Data Breach Triangle shows a number of opportunities to interrupt a data breach. You can protect the data (very hard), detect and stop the exploit, or catch the data with egress filtering. Success at any one of these will stop a breach, but putting all your eggs in one basket is unwise, so work on all three. For specifics on detecting and stopping exploits, refer to our ongoing CISO’s Guide to Advanced Attackers – particularly Breaking the Kill Chain, which covers stopping an attack. Remember – even if a device is compromised, unless critical data is exfiltrated it’s not a breach. The best case is to detect the malware before it hurts anything – NBMD is very interesting technology for this – but you also need to rely heavily on your incident response process to ensure you can contain the damage.

Ingress Accuracy

As with most detection activities, accuracy is critical. A false positive – incorrectly flagging a file as malware – disrupts work and wastes resources investigating a malware outbreak that never happened. You need to avoid these, so put a premium on accuracy. False negatives – missing malware and letting it through – are at least as bad. So how can you verify the accuracy of an NBMD device? There is no accepted detection accuracy benchmark, so you need to do some homework. Start by asking the vendor tough questions to understand their threat intelligence and threat research capabilities. Read their threat research reports and figure out whether they are on the leading edge of research, or just a fast follower using other companies’ research innovations.

Malware research provides the data for malware analysis, whether on the device or in the cloud, so you need to understand the depth and breadth of a vendor’s research capability. Dig deep and understand how many researchers they have focused on malware analysis. Learn how they aggregate the millions of samples in the wild to isolate patterns, using fancy terms like big data analytics. Study how they turn that research into detection rules and on-device tests. You will also want to understand how the vendor shares information with the broader security research community. No one company can do it all, so you want leadership and a serious investment in research, but you also need to understand how they collaborate with other groups and what alternative data sources they leverage for analysis. For particularly advanced malware samples, do they have a process for manual analysis?

Be sensitive to research diversity. Many NBMD devices use the same handful of threat intelligence services to populate their devices, which makes it very difficult to get the intelligence diversity needed to detect fast-moving advanced attacks. Make sure you check out lab tests of devices to compare accuracy. These tests are all flawed – accurately modeling a real-world environment using live ammunition (malware) is barely even theoretically possible, and conditions would immediately change anyway – but they can be helpful for an apples-to-apples device comparison.

The Second Derivative

As part of a proof of concept, you may also want to route your ingress traffic through 2 or 3 of these devices in monitoring mode, to test relative accuracy and scalability on real traffic. That should give you a good indication of how well each device will perform for you. Finally, leverage the “2nd Derivative Effect” (2DE) of malware analysis. When new malware is found, profiled, and determined to be bad, there is an opportunity to inoculate all the devices in use. This involves uploading the indicators, behaviors, and rules that identify and block it to a central repository, and then distributing that intelligence back out to all devices. The network effect in action: the more devices in the network, the more likely the malware will show up somewhere to be profiled, and the better your chance of being protected before it reaches you. Not always, but it’s as good a plan as any. It sucks to be the first company infected – you miss the attack on its way in – but everyone else in the network benefits from your misfortune. This ongoing feedback loop requires extensive automation (with clear checks and balances to reduce bad updates) to accelerate distribution of new indicators to devices in the field.

Plan B (When You Are Wrong)

Inevitably you will be wrong sometimes, and malware will get through your perimeter. That means you will need to rely on the other security controls in your environment. When they fail you will want to make sure you don’t get popped by the same attack.
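Since this post leans heavily on the analyze-everything requirement and the 2DE feedback loop, here is a minimal sketch of how those two ideas fit together. It is purely illustrative Python under stated assumptions – the hash caches, the sandbox call, and the shared repository are hypothetical stand-ins, not any NBMD vendor’s actual API:

```python
import hashlib
from dataclasses import dataclass, field

# Hypothetical shared intelligence repository. In practice this would be a
# vendor cloud service feeding every deployed device, not an in-process set.
SHARED_REPO: set[str] = set()

@dataclass
class Verdict:
    malicious: bool
    indicators: list[str] = field(default_factory=list)

def submit_to_sandbox(data: bytes) -> Verdict:
    """Stand-in for the full analysis phase: build the testbed, run static
    analysis, detonate in VMs, and package the results into a profile."""
    # Toy heuristic purely for illustration (flags Windows executables).
    return Verdict(malicious=data[:2] == b"MZ", indicators=["c2.example.test"])

class IngressTriage:
    """Analyze every inbound file; hash lookups make repeat files cheap."""

    def __init__(self) -> None:
        self.known_bad: set[str] = set(SHARED_REPO)  # seeded from shared intel
        self.known_good: set[str] = set()

    def handle(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.known_bad:
            return "block"  # someone in the network already paid the analysis cost
        if digest in self.known_good:
            return "allow"
        verdict = submit_to_sandbox(data)  # unknown file: full analysis
        if verdict.malicious:
            self.known_bad.add(digest)
            SHARED_REPO.add(digest)  # the 2DE: inoculate every other device
            return "block"
        self.known_good.add(digest)
        return "allow"
```

The structure, not the toy heuristic, is the point: the first victim eats the full analysis cost and misses the attack on the way in, while every subsequent lookup across the network is a cheap cache hit – which is exactly why the size of the intelligence network matters.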


Groupthink Kills Your Security Layers

As I continue working through my reading backlog I keep finding interesting stuff that bears comment. When the folks over at NSS Labs attempted to poke holes in the concept of security layers, I got curious. Only 19 of 606 combinations of firewall, IPS, and Endpoint Protection (EPP) – about 3% – successfully blocked their full suite of attacks?

There is only limited breach prevention available: NSS looked at 606 unique combinations of security product pairs (IPS + NGFW, IPS + IPS, etc.) and only 19 combinations (3 percent) were able to successfully detect ALL exploits used in testing. This correlation of detection failures shows that attackers can easily bypass several layers of security using only a small set of exploits. Most organizations should assume they are already breached and pair preventative technologies with both breach detection and security information and event management (SIEM) solutions.

No kidding. It is not novel to say that exploits work in today’s environment. Instead of just guessing at the optimal combination of devices (which seems to be a value proposition NSS is pushing in the market now), what about getting a feel for the incremental effectiveness of just using a firewall, then layering in an IPS, and finally adding endpoint protection? Does IPS really make an incremental difference? That would be useful information – we already know it is very hard to block all exploits.

NSS’s analysis of why layering isn’t as effective as you might think is interesting: groupthink. Many of these products are driven by the same research engines and intelligence sources, so if a source misses, all its clients miss. Clearly a recipe for failure, so diversity is still important. Rats! Dan Geer and his monoculture work continue to bite us in the backside. But of course diversity adds management complexity – usually significant complexity – so you need to balance different vendors at different control layers against the administrative overhead of effectively managing everything. And a significant percentage of attacks succeed not through innovative exploits (of the sort NSS tests), but because of operational failures: implementing the technology poorly, failing to keep platforms and products patched, and not enforcing secure configurations.

Photo credit: “groupthink” originally uploaded by khrawlings


Getting to Know Your Adversary

After a week of travel I am finally working through my reading list, and got around to RSnake’s awesome “Talk with a Black Hat” series. Check out Part 1, Part 2, and Part 3. He takes us behind the curtain – but instead of discussing impact, which your fraud and loss group can tell you about, he documents the tactics being used against us all the time.

At the beginning of Part 1, RSnake tackles the ethical issues of communicating with and learning from black hats. I never saw this as an issue, but if you did, just read his explanation and get over it: “I think it is incredibly important for security experts to have open dialogues with the blackhat community. It’s not at all dissimilar to police officers talking with drug dealers on a regular basis as part of their job: if you don’t know your adversary you are almost certainly doomed to failure.”

Right. A few interesting tidbits from Part 1, including “The whole blackhat market has moved from manual spreading to fully automated software.” This fellow’s motivation was pretty clear: “Money. I found it funny how watching tv and typing on my laptop would earn me a hard worker’s monthly wage in a few hours. [It was] too easy in fact.” And the lowest hanging fruit for an attack? Yup, pr0n sites: “Now to discuss my personal favourite: porn sites.” One reason why this is so easy: “The admins don’t check to see what the adverts redirect to. Upload an ad of a well-endowed girl typing on Facebook, someone clicks, it does a drive by download again. But this is where it’s different: if you want extra details (for extortion if they’re a business man) you can use SET to get the actual Facebook details which, again, can be used in social engineering.”

There is similarly awesome perspective on monetizing DDoS (which clearly means it is not going away anytime soon), and that is only in Part 1. Parts 2 and 3 are also great, but you should read them yourself to learn about your adversaries. And to leave you with some wisdom about mindsets:

Q: What kind of people tend to want to buy access to your botnet and/or what do you think they use it for?
A: Some people say governments use it, rivals in business. To be honest, I don’t care. If you pay you get a service. Simple.

Simple. Yup, very simple.

Photo credit: “Charles F Esolda” originally uploaded by angus mcdiarmid


Incite 6/5/2013: Working in the House

Once, years ago, I made the mistake of saying the Boss didn’t work. I got that statement shoved deep into my gullet, because she works harder than I do. She just works in the house. My job is relatively easy – I can work from anywhere, with clients I enjoy, doing stuff I enjoy doing. Often it doesn’t feel like work at all. Compare that to the Boss, who has primary responsibility for the kids. That involves making sure they get their homework done, are learning properly, have the support they need, and participate in their activities. And that’s the comparatively easy stuff – which isn’t easy at all. She spends a lot more of her time managing the drama, which is ramping up significantly for XX1 as she and her friends enter the tween stage. She also takes very seriously her role of making sure the kids are well behaved, polite, and productive. And it shows. I’m biased, but my kids rarely do cringe-worthy stuff in public. I have a minor hand in this, but she drives the ship.

And why am I writing this now? No, I didn’t say anything stupid again and end up in the dog house. I just see how she’s handling her crunch time: getting the kids ready for camp, while making sure they see their friends before they head off for the summer, and working around a trip up North to see my Dad. Compared to crunch time, the school year is a walk in the park. For those of you who don’t understand the misery of preparing for sleepaway camp, the camp sends a list of a zillion things you have to get. Clothes, towels, sheets, sporting equipment, creature comforts… the list is endless, and everything needs to have your kid’s name in it – if you want it to come back, anyway. Our situation is complicated because we have to ship the stuff to PA. Not only does she need to get everything, but everything needs to fit into two duffel bags.

Over the years the intensity of crunch time has increased significantly. Four years ago she only had to deal with XX1 – that was relatively easy. Then XX1 and XX2 went to camp, but it was still manageable. Last year we had all three kids in camp, decided to take a trip to Barcelona a month before they were due to leave, and went to Orlando for the girls to dance. It was nuts. This year she is way ahead of the game. We are two weeks out, and pretty much everything is bought, labeled, and arranged. It’s really just a matter of packing the bags now. The whole operation ran like a well-oiled machine this year. Bravo!

I am the first to criticize when stuff doesn’t work well, and usually the last to give credit when things work efficiently – I have already moved on to the next thing. We don’t have a 360-degree review process, and we don’t pay bonuses at the end of the year in Chez Rothman. Working in our house is a thankless job. So it’s time to give credit where it’s due. More importantly, she can now enjoy the next two weeks before the kids head off – without spending all her time buying, packing, and doing other stressful stuff. And I should also bank some karma points with the Boss, to use the next time I do something stupid. Which should be in 3, 2, 1…

–Mike

Photo credit: “IT Task List” originally uploaded by Paul Gorbould

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
  • Quick Wins with Website Protection Services: Deployment and Ongoing Management; Protecting the Website; Are Websites Still the Path of Least Resistance?
  • Network-based Malware Detection 2.0: Scaling NBMD; Evolving NBMD
  • Advanced Attackers: Take No Prisoners
  • Security Analytics with Big Data: Use Cases; Introduction

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

  • Your professionalism offends me…: Our man in Ireland, Brian Honan, brings up a third rail of sorts regarding some kind of accreditation for security folks. He rightly points out that there is no snake oil defense, but it’s not clear whether he wants folks to go to charm school or to learn decent customer skills, so the bad apples don’t reflect badly on our industry. Really? Shack responds with a resounding no, but more because he’s worried about losing the individuality of the characters who do security. I don’t think we need yet another group to teach folks to wear long sleeves if they have tattoos. Believe me, if folks are worried about getting a professional security person, I’m sure one of the big accounting firms would be happy to charge them $300/hour for a n00b to show up in a suit. And some of the best customers are the ones who have bought snake oil in the past. Presumably they learned something and know what questions to ask. – MR
  • BYOD in the real world: For the most part, the organizations I talk with these days are generally in favor of BYOD, with programs to allow at least some use of personally owned computing devices. Primarily they support mobile phones, but they are expanding more quickly than most people predicted to laptops and tablets. Network World has a nice, clear article with some examples of BYOD programs in real, large organizations. These are refreshingly practical, with a focus on basic management and a minimal footprint on the devices. We’re talking ActiveSync and passcode enforcement, not those crazy virtual/work/personal swapping modes some vendors promote. I had another discussion with some enterprise managers about BYOD today.


A CISO needs to be a business person? No kidding…

It amazes me that articles like CISOs Must Engage the Board About Information Security and The Demise of the Player/Manager CISO even need to be written. If you sit in the CISO chair and this wasn’t already obvious to you, you need to find another job. Back when I launched the Pragmatic CSO in 2007, I wrote a few tips to help CSOs get their heads on straight. Here is the first one:

Tip #1: You are a business person, not a security person

When I first meet a CSO, one of the first things I ask is whether they consider themselves a “security professional” or a “finance/healthcare/whatever other vertical” professional. 8 out of 10 times they respond “security professional” without even thinking. I would say it’s closer to 10 out of 10 with folks who work in larger enterprises. These folks are so specialized they figure a firewall is a firewall is a firewall, and they could do it for any company. They are wrong. One of the things preached in the Pragmatic CSO is that security is not about firewalls – or any technology, for that matter. It’s about protecting the systems (and therefore the information assets) of the business, and you can bet there is a difference between how you protect corporate assets in finance and in consumer products. In fact there are lots of differences between doing security in most major industries. They are different businesses, with different problems; they tolerate different levels of pain and require different funding models.

So Tip #1 is pretty simple to say and very hard to do – especially if you rose up through the technical ranks. Security is not one size fits all, and it is not generic across industries. Pragmatic CSOs view themselves as business people first and security people second. To put it another way, a healthcare CSO said it best to me. When I asked him the question, his response was “I’m a healthcare IT professional who happens to do security.” That was exactly right. He spent years understanding the nuances of protecting private information and how HIPAA applies to what he does. He understood how claims information is sent electronically between providers and payers. He got the BUSINESS, and then he was able to build a security strategy to protect the systems that are important to the business.

I was in a meeting of CISOs earlier this year, and one topic that came up (inevitably) was managing the board. I told those folks that if they don’t have frequent contact with, and a set of allies on, the Audit Committee, they are cooked. It’s as simple as that. The full board doesn’t care much about security, but the audit committee needs to. So build those relationships, and make sure you can pick up the phone and tell them what they need to know. Or dust off your resume. You will be needing it in the short term.


Security Surrender

Last week there was a #secchat on security burnout. Again. Yeah, it’s a bit like Groundhog Day – we keep having the same conversation over and over again. Nothing changes, and not much will change. Security is not going to become the belle of the ball. That is not our job, and it’s not our lot in life. If you want public accolades, become a salesperson or a factory manager or a developer of cool applications – something that adds perceived value to the business. Security ain’t it. Remaining in security means that if you succeed at your job, you will remain in the background. It’s Bizarro World, and you need to be okay with that. Attention whores just don’t last as security folks. When security gets attention, it’s a bad day.

That said, security is harder to practice in some places than others. The issues were pretty well summed up by Tony on his Pivots n Divots blog, where he announced he is moving on from being an internal security guy to become a consultant. Tony has a great list of things that just suck about being a security professional, all of which you have likely experienced. Just check out the first couple, which should knock the wind out of you:

  • Compliance-driven security programs that hire crappy auditors who don’t look very hard
  • Buying down risk with blinky lights – otherwise known as “throw money at the problem”

Ouch! And he has 9 more similarly true problems, including the killer: “Information Security buried under too many levels of management – No seat at the Executive or VIP level.” It’s hard to succeed under those circumstances – but you already knew that.

So Tony is packing it in and becoming a consultant. That will get him out of the firing line, and hopefully back to the stuff he likes about security. He wraps up with a pretty good explanation of a fundamental issue with doing security: “The problem is we care. When things don’t improve or they are just too painful we start feeling burnt out. Thankfully everywhere I’ve worked has been willing to make some forward progress. I guess I should feel thankful. But it’s too slow. It’s too broken. It’s too painful. And I care too much.”

Good luck, man. I hope it works out for you. Unfortunately many folks discover the grass isn’t really greener; now Tony will have to deal with many of the same issues, with even less empowerment, murkier success criteria, and the same whack jobs calling the shots. Or not calling the shots. And the 4-5 days a week on the road is so much fun. Hmmm, maybe Starbucks is hiring…

Photo credit: “(179/365) white flag of surrender” originally uploaded by nanny snowflake


Finally! Lack of Security = Loss of Business

For years security folks have been frustrated when trying to show real revenue impact from security failures. We used the TJX branding issue for years, but the breach didn’t really impact their stock or business much at all. Heartland Payment Systems is probably stronger now because of their breach. Check out all the breach databases – it’s hard to see how security has really impacted businesses. Is it a pain in the butt? Absolutely. Does cleanup cost money? Clearly. But with the exception of CardSystems, businesses just don’t go away because of security issues. Or compliance issues, for that matter. Which is why we continue to struggle to get budget for security projects.

Maybe that’s changing a little, with word that BT decided to dump Yahoo! Mail from its consumer offering because it’s a steaming pile of security FAIL. Could this be the dawn of a new age, where security matters? Where you don’t have to play state-sponsored hacking FUD games to get anything done? Could it be? Probably not. This, folks, is likely to be another red herring for security folks to chase.

Let’s consider the real impact to a company like Yahoo. Do they really care? I’m not sure – they lost the consumer email game long ago. With all their efforts around mobile and innovation, consumer email just doesn’t look like a major focus, so the lack of new features and unwillingness to address security issues kind of make sense. Sure, they will lose some of the traffic the captive BT portal offered as part of the service, but how material is that in light of Yahoo’s changing focus? Not enough to actually fix the security issues, which would likely require a fundamental rebuild/re-architecture of the email system. Yeah, not going to happen. Anyone working for a technology company has probably lived through this movie before. You don’t want to outright kill a product, because some customers continue to send money, and it’s high margin because you don’t need to invest in continued development. So is Marissa Mayer losing sleep over this latest security-oriented black eye? Yeah, probably not.

So where are we? Oh yeah, back to Square 1. Carry on.

Photo credit: “Dump” originally uploaded by Travis


Network-based Malware Detection 2.0: Scaling NBMD

It is time to return to our Network-based Malware Detection (NBMD) 2.0 series. We have already covered how the attack space has changed over the past 18 months and how you can detect malware on the network. Let’s turn our attention to another challenge for this quickly evolving technology: scalability.

Much of the scaling problem has to do with the increasing sophistication of attackers and their tools. Even unsophisticated attackers can buy sophisticated malware on the Internet – there is a well-developed market for packaged malware, and suppliers are capitalizing on it. Market-based economies are a two-edged sword. And that doesn’t even factor in advanced attackers, who routinely discover and weaponize 0-day attacks to gain footholds in victim networks. All together, this makes scalability a top requirement for network-based malware detection. So why is it hard to scale? There are a few issues:

  • Operating systems: Unless you have a homogeneous operating system environment, you need to test each malware sample against numerous vulnerable operating systems. This one-to-many testing requirement means every malware sample needs 3-4 (or more) virtual machines, running different operating systems, to adequately test the file.
  • VM awareness: Even better, attackers now check whether their malware is executing within a virtual machine. If so, the malware either goes dormant or waits a couple hours, in hopes it will be cleared through the testbed and onto vulnerable equipment before it starts executing for real. So to fully test malware, the sandbox needs to let it cook for a while. You need to spin up multiple VMs and let them run for a while – very resource intensive.
  • Network impact: Analyzing malware isn’t just about determining that a file is malicious. You also need to understand how it uses the network to connect to command and control infrastructure and perform internal reconnaissance for lateral movement. That requires watching the network stack on every VM and parsing network traffic patterns.
  • Analyze everything: You can’t restrict heavy analysis to files that look obviously bad based on simple file characteristics. With the advanced obfuscation techniques in use today, you need to analyze all unknown files. Given the number of files entering a typical enterprise network daily, you can see how the analysis requirements scale up quickly.

As you can see, the computing requirements to fully test inbound files are substantial and growing fast. Of course many people choose to reduce their analysis. You could make a risk-based decision not even to try detecting VM-aware malware, and just pass or block each file instantly. You might decide not to analyze documents or spreadsheets for macros. You may not worry about the network characteristics of malware. These are all legitimate choices to help network-based malware detection scale without a lot more iron, but each compromise weakens your ability to detect malware. Everything comes back to risk management and tradeoffs. For what it’s worth, we recommend not skipping malware tests.

Scaling the Malware Analysis Mountain

Historically the answer to most scaling problems has been to add computing power – generally more and/or bigger boxes. The vendors selling boxes love that answer, of course. Enterprise customers, not so much. Scaling malware detection hardware raises two significant issues. First is cost: we aren’t just referring to the cost of the product – each box requires a threat update subscription and maintenance. Second is the additional operational cost of managing more devices. Setting and maintaining policies on multiple boxes can be challenging, and ensuring each device is operational, properly configured, and patched adds more overhead. You need to keep every device in the farm up to date: new malware indicators appear pretty much daily and must be loaded onto each device for it to remain current.

We have seen this movie before. There was a time when organizations ran anti-spam devices within their own networks using enterprise-class (expensive) equipment. When the volume of spam mushroomed, enterprises needed to add devices to analyze all the inbound mail and keep it flowing. This was great for vendors but made customers cranky. The similarities to network-based malware detection are clear. We won’t keep you in suspense – the anti-spam story ends in the cloud. Organizations realized they could make scaling someone else’s problem by using a managed email security service, so they did, en masse. This shifted the onus onto providers to keep up with the flood of spam, and to keep devices operational and current. We expect a similar end to the NBMD game.

We understand many organizations have already committed to on-premise devices. If you are one of them, you need to figure out how to scale your existing infrastructure. This requires central management from your vendor and a clear operational process for updating devices daily. At this point customer-premises NBMD devices are mature enough to have decent central management capabilities, allowing you to configure policies and deploy updates throughout the enterprise. Keeping devices up to date requires a strong operational process. Some vendors offer the ability for each device to phone home and automatically download updates; or you can use a central management console to update all devices. Either way you will want some human oversight of policy updates, because most organizations remain uncomfortable having policies and other device configurations managed and changed by a vendor or service provider. With good reason – it doesn’t happen often, but bundled endpoint protection signature updates have been known to brick devices. Bad network infrastructure updates don’t brick endpoints, but how useful is an endpoint without network access?

As we mentioned earlier, we expect organizations to increasingly consider and choose cloud-based analysis, in tandem with an on-premise enforcement device for collection and blocking. This shifts responsibility for scaling and updating onto the provider. That said, accountability cannot be outsourced, so you still need to verify both detection accuracy (covered in the next post) and reasonable sample analysis turnaround times. Make sure to build this oversight into your processes. Another benefit of the cloud-based approach is the ability to share intelligence.
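To make the scaling math concrete, here is a small illustrative sketch (in Python, consistent with the earlier example) of the one-to-many fan-out described in the Operating systems and VM awareness bullets above. The OS images, dwell time, and detonation stub are hypothetical placeholders, not any vendor’s actual sandbox API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical testbed: every unknown file detonates in each vulnerable image.
OS_IMAGES = ["winxp-x86", "win7-x86", "win7-x64", "osx-10.8"]
DWELL_SECONDS = 30 * 60  # VM-aware samples may sleep, so each run gets real dwell time

def detonate(sample: bytes, image: str, dwell: int) -> dict:
    """Stand-in for one dynamic-analysis run: boot the image, execute the
    sample for `dwell` seconds, and record file, registry, and network
    activity (C&C callbacks, internal reconnaissance) from the VM's stack."""
    return {"image": image, "c2_contacts": [], "malicious": False}

def analyze(sample: bytes) -> list[dict]:
    # The one-to-many requirement: every image, each held for full dwell time.
    with ThreadPoolExecutor(max_workers=len(OS_IMAGES)) as pool:
        return list(pool.map(lambda img: detonate(sample, img, DWELL_SECONDS),
                             OS_IMAGES))

# Back-of-the-envelope capacity: 50,000 unknown files/day x 4 images x 0.5
# VM-hours each = 100,000 VM-hours/day -- roughly 4,200 VMs running around
# the clock. This is why "just add boxes" gets expensive in a hurry.
```

Change any assumption – more images, longer dwell times to outwait VM-aware samples, more inbound files – and the VM count grows linearly with each factor, which is the core of the argument for pushing this workload onto a provider’s elastic capacity.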


Quick Wins with Website Protection Services: Deployment and Ongoing Management

For this series focused on Quick Wins with Website Protection Services, the key is getting your sites protected quickly without breaking too much application functionality. Your public website is highly visible to both customers and staff. Most such public sites capture private information, so site integrity is important. Lastly, your organization spends a ton of money getting the latest and greatest functionality onto the site, so they won’t take kindly to being told their shiny objects aren’t supported by security. All this adds up to a tightrope act: protecting the website while maintaining performance, availability, and functionality. Navigating these tradeoffs is what makes security a tough job.

Planning the Deployment

The first step is to get set up with your website protection service (WPS). If you are just dealing with a handful of sites and your requirements are straightforward, you can probably do this yourself. You don’t have much pricing leverage, so you won’t get much attention from a dedicated account team. Obviously if you do have enterprise-class requirements (and budget), you go through the sales fandango with the vendor. This involves a proof of concept, milking their technical sales resources to help set things up, and then playing one WPS provider against another for the best price – just like with everything else.

Before you are ready to move your site over (even in test mode), you have some decisions to make. Start at the beginning: decide which sites need to be protected. The optimal answer is all of them, but we live in an imperfect world. You also may not know the full extent of all your website properties. With your list of high-priority sites which must be protected, you need to understand which pages and areas are fine for the public and search spiders to see, and which are not. It is quite possible that everything is fair game for everybody, but you cannot afford to assume so.

Speaking of search engines and automated crawlers, you will need to figure out how to handle those inhuman visitors. One key feature described in the last post is the ability to control which bots are allowed to visit and which are not. While you are thinking about the IP ranges that can visit your site, you need to decide whether to restrict inbound network connections to only the WPS. This blocks attackers from attacking your site directly, but to take advantage of this option you will need to work with the network security team to lock it down on your firewall. These are some of the decisions you need to make before you start routing traffic through the WPS.

A level of abstraction above bots and IP addresses is users and identities. Will you restrict visitors by geography, user agent (some sites don’t allow IE6 to connect, for example), or anything else? WPS services use big data analytics (just ask them) to track details about certain IP addresses and speculate on the likely intent of visitors. Using that information you could conceivably block unwanted users from connecting, in an attempt to head off malicious activity. Kind of like Minority Report for your website. That’s all well and good, but as we learned during the early IPS days, blocking big customers causes major headaches for the security team – so be careful before pulling the trigger on this kind of control. That’s why we are still in the planning phase here. Once we get to testing you will be able to thoroughly understand the impact of your policies on your site.
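As a concrete example of the bot-handling decision above: the standard way to separate real search crawlers from impostors borrowing their user agent is a reverse-plus-forward DNS check. Here is a minimal sketch for Googlebot (whose crawlers resolve into googlebot.com/google.com); it illustrates the technique, not how any particular WPS implements bot control, and the policy functions in the trailing comment are hypothetical:

```python
import socket

def is_real_googlebot(ip: str) -> bool:
    """Verify a visitor claiming to be Googlebot via reverse + forward DNS."""
    try:
        # Reverse lookup: genuine Google crawlers resolve into Google's crawl domains.
        host, _, _ = socket.gethostbyaddr(ip)
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward confirmation: the name must resolve back to the same IP,
        # or anyone who controls their own reverse DNS could spoof the check.
        return ip in socket.gethostbyname_ex(host)[2]
    except OSError:  # covers socket.herror / socket.gaierror lookup failures
        return False

# Example policy (hypothetical helper names): treat unverified "crawlers"
# like any other anonymous visitor.
# if claims_to_be_googlebot(headers) and not is_real_googlebot(client_ip):
#     apply_bot_blocking_rules()
```

The same pattern works for other major crawlers, each with its own documented reverse-DNS suffix, which is essentially what the bot-control feature automates for you.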
Finally, you need to determine which of your administrators will have access to the WPS console and be able to (re)configure the service. As with any other cloud-based service, unauthorized access to the management console is usually game over, so it is essential to make sure authorizations and entitlements are properly defined and enforced. Another management decision involves who is alerted to WPS issues such as downtime and attacks – use the same process you follow for your own devices. Defining handoffs and accountabilities between your team and the WPS group before you move traffic is essential.

Test (or Suffer the Consequences)

Now that you have planned the deployment sufficiently, you need to work through testing to figure out what will break when you go live. Many WPS services claim you can be up and running in less than an hour, and that is indeed possible. But getting a site running is not the same as getting it running with full functionality and security. So we always recommend a test to understand the impact of front-ending your website with a WPS. You may decide any issues are more than outweighed by the security improvement from the WPS – or perhaps not. Either way, you should be able to have an educated discussion with senior management about the trade-offs before you flip the switch.

How can you test these services? Optimally you already have a staging site where you test functionality before it goes live, so you can run a full battery of QA tests through the WPS. Of course that might require the network team to temporarily add firewall rules to allow traffic to flow properly to a protected staging environment. You might also use DNS hocus pocus to route a tightly controlled slice of traffic through the WPS for testing, while the general public still connects directly to your site. Much of the testing mechanics depend on your internal web architecture, and WPS providers should be able to help you map out a testing plan.

Then you get to configure the WAF rules. Some WPS offerings have ‘learning’ capabilities, whereby they monitor site traffic during a burn-in period and then suggest rules to protect your applications. That can get you going quickly, and this is a Quick Wins initiative, so we can’t complain much. But automatically generated rules may not provide sufficient security. We favor an incremental approach, where you start with the most secure settings you can, see what breaks using the WPS, and then tune accordingly. Obviously some functions of your applications must not be impacted.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.