Fact-Based Network Security: In Action

As we wrap up our series on Fact-Based Network Security, let’s run through a simple scenario to illustrate the concepts. Remember, the idea is to figure out what on the list will provide the biggest impact for your organization, and then do it. We make trade-offs every day. Some things get done, others don’t. That’s the reality for everyone, so don’t feel bad that you can’t get everything done. Ever. But the difference between a successful security practitioner and someone looking for a job is that success is about consistently choosing the right things to get done.

Some folks intuitively know what’s important and seem to focus on those things. They exist – I’ve met them. They are rock stars, but when you try to analyze what they do, there isn’t really a pattern. They just know. Sorry, but you probably aren’t one of those folks. So you need a system – you know, a replicable process – to make those decisions. You may not have finely tuned intuition, but you can overcome that by consistently and somewhat ruthlessly getting the most important things done.

Scenario: WidgetCo and the Persistent Attacker

In our little story, you work for a manufacturer and your company makes widgets. They are valuable widgets, and represent intellectual property that most nations of the world (friend and foe alike) would love to get their hands on. So you know that your organization is a target. Your management gets it – they have a well-segmented network, with firewalls blocking access at the perimeter and another series of enclaves protecting R&D and other sensitive areas. You have IPS on those sensitive segments, as well as some full packet capture gear. Yes, you have a SIEM as well, but you are revisiting that selection. That’s another story for another day. Your users are reasonably sophisticated, but human. You run the security operations team, meaning that your folks do most of the management and configuration of security devices.

Knowing that you are a target means you need to assume attackers have compromised your network. But your tight egress filtering hasn’t shown any significant exfiltration. Your team’s task list seems infinite. There are a myriad of ports to open and close on the firewalls to support collaboration with specific business partners. Your company’s sales team needs access to a new logistical application so they can update customers on their shipments of widgets. And of course you are a large customer of a certain flavor of two-factor authentication token for all those reps. Your boss lights up your phone almost daily because she gets a lot of pressure to support those business partners. Your VP of Engineering is doing some cool stuff with a pretty famous research institution in the Northeast. The sales guys are on-site and don’t know what to tell the customer. And your egress filters just blocked an outbound attempt coming from the finance network, maybe due to the 2FA breach.

What do you do? No one likes to be told no, but you can’t get everything done. How do you choose?

Get back to the risks

If you think back to how we define risk, it’s pretty straightforward. Which assets are most important? Clearly it’s the R&D information, which you know is the target of persistent attackers. Sure, customer information is important (to them) and finance information would make some hedge fund manager another billion or two, but it would be bad if the designs for the next-generation widget ended up in the hands of a certain nation-state.
And when you think about the outcomes that are important to your business, protecting the company’s IP is the first and highest priority. It supports your billion-dollar valuation, and senior management doesn’t like to screw around with it. Thinking about the metrics that underlie various outcomes, you need to focus on indicators of compromise on those most sensitive networks. So gather configuration data and monitor the logs of those servers. Just to be sure (and to be ready if something goes south) you’ll also capture traffic on those networks, so you can React Faster and Better if and when an alert fires. It’s also a good idea to pay attention to the network topology and monitor for potential exposures, usually opened by a faulty firewall change or some other change error. Your operational system gathers this data on an ongoing basis, so when alerts fire you can jump into action.

Saying No

In our scenario, the R&D networks are most critical, pure and simple. So you task your operations team to provide access to the research institution as the top priority. Of course, not full unfettered access, but access to a new enclave where the researchers will collaborate. After your team makes the changes, you do a regression analysis, to make sure you didn’t open up any holes, using your network security configuration management tool. No alerts fired and the report came back clean. So you are done at that point, right? We don’t think so. Given the importance of this network, you keep a subset of the ops team with their eyes on the monitors collecting server logs, IDS, and full packet capture data. You have also tightened the egress filters just in case. Sure, some folks get grumpy when they are blocked, but you can’t take any chances. Without a baseline of the new traffic dynamics, and without a better feel for the log data, it’s hard to know what is normal and what could be a problem.

Admittedly this decision makes the VP of Sales unhappy because his folks can’t get access to the logistical information. They’re forced to have a support team in HQ pull a report and email it to the reps’ devices. It’s horribly inefficient, as the VP keeps telling you. But that’s not all. You also haven’t been able to fully investigate the potential issue on the financial network, although you did install a full packet capture device on that network to start
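To make the trade-off concrete, here is a minimal sketch (in Python) of the kind of risk-weighted prioritization the scenario describes: score each pending task by the criticality of the asset it touches and the exposure it addresses, then work the list from the top. The asset values, exposure scores, and task names are hypothetical illustrations, not WidgetCo data.

```python
# A minimal sketch of fact-based task prioritization, assuming a simple
# scoring model. All values below are hypothetical placeholders.

ASSET_CRITICALITY = {
    "rd_enclave": 10,   # next-generation widget designs (crown jewels)
    "finance_net": 6,   # financial data
    "sales_apps": 3,    # logistics / customer-facing convenience
}

tasks = [
    {"name": "Open enclave access for research partner", "asset": "rd_enclave", "exposure": 7},
    {"name": "Investigate blocked egress from finance",   "asset": "finance_net", "exposure": 8},
    {"name": "Expose logistics app to field sales",       "asset": "sales_apps", "exposure": 4},
]

def priority(task):
    # Higher score = protects a more critical asset against a larger exposure.
    return ASSET_CRITICALITY[task["asset"]] * task["exposure"]

for task in sorted(tasks, key=priority, reverse=True):
    print(f"{priority(task):3d}  {task['name']}")
```

The point is not the arithmetic – it is having a defensible, repeatable way to explain why one request gets done today and another waits.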


Recently on the Heavy Feed

Since we post most of the content for our blog series on the Heavy Feed (get it via the web or RSS), every so often we like to post links to our latest missives on the main feed. Within the next 10 days we’ll be wrapping both our Fact-based Network Security and Security Management 2.0 series. As always, we love feedback, discussion, dissension and the occasional troll to add comments, so fire away. We look forward to your participation.

Fact-based Network Security
  • Metrics and the Pursuit of Prioritization
  • Defining ‘Risk’
  • Outcomes and Operational Data
  • Operationalizing the Facts
  • Compliance Benefits

Security Management 2.0: Is it time to replace your SIEM?
  • Time to Replace Your SIEM? (new series)
  • Platform Evolution
  • Revisiting Requirements
  • Platform Evaluation, Part 1
  • Platform Evaluation, Part 2
  • Vendor Evaluation – Culling the Short List
  • Vendor Evaluation – Driving the PoC


Security Management 2.0: Making the Decision

It’s time – you are ready. You have done the work, including revisiting your requirements, evaluating your current platform in terms of your current and emerging requirements, assessing new vendors/platforms to develop a short list, and running a comprehensive proof of concept. Now it’s time to make the call. We know this is an important decision – we are here because your first attempt at this project wasn’t as successful as it needed to be. So let’s break down the decision to ensure you can make a good recommendation and feel comfortable with it.

That’s actually a good point to discuss. The output of our Security Management 2.0 process is not really a decision – it’s more of a recommendation. That’s the reality – the final decision will likely be made in the executive suite. That’s why we have focused so much on gathering data (quantitative where possible) – you will need to defend your recommendation until the purchase order is signed. And probably more afterwards.

We won’t mince words. This decision generally isn’t about the facts – especially since there is an incumbent in play, which is likely part of a big company that may have important relationships with heavies in your shop. So you need your ducks in a row and a compelling argument for any change. But that’s still only part of the decision process. In many cases, the (perceived) failure of your existing SIEM is self-inflicted. So we also need to evaluate and explain the causes of the failed project, with assurance that they will be addressed and avoided this time. If not, your successor will be in the same boat in another 2-3 years. So before you put your neck on the chopping block and advocate for a change (if that’s what you decide), do some deep internal analysis as well.

Introspection

The first thing is to make sure you really re-examined the existing platform in terms of the original goals. Did your original goals adequately map your needs at the time, or was there stuff you did not expect? How have your goals changed over time? Be honest! This is not the time to let your ego get in the way of doing what’s right, and you need a hard and fresh look at the decision to ensure you don’t repeat previous mistakes. Did you kick off this process because you were pissed at the original vendor? Or because they got bought and seemed to forget about the platform? Do you know what it will take to get the incumbent to where it needs to be – or whether that is even possible? Is it about throwing professional services at the issues? Is there a fundamental technology problem?

Remember, there are no right or wrong answers here, but the truth will become clear when you need to sell this to management. Some of you may be worried that management will look at the need for replacement as ‘your fault’ for choosing the incumbent, so make sure you have answers to these questions and that you aren’t falling into a self-delusion trap. You need your story straight and your motivations clear. Did you assess the issues critically the first time around? If it was a skills issue, have you addressed it? Can your folks build and maintain the platform moving forward? Or are you looking at a managed service to take that concern off the table? If it was a resource problem, do you now have enough staff for proper care and feeding? Yes, the new generation of platforms requires less expertise to keep operational, but don’t be naive – no matter what any sales rep says, you cannot simply set and forget them.
Whatever you pick will require expertise to deploy, manage, tune, and analyze reports. These platforms are not self-aware – not by a long shot. The last thing you want to do is set yourself up for failure, so make sure you ask the right questions ahead of time and be honest about the answers.

Expectations

The next main aspect of the decision is reconciling your expectations with reality. Revisiting requirements provides information on what you need the security management platform to do. You should be able to prioritize the specific use cases (compliance, security, forensics, operations), and have a pretty good feeling about whether the new platform or the incumbent will be able to meet your expectations. Remember, not everything is Priority #1, so pick your top three must-have items, and prioritize the requirements.

If you are enamored with some new features of the challenger(s), will your organization be able to leverage them? Firing off alerts faster may not be helpful if your team takes a week to investigate any issues, or cannot keep up with the increased demand. The new platform’s ability to look at application and database traffic doesn’t matter if the developers won’t help you understand normal behavior to build the rule set. Fancy network flow analysis can be a productivity sink if your DNS and directory infrastructure is a mess and you can’t reliably map IP addresses to user IDs.

Or does your existing product have too many features? Yes, it does happen that some organizations simply cannot take advantage of (or even handle) complex multi-variate correlation across the enterprise. Do you need to aggregate logs because organizational politics, or your team’s resources or skill set, prevent you from getting the job done? This might be a good reason to outsource or use a managed service. There isn’t a right or a wrong answer here, only the answer. And not being honest about that answer will land you in the hot seat again.

If you kickstarted this effort because the existing product missed something and it resulted in a breach, can you honestly say the new thing would (not ‘might’) detect that attack? We have certainly seen high-profile breaches result in tossing the old and bringing in the new (someone has to pay, after all), but make sure you
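One way to keep the recommendation grounded in the data you gathered is a simple weighted scorecard of prioritized requirements against each platform. The sketch below is illustrative only – the requirement names, weights, 1-5 ratings, and the “incumbent”/“challenger” labels are hypothetical placeholders, not a verdict on any product.

```python
# A minimal sketch of turning prioritized requirements into a defensible
# comparison. In practice the weights come from your requirements work and
# the ratings from your platform evaluation and PoC results.

requirements = {            # weight: 3 = must-have, 2 = important, 1 = nice-to-have
    "threat detection":   3,
    "compliance reports": 3,
    "forensics/search":   2,
    "ease of operation":  2,
    "scalability":        1,
}

scores = {                  # hypothetical 1-5 rating per platform, per requirement
    "incumbent":  {"threat detection": 2, "compliance reports": 4, "forensics/search": 2,
                   "ease of operation": 3, "scalability": 2},
    "challenger": {"threat detection": 4, "compliance reports": 4, "forensics/search": 4,
                   "ease of operation": 3, "scalability": 4},
}

def weighted_total(platform):
    # Sum of weight * rating across all requirements.
    return sum(requirements[req] * scores[platform][req] for req in requirements)

for platform in scores:
    print(platform, weighted_total(platform))
```

A number like this will never make the decision for you, but it gives you something concrete to defend in the executive suite.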


Friday Summary: September 9, 2011

I suppose that, all things considered, I’m a pretty nice guy. I tip well, stop my car so people can cross the street, and always put my laptop bag under the seat in front of me, instead of taking up valuable overhead luggage space. While I have had plenty of jobs that required the use of physical force over the years, I always made sure to keep my professional detachment and use the minimum amount necessary. (Okay, that’s to keep my ass out of jail as much as anything else, but still…). And animals? I’m a total sucker for them. I don’t mean in an inappropriate way, but I think they are just so darn cute. We even donate a bunch to local shelters and the Phoenix Zoo. Heck, all our cats are basically rescues… one of which randomly showed up in a relative’s yard during a BBQ, severely injured, and which we nursed back to health and kept.

Which is why my current murderous rampage against the birds crapping on our patio is completely out of character. We like birds. We even used to fill a bird feeder in the yard. Then all our trees grew out, and it seems we have the best shade in the neighborhood. On any given day, once the temperature tops 100 or so, our back patio is covered with dozens of birds doing nothing more than standing in the shade and crapping. And you know what birds eat, don’t you? Berries. Lots and lots of berries. Think they digest it all? Think again. Our patio is stained so badly we will never be able to get it clean. How do I know? I paid someone to power spray and hand scrub it with the kinds of chemicals banned from Fukushima – all to no avail. Not even with the special stuff I smuggled across the border from Mexico. They’ve even hit my grill. The bastards.

I’ve tried all sorts of things to keep them away, but I suspect I’ll need to build out something using an Arduino and a chainsaw by next summer. This year is a loss – 2 weeks after the big cleaning, even with me spraying it down every few days, our patio is unusable. I haven’t killed them yet. To be honest I don’t think that will work – more likely it would just land me on the local news. But I do grill a lot more chicken and turkey out there. Oh yeah, smell the sweet smell of superior birds roasting in agony.

Hey… did you hear some dudes named DigiNotar got hacked? On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian’s DR article on DAM.
  • Adrian quoted on dangers to law enforcement from the recent hack. My Spanish is good, no?
  • Adrian’s DR article on Fraud Detection and DAM.

Favorite Securosis Posts
  • Adrian Lane: Security Management 2.0: Vendor Evaluation. Mike’s pushing the envelope here, but this is the only way to figure out how the product really works.
  • Mike Rothman & David Mortman: Data Security Lifecycle 2.0. With this cloud stuff, our underlying computing foundation is changing. This post assembles a lot of the latest and greatest about how to protect the data.

Other Securosis Posts
  • Speaking at OWASP: September 22 and 23.
  • Incite 9/7/2011: Decisions, Decisions.
  • Security Management 2.0: Vendor Evaluation – Culling the Short List.
  • The New Path of Least Resistance.
  • Making Bets.

Favorite Outside Posts
  • Gunnar: Do we know how to make software?
  • David Mortman: Quick Blip: Hoff In The Cube at VMworld 2011: On VMware Security.
  • Mike Rothman: The Good, Bad, and Ugly of Technical Acquisitions. Not sure what Amrit is doing now, besides writing great summaries of what happens when Big Company X buys small start-up Y.
  • Adrian Lane: Don’t Hate The ‘Playas’ – Hate The Game. My fav this week is Mike’s Dark Reading post – it gets to the heart of the issue.
  • Pepper: Protecting a Laptop from Simple and Sophisticated Attacks. Mike clearly thought hard about risks, and took some very unusual steps to protect them as well as he could manage.
  • Rich: OS X won’t let you properly remove bad DigiNotar certificates. I know I need to write this up, but being sick has gotten in the way. Apple really needs to address this – for PR reasons as much as for user security.

Research Reports and Presentations
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.

Top News and Posts
  • Copyright Troll Righthaven Goes on Life Support. Die, troll, die!
  • Star Wars Fans Get Pwned.
  • Fraudulent Google credential found in the wild.
  • Evidence of Infected SCADA Systems Washes Up in Support Forums.
  • VMware: The Console Blog: VMware Acquires PacketMotion.
  • Don Norman: Google doesn’t get people, it sells them.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Russ, in response to Incite 9/7/2011: Decisions, Decisions.

Re Please Stop! Dear Adrian, While I believe one of the useful roles Securosis can play in the industry is to help turn down the hype on over-blown issues, in this particular case I’m not sure I agree with your conclusion. I spent a career in aviation safety, and found that what the average line pilot was talking about every day had nowhere near the amount of aviation safety content we as aviation safety advocates thought to be adequate (an example would be the extraneous cockpit conversation prior to the Colgan Air Flight 3407 crash in Buffalo). Could it be that the fact that APTs are not brought up in your daily conversations with firms could be an indication of how far we have to go in creating a


Security Management 2.0: Vendor Evaluation – Driving the PoC

As we discussed in the last post, when considering new security management platforms, it’s critical to cull your short list based on your requirements, and then move into the next step of the evaluation process – the Proof of Concept (PoC). Our PoC process is somewhat controversial – mostly because vendors hate it. Why? Because it’s about you and your needs, not them and their product. But you are the buyer, right? Always remember that.

Most SIEM vendors want to push you through a 3-5 day eval of their technology on their terms, with their guy driving. You already have a product in place so you know the drill. You defined a few use cases important to you, and then the vendor (and their SE) stood the product up and ran through those use cases. They brought in a defined set of activities for each day, and you ended the test with a good idea of how their technology works, right? Actually, wrong. The vendor PoC process is built to highlight their product’s strengths and hide its weaknesses. We know this from first-hand experience – we have built them for vendors in our past roles. Your objective must be to work through your paces, not theirs. To find the warts now – not when you are responding to an incident. It’s wacky that some vendors get scared by a more open PoC process, but their goal is to win the deal, and they put a lot of sweat into scripting their process so it goes smoothly for everyone involved. We hate to say it, but smooth sailing is not the point! The vendor will always say “We can do that!” – it’s your job to find out how well – or how awkwardly.

So set up evaluation criteria based on your requirements and use cases. Your criteria don’t need to be complicated. Your requirements should spell out the key capabilities you need, and then plan to further evaluate each challenger based on intangibles such as set-up/configuration, change management, customization, user experience/ease of use, etc. Before you start, have your team assess your current platform as a basis for comparison. As you start the PoC, we recommend you invest in screen capture technology. It’s hard to remember what these tools did and how they did it later – especially after you’ve seen a few of them work through the same procedures. So capture as much video as you can of the user experience – it will come in very handy when you need to make a decision. We’ll discuss that in the next post. Without further ado, let’s jump into the PoC.

Stand it up, for reals

One of the advantages of testing security management products is that you can actually monitor production systems without worrying about blowing them up, taking them down, or adversely impacting anything. So we recommend you do just that. Plan to pull data from your firewalls, your IDS/IPS systems, and your key servers. Not all devices, of course, but enough to get a feel for how you need to set up the collectors. You will also want to configure a custom data source or two and integrate with your directory store to see how that works. Actually do a configuration and bootstrap the system in your environment. Keep in mind that the PoC is a great time to get some professional services help – gratis. This is part of the sales process for the vendors, so if you want to model out a targeted attack and then enumerate the rules in the system, have the SE teach you how to do it yourself. Then model out another attack and build the rules yourself, without help.
The key is to learn how to run the system and get comfortable – if you do switch, you will be living with your choice for a long time. Focus on visualization, your view into the system. Configure some dashboards and see the results. Mess around with the reports a bit. Tighten the thresholds of the alerts. Does the notification system work? Will the alerts be survivable at production levels for years? Is the information useful? These are all things you need to do as part of kicking each challenger’s tires.

If compliance is your key requirement, use PCI as an example. Start pulling data from your protected network segment. Pump that data through the PCI reporting process. Is the data correct and useful for everybody with an interest? Are the reports comprehensive? Will you need to customize the report for any reason? You need to answer these kinds of questions during the PoC.

Run a Red Team

Run a simulated attack against yourself. We know actually attacking production systems would make you very unpopular with the ops folks, so set up a lab environment. But otherwise, you want this situation to be as realistic as possible. Have attackers breach test systems with attack tools. Have your defenders try to figure out what is going on, as it’s happening. Does the system alert as it should? Will you need to heavily customize the rule set? Can you identify the nature of the attack quickly? Does their super-duper forensic drill-down give you the view you need? The clock is ticking, so how easy is it to use the system to search for clues? Obviously this isn’t a real incident situation, so you’ll take some editorial liberties, and that’s fine. You want a feel for how the system performs in near-real-time. If an attacker is in your systems, will you find them? In time to stop or catch them? Once you know they are there, can you tell what they are doing? A Red Team PoC will help you determine that.

Do a Post-Mortem

Once you are done with the Red Team exercise, you should have a bunch of data that will make for a nice forensic investigation of what the attack team did, and perhaps what the defense team
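A useful artifact to bring out of the Red Team exercise is a simple detection scorecard: for every attack step the red team injected, note whether the platform under test alerted and how long it took. The sketch below is only an illustration of that bookkeeping – the step names, timestamps, and threshold are hypothetical, not output from any product.

```python
# A minimal sketch of scoring a Red Team PoC run: for each injected attack
# step, record whether the platform alerted and the time-to-alert.
from datetime import datetime

injected_steps = {   # step -> time the red team performed it (hypothetical)
    "phishing payload executed":   datetime(2011, 9, 9, 10, 0),
    "lateral movement to db host": datetime(2011, 9, 9, 10, 25),
    "bulk data staged for exfil":  datetime(2011, 9, 9, 11, 10),
}

alerts_fired = {     # step -> time the platform alerted (None = missed)
    "phishing payload executed":   datetime(2011, 9, 9, 10, 6),
    "lateral movement to db host": None,
    "bulk data staged for exfil":  datetime(2011, 9, 9, 11, 52),
}

detected = 0
for step, injected_at in injected_steps.items():
    alerted_at = alerts_fired.get(step)
    if alerted_at is None:
        print(f"MISSED   {step}")
        continue
    detected += 1
    print(f"DETECTED {step} in {alerted_at - injected_at}")

print(f"Coverage: {detected}/{len(injected_steps)} steps detected")
```

Run the same scorecard against each challenger (and your incumbent) and you have a like-for-like comparison to carry into the decision.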


Speaking at OWASP: September 22 and 23

Gunnar Peterson and I will be presenting at OWASP, September 20-23. OWASP AppSec USA will be at the Minneapolis Convention Center in – you guessed it – Minneapolis, Minnesota. This year’s theme is “Your life is in the cloud”, so there are plenty of talks on mobile app security and how to weave security into your cloud environment. Gunnar is presenting on Mobile Web Services, discussing mobile application vulnerabilities in the web services layer. I’ll be presenting CloudSec 12-Step, a look at foundational security precautions developers need to consider when building and deploying cloud applications. They have scheduled many other great talks as well.

And personally, I am willing to bet autumn weather in Minnesota will be awesome! Okay, perhaps my perspective is skewed – Arizona just set a record for the hottest August in history – some 33 days this summer over 110 degrees – but regardless, Minnesota should be very nice. Come by and check out the presentations. As always, we look forward to seeing friends – shoot us an email if you want to meet up that week.


Incite 9/7/2011: Decisions, Decisions

Making decisions is very hard for most people. Not for me. The Boss and I constantly discuss a single issue over and over again as she debates all aspects of a big decision. I try to be patient, but patience is, uh, not my forte. I know it’s her process, and rushing it usually lands me a spot in the doghouse, but it’s still hard to understand. Decisions are easy for me. I do the work, look at the upside and downside, and make the call. Next. I don’t look back either. When I make a decision, I’m pretty confident it’s the right thing to do at that point in time. That’s the key. Any decision any of us make at any time is presumably the best decision right then. 10 minutes or 10 years from now things will have changed. Things always change. The question is how much.

Sometimes you’ll find your decisions are wrong. Actually, often your decisions are wrong. Yeah, it’s that human thing. I’ve been known to weigh intuition higher than data in some decisions. Especially relative to my career choices. If it felt right, whatever that means, I would go for it. And I’ve been wrong in those choices, a lot. But I guess I come from the school that says it’s better to do stuff and screw up than to not do anything – stuck in a cycle of analysis paralysis. I’m sure I’ll have regrets at some point, but it won’t be because I couldn’t make a decision.

It’s worth mentioning that I’m not opposed to revisiting a decision, but only if something has changed that affects my underlying assumptions. Lots of folks stew over a decision, poring over the same data over and over again, in an endless cycle of angst and second guessing. If the data doesn’t change, neither should the decision. But these folks figure that if they question themselves constantly for long enough, the decision will become easy. But often, they never achieve peace of mind. Gosh, that has to be hard.

I pay a lot more attention to the downside of any decision. In most cases, the worst case scenario is you upset someone or waste time and/or money. Obviously I want to avoid those outcomes where possible, but those are manageable downsides for me. So I don’t obsess over decisions. I make the decision and I move on. Second guessing isn’t productive. Part of life is taking risks and adapting as needed. And cleaning up the inevitable mess when you are wrong. I’m okay with that.

-Mike

Photo credit: “Lose your sleep before your decision, not after it” originally uploaded by Scott McLeod

Incite 4 U

Liar, liar, pants on fire: Any time I catch my kids telling me less than the truth, I break into the “Liar, liar” refrain over and over again. Yes, I look stupid, but they hate it even more, so it’s worth doing. One of the (former) Anonymous folks pretty much pinpoints the fundamental skill set of social engineering – lying. Okay, there is grey around lies, but ultimately that’s what it is. Does that make the ability to defend against lies any less important? Of course not. Nor am I judging folks who practice social engineering daily and professionally. But if it walks and quacks like a duck, you might as well call it a duck. – MR

Misplaced confidence: There will be a lot written over the next weeks and months about the hack of the Certificate Authority DigiNotar, including a post I’m working on. But if you want to quickly learn a key lesson, check out these highlights from the investigation report – thanks to Ira Victor and the SANS forensics blog. No logging. Flat network. Unpatched Internet-facing systems. Total security fundamentals FAIL.
Even better, they kept the breach hidden for a month. The breach probably happened many months earlier than their claimed date. Keep in mind this was a security infrastructure company. You know, the folks who are supposed to be providing a measure of trust on the Internet, and helping others secure themselves. Talk about making most of the mistakes in the book! And BTW – as I’ve said before, I know for a fact other security companies have been breached in recent years and failed to disclose. How’s that for boosting consumer confidence? – RM

They stole what?: When it comes to breach notification laws, California has been at the forefront for more than a decade. Now California has updated its breach disclosure law to require disclosure of additional incident data. Most firms adhering to breach notification laws include so little information that the recipients of a breach notification have no clue what it means to them, nor what steps they need to take in order to protect themselves. Credit monitoring services are more of a red herring – and occasionally a devious revenue opportunity for breached companies to offset notification costs. So California Senate Bill 24 (SB-24) requires companies to include additional information on what happened, and explicitly state what type of data was leaked. Will it help? As usual, it depends on what the company decides to put in the letter, but I don’t have high hopes. Will security vendors be pitching monitoring software to aid companies in identifying what was stolen? Absolutely, but many firms’ legal teams will not be eager to have that data hanging around because it’s often a smoking gun, and they will choose ignorance over security to reduce liability. As they always do. – AL

Ethics, hypocrisy, and certifications: You have to hand it to Jericho, one of the drivers of attrition.org. He puts the time in to build somewhat airtight cases, usually turning folks’ words against them in interesting ways. I wouldn’t want to take him on in a debate, that’s for sure. His recent post at Infosec Island, clearly pointing out the hypocrisy of the CISSP folks, is a hoot. As usual, you can find all


Data Security Lifecycle 2.0

We reference this content a lot, so I decided to compile it all into a single post. This is the original content, including internal links, and has not been re-edited.

Introduction

Four years ago I wrote the initial Data Security Lifecycle and a series of posts covering the constituent technologies. In 2009 I updated it to better fit cloud computing, and it was incorporated into the Cloud Security Alliance Guidance, but I have never been happy with that work. It was rushed and didn’t address cloud specifics nearly sufficiently. Adrian and I just spent a bunch of time updating the cycle, and it is now a much better representation of the real world. Keep in mind that this is a high-level model to help guide your decisions, but we think this time around we were able to identify places where it can more specifically guide your data security endeavors. (As a side note, you might notice I use “data security” and “information-centric security” interchangeably. I think infocentric is more accurate, but data security is more recognized, so that’s what I tend to use.)

If you are familiar with the previous model you will immediately notice that this one is much more complex. We hope it’s also much more useful. The old model really only listed controls for data in different phases of the lifecycle – and didn’t account for location, ownership, access methods, and other factors. This update should better reflect the more complex environments and use cases we tend to see these days. Due to its complexity, we need to break the new Lifecycle into a series of posts. In this first post we will revisit the basic lifecycle, and in the next post we will add locations and access.

The lifecycle includes six phases from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed).

  • Create: This is probably better named Create/Update because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
  • Store: Storing is the act of committing the digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
  • Use: Data is viewed, processed, or otherwise used in some sort of activity.
  • Share: Data is exchanged between users, customers, and partners.
  • Archive: Data leaves active use and enters long-term storage.
  • Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).

These high-level activities describe the major phases of a datum’s life, and in a future post we will cover security controls for each phase. But before we discuss controls we need to incorporate two additional aspects: locations and access devices.

Locations and Access

In our last post we reviewed the Data Security Lifecycle, but other than some minor wording changes (and a prettier graphic thanks to PowerPoint SmartArt) it was the same as our four-year-old original version. But as we mentioned, quite a bit has changed since then, exemplified by the emergence and adoption of cloud computing and increased mobility. Although the Lifecycle itself still applies to basic, traditional infrastructure, we will focus on these more complex use cases, which better reflect what most of you are dealing with on a day to day basis.
Locations

One gap in the original Lifecycle was that it failed to adequately address movement of data between repositories, environments, and organizations. A large amount of enterprise data now transitions between a variety of storage locations, applications, and operating environments. Even data created in a locked-down application may find itself backed up someplace else, replicated to alternative standby environments, or exported for processing by other applications. And all of this can happen at any phase of the Lifecycle.

We can illustrate this by thinking of the Lifecycle not as a single, linear operation, but as a series of smaller lifecycles running in different operating environments. At nearly any phase data can move into, out of, and between these environments – the key for data security is identifying these movements and applying the right controls at the right security boundaries. As with cloud deployment models, these locations may be internal, external, public, private, hybrid, and so on. Some may be cloud providers, others traditional outsourcers, or perhaps multiple locations within a single data center. For data security, at this point there are four things to understand:

  • Where are the potential locations for my data?
  • What are the lifecycles and controls in each of those locations?
  • Where in each lifecycle can data move between locations?
  • How does data move between locations (via what channel)?

Access

Now that we know where our data lives and how it moves, we need to know who is accessing it and how. There are two factors here:

  • Who accesses the data?
  • How can they access it (device & channel)?

Data today is accessed from all sorts of different devices. The days of employees only accessing data through restrictive applications on locked-down desktops are quickly coming to an end (with a few exceptions). These devices have different security characteristics and may use different applications, especially with applications we’ve moved to SaaS providers – who often build custom applications for mobile devices, which offer different functionality than PCs.

Later in the model we will deal with who, but the diagram below shows how complex this can be – with a variety of data locations (and application environments), each with its own data lifecycle, all accessed by a variety of devices in different locations. Some data lives entirely within a single location, while other data moves in and out of various locations… and sometimes directly between external providers. This completes our “topographic map” of the Lifecycle. In our next post we will dig into mapping data flow and controls. In the next few posts we will finish covering background material, and
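For readers who like to see the model in concrete terms, here is a minimal sketch of that topographic map as a data structure: the six phases, a handful of hypothetical locations each running its own subset of the lifecycle, the devices that can reach each location, and the channels where data crosses a boundary (which is where controls belong). The location, device, and channel names are invented for illustration and are not part of the model itself.

```python
# A minimal sketch of the lifecycle "topographic map": phases, locations,
# access devices, and movement channels. All names are hypothetical examples.
from enum import Enum

class Phase(Enum):
    CREATE = 1
    STORE = 2
    USE = 3
    SHARE = 4
    ARCHIVE = 5
    DESTROY = 6

locations = {
    "internal_dc":   {"phases": set(Phase),
                      "devices": {"managed desktop", "laptop"}},
    "saas_provider": {"phases": {Phase.CREATE, Phase.STORE, Phase.USE, Phase.SHARE},
                      "devices": {"laptop", "smartphone", "tablet"}},
    "backup_vault":  {"phases": {Phase.STORE, Phase.ARCHIVE, Phase.DESTROY},
                      "devices": set()},  # no direct end-user access
}

# Channels where data moves between locations -- each is a security boundary.
channels = {
    ("internal_dc", "saas_provider"): "API export over TLS",
    ("internal_dc", "backup_vault"):  "nightly backup job",
}

for (src, dst), channel in channels.items():
    print(f"{src} -> {dst} via {channel}: apply controls at this boundary")
```

Answering the four location questions above amounts to filling in these structures for your own environment.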


Security Management 2.0: Vendor Evaluation—Culling the Short List

So far we have discussed a bit of how security management platforms have evolved, how your requirements have changed since you first deployed the platform, and how you need to evaluate your current platform (Part 1, Part 2) in light of both. Now it’s time to get into the meat of the decision process by defining your selection criteria for your Security Management 2.0 platform.

Much of defining your evaluation criteria is wading objectively through vendor hyperbole. As technology markets mature (and SIEM is pretty mature), the capabilities of each offering tend to get pretty close. The messaging is very similar and it’s increasingly hard to differentiate one platform from another. Given your unhappiness with your current platform (or you wouldn’t be reading this, right?), it’s important to distill down what a platform does and what it doesn’t, as early in the process as you can.

We will look at the vendor evaluation process in two phases. In this post, we’ll help you define a short list of potential replacements. Maybe you use a formal RFP/RFI to cull the 25 companies in the space down to 3-5, maybe you don’t. You’ll see soon enough why you can’t run 10 vendors through even the first stage of this process. At the conclusion of the short list exercise, you’ll need to test one or two new platforms during a Proof of Concept, which we’ll detail in the next post. We don’t recommend you skip directly to the test, by the way. Each platform has strengths and weaknesses, and just because a vendor happens to be in the right portion of a magical chart doesn’t mean it’s the right choice for you. Do your homework. All of it. Even if you don’t feel like it.

Defining the Short List

A few aspects of the selection criteria should be evaluated with a broader group of challengers. Think 3-5 at this point. You need to prioritize each of these areas based on your requirements. That’s why you spent so much time earlier defining and gaining consensus on what’s important for replacing your platform. Your main tool in this stage of the process is what we kindly call the dog and pony show. That’s when the vendor brings in their sales folks and sales engineers (SEs) to tell you how their product is awesome and will solve every problem you have. Of course, what they won’t be ready for (unless they read this post as well) is the ‘intensity’ of your KGB-style interrogation techniques. Basically, you know what’s important to you, and you need confidence that any vendor passing through this gauntlet (and moving on to the PoC) will be able to meet your requirements.

Let’s talk a bit about tactics to get the answers you need, based on the areas where your existing product is lacking (from the platform evaluation). You need detailed answers during these meetings. This meeting is not a 30-slide PowerPoint and a generic demo. Make sure the challenger understands those expectations ahead of the meeting, so they have the right folks in the room. If they bring the wrong people, cross them off the short list. It’s as simple as that – it’s not like you have a lot of time to waste, right?

Security: We recommend you put together a scenario as a case study for each challenger. You want to understand how they’d detect an attack based on the information sources they gather and how they configure their rule sets and alerts. Make it detailed, but not totally ridiculous. So basically, dumb down your existing environment a bit and run them through an attack scenario you’ve seen recently.
This will be a good exercise for seeing how the data they collect is used to address a major security management use case: detecting an emerging attack quickly. Have the SE walk you through setting up or customizing a rule. Use your own scenario to reduce the likelihood of the SE having a pre-built rule. You want to really understand how the rules work, because you will spend a lot of time configuring your rules.

Compliance: Next, you need to understand what level of automation exists for compliance purposes. Ask the SE to show you the process of preparing for an audit. And no, showing you a list of 2,000 reports, most called PCI X.X, is not sufficient. Ask them to produce samples for a handful of critical reports you rely upon to see how closely they hit the mark – you can see the difference between reports developed by an engineer and those created by an auditor. You need to understand where the data is coming from, and hopefully they will have a demo data set to show you a populated report. The last thing you want to learn is that their reports don’t pull from the right data sources two days before an audit.

Integration: In this part of the discussion delve into how the product integrates with your existing IT stack. How does the platform pull data from your identity management system? CMDB? What about data collection? Are the connectors pre-built and maintained by the vendor? What about custom connectors? Is there an SDK available, or does it require a bunch of professional services?

Forensics: Vendors throw around the term root cause analysis frequently, while rarely substantiating how their tool is used to work through an incident. Have the SE literally walk you through an investigation based on their sample data set. Yes, you’ll test this yourself later, but get a feel for what tools they have built in and how they can be used by the SE, who should really know how to use the system.

Scalability: If your biggest issue is a requirement for more power, then you’ll want to know (at a very granular level) how each challenger solves the problem. Dive into their data model and their deployment architectures, and have them tell stories about their biggest implementations. If scalability is a
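If it helps to anchor the attack-scenario case study, the sketch below shows vendor-neutral correlation logic of the kind you would ask the SE to build live: repeated authentication failures on a host, followed by a success, followed by an outbound connection to a previously unseen destination. The event fields, threshold, and host names are hypothetical, and every SIEM expresses rules like this in its own language – the point is to watch the SE construct the equivalent in theirs.

```python
# A minimal, vendor-neutral sketch of a correlation rule: brute-force success
# followed by outbound traffic to a new destination. All fields hypothetical.
from collections import defaultdict

FAIL_THRESHOLD = 5

def correlate(events):
    """events: time-ordered dicts with 'host', 'type', and optional 'dest'."""
    failures = defaultdict(int)
    compromised = set()
    known_dests = defaultdict(set)
    alerts = []

    for e in events:
        host, etype = e["host"], e["type"]
        if etype == "auth_failure":
            failures[host] += 1
        elif etype == "auth_success" and failures[host] >= FAIL_THRESHOLD:
            compromised.add(host)  # brute force likely succeeded
        elif etype == "outbound_conn":
            if host in compromised and e["dest"] not in known_dests[host]:
                alerts.append(f"ALERT: possible compromise and exfil from {host} to {e['dest']}")
            known_dests[host].add(e["dest"])
    return alerts

sample = ([{"host": "srv-42", "type": "auth_failure"}] * 6 +
          [{"host": "srv-42", "type": "auth_success"},
           {"host": "srv-42", "type": "outbound_conn", "dest": "203.0.113.7"}])
print(correlate(sample))
```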


The New Path of Least Resistance

It’s hard to believe it has been 10 years since the 9/11 terrorist attacks on the US. I remember that day like it was yesterday. I actually flew into the Boston airport that morning. In hindsight, those attacks opened our eyes to a previously overlooked attack vector – using a passenger jet as a missile. The folks running national security for the US had all sorts of scenarios for how we could be attacked on our own soil, but I’m not sure that vector was on their lists. It seems we security folks have to start thinking in a similarly orthogonal pattern. Since we started hearing some details of the EMC/RSA breach, and of the attacks on the Comodo and DigiNotar CAs, it has become clear the attackers have been re-thinking their paths of least resistance.

Let me back up a bit. Attackers will follow the path of least resistance to their intended target – they always have. Over the past few years, the path of least resistance has clearly involved exploiting both application and user weakness, rather than breaking technical security measures in network infrastructure. Why break down a door if the nincompoop on the other side will just let you in, and key Internet-facing apps don’t even have locks? That’s what we are seeing in practice. If an attacker is trying to breach a soft target, the user and application attack vectors remain the path of least resistance for the foreseeable future. The skills gap between the ends is pretty ugly, and not getting better. That’s why we spend so much time focusing on Reacting Faster and Better – it’s pretty much the only way to survive in an age of inevitable compromise.

But what if the target is not soft? By that I mean a well-fortified environment, without the typical user and/or application holes we typically see exploited. A well-segmented and heavily-monitored infrastructure without the standard attack vectors. For example, one of the big defense contractors, who protect the national secrets of the defense/industrial base. Breaking down the doors here is very hard, and in many cases not worth the effort. So the attackers have identified a new low-resistance path – the security infrastructure protecting those hard targets.

It was very clear with the RSA attack. That was all about gaining access to the token seeds and using them to compromise the real targets: US defense contractors. Even if RSA was as well-protected as a defense contractor, breaking into RSA once provided a leg up on all the defense contractors using RSA tokens. It’s not as clear with the Comodo or DigiNotar attacks. Those seem to be more politically motivated, but still represent an interesting redefinition of the man-in-the-middle attack: compromising the certificate trust chain that identifies legitimate websites.

So what? What impact does this have on day-to-day operations? Frankly, not much – so many of us are so far behind on basic attempts to block and tackle on the stuff we already know about. But for those hard targets out there, it’s time to expand your threat models to look at the technology that enforces your security controls. I remember attending a Black Hat session a few years back by Tom Ptacek of Matasano, where he discussed his research into compromising pretty well known IT management technology. That’s the kind of analysis we need looking forward. Push vendors to provide information about how they attack their own products and what they find. But don’t expect much. Vendors do not, as a rule, proactively try to poke holes in their own stuff.
And if they do, they aren’t likely to admit weakness by disclosing what they find. So be prepared to do (and fund) much of this work yourself. But that’s beside the point. It’s time to start thinking that the new path of least resistance may be your security technology. It’s a challenge to the folks who build security products, as well as to those of you who protect hard targets. Who will rise to this challenge?

Photo credit: “Path of Least Resistance” originally uploaded by Billtacular


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.