
Incite 7/14/2010: Mello Yello

I'm discovering that you do mellow with age. I remember when I first met the Boss how mellow and laid back her Dad was. Part of it is because he doesn't hear too well anymore, which makes him blissfully unaware of what's going on. But he's also mellowed, at least according to my mother-in-law. He was evidently quite a hothead 40 years ago, but not anymore. She warned me I'd mellow too over time, but I just laughed. Yeah, yeah, sure I will. But sure enough, it's happening. Yes, the kids still push my buttons and make me nuts, but most other things just don't get me too fired up anymore.

A case in point: the Securosis team got together last week for another of our world domination strategy sessions. On the trip back to the airport, I heard strange music. We had rented a Kia Soul, with the dancing hamsters and all, so I figured it might be the car. But it was my iPad cranking music. WTF? What gremlin turned on my iPad? It took me a few seconds, but I found the culprit. I carry an external keyboard with the iPad, and evidently it turned on, connected to the Pad, and proceeded to try to log in a bunch of times with whatever random strings were typed on the keyboard in my case. Turns out the security on the iPad works – at least against a brute force attack. I was locked out and needed to sync with my computer in the office to get back in. I had my laptop, so I wasn't totally out of business. But I was about 80% of the way through Dexter: Season 2 and had planned to watch a few more episodes on the flight home. Crap – no iPad, no Dexter. Years ago, this would have made me crazy. Frackin' security. Frackin' iPad. Hate hate hate. But now it was all good. I didn't give it another thought and queued up for an Angry Birds extravaganza on my phone. Then I remembered that I had the Dexter episodes on my laptop. Hurray! And I got an unexpected upgrade, with my very own power outlet at my seat, so my mostly depleted battery wasn't an issue. Double hurray!! I could have made myself crazy, but what's the point of that?

Another situation arose lately when I had to defuse a pretty touchy situation between friends. It could have gotten physical, and therefore ugly, with long-term ramifications. But diplomatic Mike got in, made peace, and positioned everyone to kiss and make up later. Not too long ago, I probably would have gotten caught up in the drama and made the situation worse. As I was telling the Boss the story, she deadpanned that it must be the end of the world. When I shot her a puzzled look, she just commented that when I'm the voice of reason, Armageddon can't be too far behind.

– Mike

Photo credits: "mello yello" originally uploaded by Xopher Smith

Recent Securosis Posts

  • School's out for Summer
  • Taking the High Road
  • Friday Summary: July 9, 2010
  • Top 3 Steps to Simplify DLP Without Compromise
  • Preliminary Results from the Data Security Survey
  • Tokenization Architecture – The Basics
  • NSO Quant: Enumerate and Scope Sub-Processes

Incite 4 U

Since we provided an Incite-only mailing list option, we've started highlighting our other weekly posts above. One to definitely check out is the Preliminary Results from the Data Security Survey, since there is great data in there about what's happening and what's working. Rich will be doing a more detailed analysis in the short term, so stay tuned for that.

You can't be half global… – Andy Grove (yeah, the Intel guy) started a good discussion about the US tech industry and job creation. Gunnar weighed in as well with some concerns about lost knowledge and chain of experience. I don't get it. Is Intel a US company? Well, it's headquartered in the US, but it's a global company. So is GE. And Cisco and Apple and IBM and HP. Since when does a country have a scoreboard for manufacturing stuff? The scoreboard is on Wall Street, and it's measured in profit and loss. So big companies send commodity jobs wherever they find the best mix of cost, efficiency, and quality. We don't have an innovation issue here in the US – we have a wage issue. The pay scales of some job functions in the US have gone way over their (international) value, so those jobs go somewhere else. Relative to job creation, free markets are unforgiving and skill sets need to evolve. If Apple could hire folks in the US to make iPhones for $10 a week, I suspect they would. But they can't, so they don't. If the point is that we miss out on the next wave of innovation because we don't assemble the products in the US, I think that's hogwash. These big companies have figured out that sustainable advantage means moving out of commodity markets. Too bad a lot of workers don't understand that yet. – MR

Tinfoil hats – Cyber Shield? Really? A giant monitoring project? I don't really understand how a colossal systems monitoring project is going to shield critical IT infrastructure. It may detect cyber threats, but only if they know what they are looking for. The actual efforts are classified, so we can't be sure what type of monitoring they are planning to do. Maybe it's space alien technology we have never seen before, implemented in ways we could never have dreamed of. Or maybe it's a couple hundred million dollars to collect log data and worry about analysis later. Seriously, if the goal here is to protect critical infrastructure, here's some free advice: take critical systems off the freakin' Internet! Yeah, putting these systems on the 'Net many years ago was a mistake, because these organizations are both naive and cheap. Admit the mistake and spend your $100M


Simple Ideas to Start Improving the Economics of Cybersecurity

Today Howard Schmidt meets with Secretary of Commerce Gary Locke and Department of Homeland Security Secretary Janet Napolitano to discuss ideas for changing the economics of cybersecurity. Howard knows his stuff, and recognizes that this isn't a technology problem, nor something that can be improved with some new security standard or checklist. Crime is a function of economics, and electronic crime is no exception. I spend a lot of time thinking about these issues, and here are a few simple suggestions to get us started:

  • Eliminate the use of Social Security Numbers as the primary identifier for our credit histories and financial accounts. Phase the change in over time. When the banks all scream, ask them how they do it in Europe and other regions.
  • Enforce a shared-costs model for the credit card brands. Right now, banks and merchants carry nearly all the financial costs associated with credit card fraud. Although PCI is helping, it doesn't address the fundamental weaknesses of the current magnetic stripe based system. Having the card brands share in losses will increase their motivation to pick up the pace of innovation in card security.
  • Require banks to extend the window of protection for fraudulent transactions on consumer and business bank accounts. Rather than forcing some series of fraud detection or verification requirements, extending the window during which consumers and businesses aren't liable for losses will motivate banks to make the structural changes themselves – for example, by requiring transaction confirmation for ACH transfers over a certain amount.
  • Within the government, require agencies to pay for incident response costs associated with cybercrime at the business unit level, instead of allowing it to be a shared cost borne by IT and security. This will motivate individual units to better prioritize security, since the money will come out of their own budgets instead of being funded by IT, which doesn't have operational control of business decisions.

Just a few quick ideas to get us started. All of them are focused on changing the economics, leaving the technical and process details to work themselves out. There are two big gaps that aren't addressed here:

  • Critical infrastructure/SCADA: I think this is an area where we will need to require prescriptive controls (air gaps and virtual air gaps) in regulation, with penalties. Since that isn't a pure economic incentive, I didn't include it above.
  • Corporate intellectual property: There isn't much the government can do here, although companies can adopt the practice of having business units pay for incident response costs (no, I don't think I'll live to see that day).

Any other ideas?


Home Business Payment Security

We have covered this before, but every now and again I run into a new slant on who bears responsibility for online transaction safety. Bank? Individual? If both, where do the responsibilities begin and end?

Over the last year a few friends, ejected from longtime professions by the current economic depression, have started online businesses. A couple of these individuals did not even know what HTML was last year – but now they are building web sites, starting blogs, and … taking credit cards online. It came as a surprise to several of these folks when their payment processors fined them, or disrupted service entirely, because they had failed a remote security audit. The web site itself passed its audit, with a handful of cautionary notices the auditor recommended they address. What failed was the management terminal – their home computer, used to dial into the account, had several severe issues. What made my friend aware that there was a problem at all was extra charges on his bill for, in essence, having crappy security. What a novel idea to raise awareness and motivate merchants!

I applaud providing resources to merchants to help secure their environments. I also worry that this is a method for payment processors to "pass the buck" and lower their own security obligations. That's probably because I am a cynic by nature, which is why I ended up in security, but that's a different story. Not having started a small business that takes credit cards online, I was ignorant of many measures payment processors are taking to raise the bar for security on end-user systems. They are sending out guidance on basic security measures, conducting assessments, providing results, and suggesting additional security measures. In fact, the list of suggested security improvements that the processor – or the processor's service provider – offered looks a lot like what is covered in a PCI self-assessment questionnaire: firewall rules, use of admin accounts, egress filtering, and so on. I thought this was pretty cool! But on the other side of the equation, all the credit card billing happens on the web site, without them ever collecting credit card numbers. Good idea? Overkill?

These precautions are absolutely overwhelming for most people, especially one-person shops like my friends operate. They have absolutely no idea what a TCP reset is, or why they failed the test for it. They have never heard of egress filtering. But they are looking into home office security measures just like large retail merchants. Part of me thinks they need to have this basic understanding if they are going to conduct commerce online. Another part of me thinks they are being set up for failure.

I spent about 40 minutes on the phone today giving one friend some guidance. My first piece of advice was to get a virtual environment set up and make sure he used it for banking and banking only. Then I focused on how to pass the audit. My goals in this conversation were:

  • Not to overwhelm him with technical jargon and concepts that he simply did not, and would not, understand.
  • Get him to pass the next audit with minimum effort on his part, and without having to buy any new hardware or software.
  • Call his ISP, bank, and payment processor and wring out of them any tools and assistance they could provide.
  • Turn on the basic Windows firewall and basic router security.

Honestly, the second item was the most important. Despite this person being really smart, I did not have any faith that he could set things up correctly – certainly not the first time, and perhaps not ever. So I, like many, just got him to where he could "check the box". I just advised someone to do the minimum to pass a pseudo-PCI audit. Sigh. I'll be performing penance for the rest of the week.


Tokenization Architecture: The Basics

Fundamentally, tokenization is fairly simple. You are merely substituting a marker of limited value for something of greater value. The token isn't completely valueless – it is important within its application environment – but that value is limited to the environment, or even a subset of that environment.

Think of a subway token or a gift card. You use cash to purchase the token or card, which then has value in the subway system or a retail outlet. That token (usually) has a one-to-one relationship with the cash used to purchase it, but it's only usable on that subway or in that retail outlet. It still has value; we've just restricted where it has value.

Tokenization in applications and databases does the same thing. We take a generally useful piece of data, like a credit card number or Social Security Number, and convert it to a local token that's useless outside the application environment designed to accept it. Someone might be able to use the token within your environment if they completely exploit your application, but they can't then use that token anywhere else. In practical terms, this not only significantly reduces risk, but also (potentially) the scope of any compliance requirements around the sensitive data.

Here's how it works in the most basic architecture:

  • Your application collects or generates a piece of sensitive data.
  • The data is immediately sent to the tokenization server – it is not stored locally.
  • The tokenization server generates the random (or semi-random) token. The sensitive value and the token are stored in a highly secured and restricted database (usually encrypted).
  • The tokenization server returns the token to your application.
  • The application stores the token, rather than the original value. The token is used for most transactions with the application.
  • When the sensitive value is needed, an authorized application or user can request it. The value is never stored in any local databases, and in most cases access is highly restricted. This dramatically limits potential exposure.

For this to work, you need to ensure a few things:

  • There is no way to reproduce the original data without the tokenization server. This is different from encryption, where you can use the key and the encryption algorithm to recover the value from anywhere.
  • All communications are encrypted.
  • The application never stores the sensitive value, only the token.
  • Ideally your application never even touches the original value – as we will discuss later, there are architectures and deployment options to split responsibilities; for example, having a non-user-accessible transaction system with access to the sensitive data, separate from the customer-facing side. You can have one system collect the data and send it to the tokenization server, another handle day-to-day customer interactions, and a third handle transactions where the real value is needed.
  • The tokenization server and database are highly secure. Modern implementations are far more complex and effective than a locked-down database with both values stored in a table.

In our next posts we will expand on this model to show the architectural options, and dig into the technology itself. We'll show you how tokens are generated, applications connected, and data stored securely; and how to make this work in complex distributed application environments. But in the end it all comes down to the basics – taking something of wide value and replacing it with a token of restricted value.
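To make the basic flow concrete, here is a minimal sketch in Python. It is purely illustrative – not any particular product's API – and names like TokenizationServer and detokenize are hypothetical. It shows the essential idea: the application hands the sensitive value to the tokenization service, which generates a random token with no mathematical relationship to the original, stores the pair in a restricted vault, and returns only the token for the application to keep.

```python
import secrets


class TokenizationServer:
    """Illustrative token vault mapping random tokens to sensitive values.

    A real deployment would use a hardened, encrypted, access-controlled
    database for the vault – not an in-memory dict – and would encrypt
    all communication with the application.
    """

    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, sensitive_value: str) -> str:
        # Generate a random token. Unlike encryption, there is no key or
        # algorithm that can recover the original value from the token.
        token = secrets.token_hex(16)
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str, authorized: bool = False) -> str:
        # Only a narrowly authorized caller (e.g., the back-end
        # transaction system) should ever recover the real value.
        if not authorized:
            raise PermissionError("caller not authorized to detokenize")
        return self._vault[token]


# The application stores and uses only the token.
server = TokenizationServer()
token = server.tokenize("4111-1111-1111-1111")
print(token)                                      # safe to keep locally
print(server.detokenize(token, authorized=True))  # restricted, audited path
```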
Understanding and Selecting a Tokenization Solution: Part 1, Introduction; Part 2, Business Justification.


Preliminary Results from the Data Security Survey

We've seen an absolutely tremendous response to the data security survey we launched last month. As I write this we are up to 1,154 responses, with over 70% of respondents completing the entire survey. Aside from the people who took the survey, we also received some great help building the survey in the first place (especially from the Security Metrics community). I'm really loving this entire open research thing.

We're going to close the survey soon, and the analysis will probably take me a couple weeks (especially since my statistics skills are pretty rudimentary). But since we have so much good data, rather than waiting until I can complete the full analysis I thought it would be nice to get some preliminary results out there.

First, the caveats. Here's what I mean by preliminary: these are raw results right out of SurveyMonkey. I have not performed any deeper analysis on them, such as validating responses, statistical analysis, or normalization. Later analysis will certainly change the results, so don't take these as anything more than an early peek. Got it? I know this data is dirty, but it's still interesting enough that I feel comfortable putting it out there. And now to some of the results:

Demographics

We had a pretty even spread of organization sizes (response counts in parentheses):

  • Number of employees/users – Less than 100: 20.3% (232); 101-1,000: 23.0% (263); 1,001-10,000: 26.4% (302); 10,001-50,000: 17.2% (197); More than 50,000: 13.2% (151); 1,145 responses.
  • Number of managed desktops – 25.8% (287), 26.9% (299), 16.4% (183), 10.2% (114), …; 1,113 responses.

36% of respondents have 1-5 IT staff dedicated to data security, while 30% don't have anyone assigned to the job (this is about what I expected, based on my client interactions). The top verticals represented were retail and commercial financial services, government, and technology. 54% of respondents identified themselves as security management or professionals, with 44% identifying themselves as general IT management or practitioners. 53% of respondents need to comply with PCI, 48% with HIPAA/HITECH, and 38% with breach notification laws (which seems low to me). Overall it is a pretty broad spread of responses, and I'm looking forward to digging in and slicing some of these answers by vertical and organization size.

Incidents

Before digging in, a major design flaw in the survey: I didn't allow people to select "none" as an option for the number of incidents, so "none" and "don't know" are combined together, based on the comments people left on the questions. Considering how many people reviewed this before we opened it, this shows how easy it is to miss something obvious.

  • On average, across major and minor breaches and accidental disclosures, only 20-30% of respondents were aware of breaches.
  • External breaches were only slightly higher than internal breaches, with accidental disclosures at the top of the list. The numbers are so close that they will likely be within the margin of error after I clean them. This is true for both major and minor breaches.
  • Accidental disclosures were more likely to be reported for regulated data and PII than for IP loss.
  • 54% of respondents reported "about the same" number of breaches year over year, but 14% reported "a few less" and 18% "many less"! I can't wait to cross-tabulate that with specific security controls.

Security Control Effectiveness

This is the meat of the survey. We asked about effectiveness for reducing the number of breaches, the severity of breaches, and the costs of compliance.
  • The most commonly deployed tools (of the ones we surveyed) are email filtering, access management, network segregation, and server/endpoint hardening. Of the data-security-specific technologies, web application firewalls, database activity monitoring, full drive encryption, backup tape encryption, and database encryption are most commonly deployed. The most common write-in security control was user awareness.
  • The top 5 security controls for reducing the number of data breaches were DLP, Enterprise DRM, email filtering, a content discovery process, and entitlement management. I combined the three DLP options (network, endpoint, and storage) since all made the cut, although storage was at the bottom of the list by a large margin. EDRM rated highly, but was the least used technology.
  • For reducing compliance costs, the top 5 rated security controls were Enterprise DRM, DLP, entitlement management, data masking, and a content discovery process.

What's really interesting is that when we asked people to stack rank their top 3 most effective overall data security controls, the results don't match our per-control questions. The list then becomes:

  • Access management
  • Server/endpoint hardening
  • Email filtering

My initial analysis is that in the first questions we focused on a set of data security controls that aren't necessarily widely used and compared between them. In the top-3 question, participants were allowed to select any control on the list, and the mere act of limiting themselves to the ones they deployed skewed the results. Can't wait to do the filtering on this one.

We also asked people to rank their single least effective data security control. The top (well, bottom) 3 were:

  • Email filtering
  • USB/portable media encryption or device control
  • Content discovery process

Again, these correlate with what is most commonly being used, so no surprise. That's why these are preliminary results – there is a lot of filtering/correlation I need to do.

Security Control Deployment

Aside from the most commonly deployed controls we mentioned above, we also asked why people deployed different tools/processes. Answers ranged from compliance, to breach response, to improving security, and reducing costs.

  • No control was primarily deployed to reduce costs. The closest was email filtering, at 8.5% of responses.
  • The top 5 controls most often reported as being implemented due to a direct compliance requirement were server/endpoint hardening, access management, full drive encryption, network segregation, and backup tape encryption.
  • The top 5 controls most often reported as implemented due to an audit deficiency are access management, database activity monitoring, data masking, full drive encryption, and server/endpoint hardening.
  • The top 5 controls implemented for cost savings were reported as email filtering, server/endpoint hardening, access management, DLP, and


Taking the High Road

This is off topic, but I need to vent a bit. I've followed the LeBron James free-agency saga with amusement. Thankfully I was in the air last night during the "Decision" TV special, so I didn't have any temptation to participate in the narcissistic end of a self-centered two weeks. LeBron and his advisors did a masterful job of playing the media, making them believe anything was possible, and then doing the smartest thing and heading to Miami to join the Heat.

First off, I applaud all three All-Stars, who made economic sacrifices to give themselves a chance to win. They all understand that a ball-player's legacy is not about how much money they made, but how many championships they won. Of course, the economic sacrifices are different for them – you know, whether to settle for $2 or $3 million less each year. Over 6 years that is big money, but they want to win and win now. But that's not what I want to talk about. I want to talk about how the Cavaliers' owner, Dan Gilbert, responded to the situation. It makes you think about the high road versus the low road.

Clearly Gilbert took the low road, basically acting like a spoiled child whose parents said they couldn't upgrade to the iPhone 4. He had a tantrum – calling LeBron names and accusing him of giving up during the playoffs. The folks at the Bleacher Report hit it right on the head. I understand this guy's feelings are hurt. LeBron (and his advisors) played him like a fiddle. They gave him hope that LeBron would stay, even though on the surface it would be a terrible decision – if the goal is to win championships. Over the past 8 years, LeBron doubled the net worth of the Cavs franchise, and that is the thanks he gets from the owner. Can you see Bob Kraft of the Patriots having a similar tantrum? Or any of the top owners in the sport?

Yes, Dan Gilbert really reflected the mood of his town. His frustration at losing the LeBron-stakes aligns with the prospect of losing a lot more in years to come. But as an owner, as the face of your franchise, you have to take the high road. You get a Cleveland sports columnist to write the hit piece making all those speculations. But you (and the rest of the franchise) need to act with class. Have the PR folks write a short statement thanking your departing star for 8 great years of sell-outs, wishing him the best of luck, and saying you look forward to seeing them in the Eastern Conference finals. Most of all, you take a step back and you don't say anything. That's what I try to tell the kids when they are upset, and try to practice myself (failing most of the time, by the way). Don't say anything, because you'll only make it worse and say something you'll regret.

I'm sure folks in Cleveland are happy with Dan Gilbert's outburst, but the rest of the country sees a total ass having a tantrum in public. And overnight he made LeBron into a sympathetic figure – which is probably what LeBron and his advisors wanted the entire time. Damn, that's one smart power forward, probably enjoying the view from the high road.


Friday Summary: July 9, 2010

Today is the deadline for RSA speaker submissions, so the entire team was scrambling to get our presentation topics submitted before the server-crash-inducing late rush. One of the things that struck me about the submission suggestions is that general topics are discouraged. RSA notes in the submission guidelines that 60% of the attendees have 10 or more years of security experience. I think the idea is that if your audience is more advanced, introductory or general-audience presentations don't hold their attention, so intermediate and advanced sessions are encouraged. And I bet they are right about that, given the success of other venues like BlackHat, Defcon, and Security B-Sides.

Still, I wonder if that is the right course of action. Has security become a private club? Are we so caught up in the security 'echo chamber' that we forget about the mid-market folks without the luxury of full-time security experts on staff? Perhaps security just is not very interesting without some novel new hack. Regardless, it seems like it's the same group of us, year after year, talking about the same set of issues and problems. From my perspective software developers are the weakest link in the security chain. Most coders don't have 10 years of security experience. Heck, they don't have two! Only a handful of people I know have been involved in secure code development practices for 10 years or more. But developers coming up to speed with security is one of the biggest wins, and advanced security topics may not be accessible to them. The balancing act is between cutting-edge security discussions that keep researchers up to date, and educating the people who can benefit most.

I was thinking about this during our offsite this week while Rich and Mike talked about having their kids trained in martial arts when they are old enough. They were talking about how they want the kids to be able to protect themselves when necessary. They were discussing likely scenarios and what art forms they felt would be most useful for, well, not getting their asses kicked. And they also want the kids to derive many of the same secondary benefits of respect, commitment, confidence, and attention to detail many of us unwittingly gained when our parents sent us to martial arts classes. As the two were talking about their kids' basic introduction to personal security, it dawned on me that this is really the same issue for developers. Not to be condescending and equate coders to children, but what was bugging me was the focus on the leaders in the security space at the expense of opening up the profession to a wider audience. Basic education on security skills doesn't just build up a specific area of knowledge every developer needs – the entire approach to secure code development makes for better programmers. It reinforces, in a meaningful way, the basic development processes we are taught to go through.

I am not entirely sure what the 'right' answer is, but RSA is the biggest security conference, and application developers seem to be a very large potential audience that would greatly benefit from basic exposure to general issues and techniques. Secure code development practices and tools are, and hopefully will remain for the foreseeable future, a major category for growth in security awareness and training. Knowledge of these tools, processes, and techniques makes for better code.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich and Adrian in Essentials Guide to Data Protection.
  • Rich on Top Three Steps to Simplify DLP without Compromise.
  • Rich quoted at Dark Reading on University Data Breaches.

Favorite Securosis Posts

  • Adrian Lane: School's out for Summer.
  • Mike Rothman: Incite 7/7/2010: The Mailbox Vigil. Normally I don't toot my own horn, but this was a good deal of analysis. Fairly balanced and sufficiently snarky…
  • David Mortman: School's out for Summer.
  • Rich: Understanding and Selecting SIEM/LM: Selection Process.

Other Securosis Posts

  • Uh, not so much.

Favorite Outside Posts

  • Adrian Lane: Atlanta Has Dubious Honor of Highest Malware Infection Rate. This was probably not meant to be humorous, but the map of giant bugs just cracked me up. Does this help anyone?
  • Rich: Top Apps Largely Forgo Windows Security Protections. There is more to vulnerability than the operating system. We can only hope these apps get on board with tactics that will make them (and us) harder to pwn.
  • Mike Rothman: RiskIT – Does ISACA Suffer from Dunning-Kruger? Hutton is at it again, poking at another silly risk management certification. I'm looking forward to my "Apparently OK" Risk Certificate arriving any day now.
  • Pepper: HSBC mailing activated debit cards. And to make it better, they didn't agree that this is a serious problem.
  • David Mortman: The New Distribution of The 3-Tiered Architecture Changes Everything.

Project Quant Posts

  • DB Quant: Protect Metrics, Part 2, Patch Management.
  • DB Quant: Manage Metrics, Part 1, Configuration Management.
  • DB Quant: Protection Metrics, Part 4, Web Application Firewalls.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Regional Trojan Threat Targeting Online Banks.
  • Is Breaking a CAPTCHA a crime?
  • A little more on Cyberwar.

Blog Comment of the Week

This week's winner is… no one. We had a strong case of 'blog post fail' so I guess we cannot expect comments.


Top 3 Steps to Simplify DLP without Compromise

Just when I thought I was done talking about DLP, interest starts to increase again. Below is an article I wrote on how to minimize the complexity of a DLP deployment. It was for the Websense customer newsletter/site, but it reflects my usual independent perspective.

One of the most common obstacles to a DLP deployment is psychological, not technical. With massive amounts of content and data streaming throughout the enterprise in support of countless business processes, the idea that we can somehow wrangle this information in any meaningful way, with minimal disruption to business processes, is daunting if not nigh on inconceivable. This idea is especially reinforced among security professionals still smarting from the pain of deploying and managing the constant false positives and tuning requirements of intrusion detection systems.

Since I started to cover DLP technologies about 7 years or so ago, I've talked with hundreds of people who have evaluated and/or deployed data loss prevention. Over the course of those conversations I've learned what tends to work, what doesn't, and how to reduce the potential complexity of DLP deployments. Once you break the process down, it turns out that DLP isn't nearly as difficult to manage as some other security technologies, and even very large organizations are able to rapidly reap the benefits of DLP without creating time-consuming management nightmares. The trick, as you'll see, is to treat your DLP deployment as an incremental process. It's like eating an elephant – you merely have to take it one bite at a time. Here are my top 3 tips, drawn from those hundreds of conversations:

1. Narrow your scope: One of the most common problems with an initial DLP deployment is trying to start on too wide a scale. Your scope of deployment is defined by two primary factors – how many DLP components you deploy, and how many systems/employees you monitor. A full-suite DLP solution is capable of monitoring network traffic, integrating with email, scanning stored data, and monitoring endpoints. When looking at your initial scope, pick only one of the components to start with. I usually recommend starting with anything other than endpoints, since you then have fewer components to manage. Most organizations tend to start on the network (usually with email) since it's easy to deploy in a passive mode, but I do see some companies now starting with scanning stored data due to regulatory requirements. In either case, stick with one component as you develop your initial policies, and then narrow the scope to a subset of your network or storage. If you are in a mid-sized organization you might not need to narrow too much, but in large organizations you should pick a subnet or single egress point rather than thinking you have to watch everything. Why narrow the scope? Because in our next step we're going to deploy our policies, and starting with a single component and a limited subset of all your traffic/systems provides the information you need to tune policies without being overwhelmed with incidents you feel compelled to manage.

2. Start with one policy: Once you've defined your initial scope, it's time to deploy a policy. And yes, I mean a policy, not many policies. The policy should be narrow and align with your data protection priorities; e.g. credit card number detection, or a subset of sensitive engineering plans for partial document matching. You aren't trying to define a perfect policy out of the box; that's why we are keeping our scope narrow. Once you have the policy ready, go ahead and launch it in monitoring mode. Over the course of the next few days you should get a good sense of how well the policy works and how you need to tune it. Many of you are likely looking for similar kinds of information, like credit card numbers, in which case the out-of-the-box policies included in your DLP product may be sufficient with little to no tuning.

3. Take the next bite: Once you are comfortable with the results you are seeing, it's time to expand your deployment scope. Most successful organizations start by expanding the scope of coverage (systems scanned or network traffic), and then add DLP components to the policy (storage, endpoint, other network channels). Then it's time to start the process over with the next policy.

This iterative approach doesn't necessarily take very long, especially if you leverage out-of-the-box policies. Unlike something like IDS, you gain immediate benefits without having to cover all traffic throughout your entire organization. You get to tune your policies without being overwhelmed, while managing real incidents or exposure risks.
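As a rough illustration of how narrow that first policy can be, here is a sketch in Python of the kind of detection logic an out-of-the-box credit card policy typically wraps: a pattern match for candidate numbers plus a Luhn checksum to cut false positives, run in report-only (monitoring) mode. This is not any vendor's policy language – just an assumed example of the concept.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces/dashes.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def luhn_ok(digits):
    """Luhn checksum, used to weed out random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def monitor(text):
    """Report-only 'policy': return suspected card numbers, block nothing."""
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits


# Flags the test card number; the non-Luhn reference number is ignored.
print(monitor("Order notes: card 4111 1111 1111 1111, ref 1234567890123"))
```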


School’s out for Summer

I saw an interesting post on InformationWeek about protecting your network and systems from the influx of summer workers. The same logic goes for the December holidays, when additional help is needed to stock shelves, pack boxes, and sell things. These temporary folks can do damage – more because they have no idea what they can/should do than thanks to any malicious intent.

I'm not a big fan of some of the recommendations in the post, like not providing Internet access. Common sense needs to rule the day, right? Someone in a warehouse doesn't need corporate Internet access. But someone working in the call center might. It depends on job function. But in reality, you don't need to treat temporary workers any differently than full-time folks. You just need to actually do the stuff that you should be doing anyway. Here are a couple of examples:

  • Training: Yes, it seems a bit silly to spend a few hours training temporary folks when they will leave in a month or two. On the other hand, it seems silly to have these folks do stupid things and then burn up your summer cleaning up after them.
  • Lock down machines: You have more flexibility to lock down devices for temporary workers, so do that. Whether it's a full lockdown (using application whitelisting) or a lighter application set (using the application control stuff in the endpoint suite), either reduces the likelihood of your users doing something stupid, and the damage if they do.
  • Segment the network: If possible (and it should be), it may make sense to put these users on a separate network, again depending on their job functions. If they need Internet access, maybe give them a VPN pipe directly to the outside and restrict access to internal networks and devices.
  • Monitor everything: Yes, you need to stay on your toes. Make sure you are looking for anomalous behavior and focused on reacting faster. We say that a lot, eh?

So again, workers come and go, but your security strategy should cover the different scenarios. You can make some minor changes to factor in temporary work, but these folks cannot get a free pass, and you need constant vigilance. Same old, same old.


Incite 7/7/2010: The Mailbox Vigil

The postman (or postwoman) doesn't really get any love. Not any more. In the good old days, we'd always look forward to what goodies the little white box truck, with the steering wheel on the wrong side, would bring. Maybe it was a birthday card (with a check from Grandma). Or possibly a cool catalog. Or maybe even a letter from a friend. Nowadays the only thing that comes in the mail for me is bills. Business checks go to Phoenix. The magazines to which I still stupidly subscribe aren't very exciting. I've probably read the interesting articles on the Internet already. The mail is yet another casualty of that killjoy Internet thing.

But not during the summer. You see, we've sent XX1 (that's our oldest daughter) off to sleepaway camp for a month. It's her first year and she went to a camp in Pennsylvania, not knowing a soul. I'm amazed by her bravery in going away from home for the first time to a place she's never been (besides a two-hour visit last summer), without any friends. It's not bravery like walking into a bunker filled with armed adversaries or a burning house to save the cat, but her fearlessness amazes us. I couldn't have done that at 9, so we are very proud.

But it's also cold turkey from a communication standpoint. No phone calls, no emails, no texts. We know she's alive because they post pictures. We know she is happy because she has a grin from ear to ear in every picture. So that is comforting, but after 9 years of incessant chatter, it's a bit unsettling to not hear anything. The sound of silence is deafening. At least until the twins get up, that is. We can send her letters, and the camp has this online system where we log into a portal and type in a message, and they print it out and give it to her each day. Old school, but convenient for us. That system is only one way. The only way we receive communication from her is through the mail. Which brings us back to our friend the postman.

Now the Boss rushes to the mailbox every day to see whether XX1 has sent us a letter. Most days we get nada. But we have gotten two postcards thus far (she's been gone about 10 days), each in some kind of hieroglyphics not giving us much information at all. And we even gave her those letter templates that ask specific questions, like "What did you do today?" and "What is your favorite part of camp?" As frustrating as it is to get sparse communication, I know (from my camp experience) that it's a good sign. The kids who write Tolstoyan letters home are usually homesick or having a terrible time. So I can be pragmatic about it and know that in another 3 weeks the chatter will start again and I'll get to hear all the camp stories… 100 times.

But the Boss will continue her mailbox vigil each day, hoping to get a glimpse of what our daughter is doing, how she's feeling, and the great time she's having. And I don't say a word, because that's what Moms do.

– Mike
Photo credits: "happy to receive mail" originally uploaded by Loving Earth

Recent Securosis Posts

  • Know Your Adversary
  • IBM gets a BigFix for Tivoli Endpoint Management
  • Tokenization: The Business Justification
  • Understanding and Selecting SIEM/LM: Advanced Features
  • Understanding and Selecting SIEM/LM: Integration
  • Understanding and Selecting SIEM/LM: Selection Process
  • Friday Summary: July 1, 2010

Incite 4 U

The ethics of malware creation – The folks at NSS Labs kind of started a crap storm when they dropped out of AMTSO (the Anti-Malware Testing Standards Organization) and started publishing their results, which were not flattering to some AMTSO members. Then the debate migrated over to the ethics of creating malware for testing purposes. Ed at SecurityCurve does a good job of summarizing a lot of the discussion. To be clear, it's a slippery slope, and I can definitely see both sides of the discussion, especially within the context of the similar ethical quandary around developing new diseases. I come down on the side of doing whatever I can to really test my defenses, and that may mean coming up with interesting attacks. Obviously you don't publish them in the wild and the payload needs to be inert, but to think that the bad guys aren't going to figure it out eventually is naive. Unfortunately we can't depend on everyone to act responsibly when they find something, so we have to assume that however the malware originated, it will become public and weaponized. And that means we get back to basics. Right – react faster/better and contain the damage. – MR

Mission to (Replace) MARS – When Cisco announced last year that they weren't supporting third party network and security devices on their MARS analysis platform, it was a clear indication that the product wasn't long for this world. Of course, that started a feeding frenzy in the SIEM/Log Management world, with all 25 vendors vying to get into Cisco's good graces and become a preferred migration path, whatever that means. Finally Cisco has announced who won the beauty contest by certifying 5 vendors who did some kind of interoperability testing, including ArcSight, RSA, LogLogic, NetForensics, and Splunk. Is this anything substantial? Probably not. But it does give sales reps something to talk about. And in a pretty undifferentiated market fighting for displacements, that isn't a bad thing. – MR

More goodies for your pen testing bag – Yes, we are big fans of hacking yourself, and that usually requires tools – open source, commercial, or hybrid, it doesn't matter. Sophisticated folks leverage memory analysis, reverse engineering apps, and/or application scanners. The good news is there is no lack of new tools showing up to make the job of the pen tester easier. First hat tip goes to Darknet, who points


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.