Lessons from LifeLock’s Lucky 13

Much of the buzz around the security industry this week revolved around Wired’s story about LifeLock’s CEO getting his identity stolen not once (which we knew about), but an additional 12 times. Guess 13 is not Todd Davis’ lucky number. Obviously the media blitz posting his Social Security number on buses, TV, and other mass media made him target #1. And the reality is no identity protection network is going to be foolproof, for a pretty simple reason: the companies issuing credit don’t always check for fraud alerts, so a fraud alert may not be triggered when a new account is opened. Even if you are religiously monitoring your credit, you are blind until the fraudulent account shows up where you can see it. But what’s troubling to me is the guy didn’t know about the issues until a collection agency came after him.

I’m concerned for several reasons, and the blame can be directed everywhere. First to LifeLock: how do you not see 12 new accounts? Hard to believe that none of the accounts showed up on Davis’ credit history. If not, what is the point of their identity protection service again? Also note that none of the 13 transactions were for big numbers. A couple hundred here, a couple hundred there. That’s been my personal experience as well. The fraudsters don’t try to milk personal accounts of thousands at a time because that will set off alarms. They don’t want to be discovered until they are long gone.

More disturbing is how the merchants handle most of these situations. In the crazy search for growth at any cost, they cut corners. It’s as simple as that. They don’t check credit ahead of time (or they would have seen the fraud lock). They don’t report new credit accounts to the bureaus (which would have triggered a credit monitoring alert). And they don’t verify addresses when sending bills (which would have shown an inconsistency on the original application). Amazingly enough, a collection agent finds the guy within an hour, but the companies couldn’t do that over a year. I guess I shouldn’t be surprised, since these big companies just build a ‘shrinkage’ number into their models. They figure a certain percentage of their customers will not pay, either for legitimate or fraudulent reasons. And I guess that’s cheaper than setting up the right processes to prevent a portion of that fraud. Ultimately it’s just economics, but it’s still very disturbing. But if I allowed myself to get into a funk every time a big company did something stupid and harmful, I’d be even grumpier than I already am. So I need to let that go. Still, there are things we can and should do to minimize the damage of identity theft.

• (Try to) Prevent it: OK, you can’t really prevent it. But you can act proactively to minimize your attack surface. That means setting up your own fraud alerts (since the credit bureaus and their lobbyists succeeded in killing the ability for a service to do this for you) and using a credit monitoring service (I use Debix, but there are lots out there).
• Accept it: Understand that it will happen and there is likely nothing you can do. Getting upset won’t help. You need to be focused and contain the damage.
• Contain it: As we always say, you need an incident response plan for your business in the event of a breach, but you need a personal incident response plan as well. Who do you call? What steps do you take? Those should be documented and in a place you can get to quickly. You need to act fast, and having a documented process reduces emotion and lets you make the decisions when you’re clear-headed and not rushing.
• Confirm it: The credit bureaus are a hassle to deal with, but you have to stay on top of them to make sure your credit rating is properly cleaned. The three you need to worry about are Experian, Equifax, and TransUnion. That means checking your credit rating on an ongoing basis and keeping all documentation on the fraudulent use of your accounts.

Finally, don’t post personal information on the side of a bus. We know how that turns out.


Oracle Buys Secerno

This morning Oracle announced that it has entered into an agreement to acquire Secerno, the UK-based Database Activity Monitoring firm. Oracle posted a FAQ on the acquisition with some generic data points. Terms of the deal have not been disclosed and, knowing Oracle, won’t be. Many of us in the security industry are chuckling at this purchase, as Oracle – at least to customers – has been disparaging Database Activity Monitoring technologies as a whole and pushing Audit Vault as an equivalent solution. But when your database is Unbreakable™, maybe you don’t need a database firewall, eh? Seriously, DAM has been a hole in their security offerings for years, and after much blustering to the contrary, they have finally plugged the hole. And from the synergies of the platforms, I’d say they did a pretty good job of it.

Key Points about the Acquisition

Here are the most important top-level points:
• The deal is clearly about the security alerting and blocking features of Secerno. Oracle calls it a “Database Firewall”, and never says Database Activity Monitoring. Oracle sees Audit Vault as their DAM equivalent, and has heavily disparaged that market and the techniques used by DAM vendors.
• Customers really struggle with Oracle patching, which makes it very difficult to keep systems compliant and secure. Positioning Secerno as a stopgap to protect the database from particular exploits, so you have time to patch, is reasonable and appropriate.
• It’s also a good straight-up security play. Secerno was always stronger on security than activity monitoring for compliance, which makes it more complementary to the existing Oracle product line and security messaging.
• Oracle may include this in Oracle Advanced Security, or keep it standalone. We’ll have to see, but based on the current physical architecture I’d bet on standalone for at least a few years.
• In terms of messaging, expect Audit Vault to remain the focus for building audit trails, with Secerno positioned for real-time alerting and blocking. Expect to see Oracle market “Database Firewall” with “Zero False Positives”, but those claims overlook the real-world difficulties in building and maintaining query rules.

Let’s delve deeper into the specifics.

What the Acquisition Does for Oracle

• Fills big technology gaps: Secerno provides Oracle a lot of security technology they did not have. Secerno includes real-time analysis not available from current Oracle products, which is a growing requirement – especially for customer-facing web applications. It also gives Oracle a security tool that offers genuine heterogeneous database support for Oracle, Microsoft, and Sybase (IBM support is in beta). Oracle hates to admit it, but nearly all of their enterprise clients have several different databases in use, and customers want a common platform for security or compliance when possible. Secerno provides blocking capabilities – importantly, before queries reach the database – to reduce DB load and risk. Secerno also has a much better UI than Oracle Audit Vault, and hopefully Oracle will continue to use it rather than standardize on their own weaker UI.
• Prevention: Privately we have been calling Secerno a Query White Listing technology, as we think that better encompasses what they provide. “Database Firewall” is one of those throw-away marketing terms used by several DAM vendors, but fails to differentiate what Secerno provides. Yes, Secerno will block queries, and will do so before they get to the database, reducing processing and filtering load on the database engine.
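Secerno’s implementation is its own, but the query white-listing idea is simple to illustrate. Below is a minimal sketch, purely our illustration with hypothetical names (not Secerno’s or Oracle’s code): reduce each incoming query to a structural fingerprint, learn the fingerprints seen during a baseline period, and block anything that doesn’t match before it ever reaches the database.

```python
import re

class QueryWhitelist:
    """Toy illustration of query white listing: queries are reduced to a
    structural fingerprint, and only fingerprints learned during a baseline
    period are allowed through to the database."""

    def __init__(self):
        self.allowed = set()

    @staticmethod
    def fingerprint(query: str) -> str:
        # Strip literals so "WHERE id = 5" and "WHERE id = 9" share the same shape.
        q = query.strip().lower()
        q = re.sub(r"'[^']*'", "?", q)   # string literals
        q = re.sub(r"\b\d+\b", "?", q)   # numeric literals
        q = re.sub(r"\s+", " ", q)       # collapse whitespace
        return q

    def learn(self, query: str) -> None:
        self.allowed.add(self.fingerprint(query))

    def check(self, query: str) -> bool:
        # True = pass to the database; False = block/alert before it gets there.
        return self.fingerprint(query) in self.allowed


wl = QueryWhitelist()
wl.learn("SELECT name FROM customers WHERE id = 42")

print(wl.check("SELECT name FROM customers WHERE id = 7"))            # True: known shape
print(wl.check("SELECT name FROM customers WHERE id = 7 OR 1=1 --"))  # False: new shape, blocked
```

Real products obviously go well beyond literal fingerprints – parsing the SQL grammar, tracking application context, handling rule maintenance – but “allow known query structures, block the rest, before the database sees them” is the core of the model.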
I’ll get into technology details later in this post, but Oracle now has a viable way to block many unwanted queries.
• Web Applications: Like it or not, web applications are a huge part of the Oracle database business, and auditing is totally inappropriate for securing web applications from things like SQL injection. This helps address Oracle’s repeated issues with patching and playing catch-up with vulnerabilities, finally helping prevent some attacks without totally disrupting business operations for database updates that applications don’t support.
• Circumvents a perception problem: Oracle Audit still has a serious perception problem, and correctly or not is considered a performance and operations burden. On paper, Oracle’s native audit trail can provide many of the same functions as other DAM and auditing tools, but in practice Oracle Audit pales in comparison to the competition – or even Audit Vault. This helps Oracle escape a serious perception problem that has hampered compliance and security adoption.

What This Means to the DAM Market

• Validation: Let’s face it – when Oracle and IBM both make investments in Database Activity Monitoring, we are past wondering when DAM will be considered viable technology. Even though Oracle isn’t positioning this as DAM, Secerno did, and this serves as high-profile validation of the market.
• Business to be won: There were many unhappy IPLocks customers who Fortinet was unable to bring into the fold with their upgraded offerings. Some of Guardium’s business has been at risk for a while, and some of their resellers started looking for other relationships after the IBM purchase. Oracle’s customers have looked at – and in many cases purchased – other security products to close the gaps. Imperva still needs to do a better job of converting WAF customers to DB Security customers, and Application Security still needs to do a better job of holding onto the customers they already have. All this shows that the leader of this segment has yet to be determined, and there is a lot of potential business.
• One less vendor: Tizor went to Netezza. IPLocks went to Fortinet. Guardium went to IBM. Now Secerno to Oracle. That leaves Application Security and Imperva as the major database security providers out there, with Sentrigo the best of the smaller niche players in the market. EMC needs this technology next, perhaps followed by Symantec or McAfee, but the price of entry just increased.
• Investors: Secerno’s investors, Amadeus Capital Partners, must be happy. They did a logical reset and re-investment back in early 2008, a decision that was clearly the right one. They also had considerably less initial investment than the competitors in this space. While we do not


Australian Border Security Insanity

Australia is my second-favorite place on the planet to visit (New Zealand is first). But it’s a darn good thing I’m not a porn fiend, since they now require you to declare porn at the border, and, well, here’s a quote: Australian customs officers have been given new powers to search incoming travellers’ laptops and mobile phones for pornography, a spokeswoman for the Australian sex industry says. … Fiona Patten, president of the Australian Sex Party, is demanding an inquiry into why a new question appears on Incoming Passenger Cards asking people if they are carrying “pornography”. They are also working on a big Internet filter. You know, kind of like China and many Middle East countries. Gotta love democracy. (Thanks to Slashdot for the pointer).


Privacy Is (Still) Personal

I want to respond to something Adam wrote about Facebook over at Emergent Chaos, but first I’m going to excerpt my own article from TidBITS: Privacy is Personal – In the Information Age, determining what you want others to know about you isn’t always a simple decision. Aside from the potential tradeoffs of avoiding particular features or services, we all have different thresholds for what we are comfortable sharing. It’s also extremely difficult to control our information even when we do make informed decisions, and often impossible to eradicate information that escaped our control before we realized the rules of the game had changed. For example, I use both Amazon and Netflix, even though those services also collect personal information like my buying and viewing habits. I am trading my data (and money) for a combination of convenience and personalization. I’m less concerned with these services than Facebook since their privacy practices and policies are clearer, my information is compartmentalized within each service, and they have much more consistent and stable records. On the other hand I have minimized my usage of Google services due to privacy concerns. Google’s reach is incredibly expansive, and despite their addition of Google Dashboard to help show some of what they record, and much clearer policies than Facebook, I’m generally uncomfortable with any single company or government having that much potential information on me. I fully understand this is a somewhat emotional response. Facebook is building a similar Internet-wide ecosystem as they expand connections to external Web sites and services. In exchange for allowing them access to your information and activities, Facebook enables new kinds of services and personalization. The question each of us must answer is if those new services and personalization options are worth the privacy tradeoff. Deciding where to draw your own privacy lines is a very personal, complex, and even sometimes arbitrary decision. I trust Amazon and Netflix to a certain extent based on their privacy policies, even though they sometimes make mistakes (I didn’t use Amazon for years after a policy change that they later reversed). Yet I’ve limited my usage of both Google and Facebook due to general concerns (Google) or outright distrust (Facebook). Facebook, to me, is a tool to keep me connected to friends and family I don’t interact with on a daily basis. I restrict what information it has on me, and always assume anything I do on Facebook could be public. I’m willing to trade a little privacy for the convenience of being able to stay connected with an expanded social circle. I manage Facebook privacy by not using it for anything that’s actually private. Adam has a lot in his article, and I think his criticisms of my original post come down to: Your perceptions of your own privacy change within different contexts and over time, so what you are okay with today may not be acceptable tomorrow. If you only use the service to post things you’d want public anyway, why use it at all? I completely agree with Adam’s first point – what you share when you are 19 years old at college is very different than what you might want people to know about you once you are 35. Even things you might share at 35 as a member of the workforce might come back to haunt you when you are 55 and running for political office. But I disagree that this means your only option is to completely opt out of all centralized social media services. 
I believe we as a society are reaching the point where some degree of social networking is the norm. Even “private” communications like email, IM, and SMS are open to potential disclosure and subsequent inclusion in public search results. The same used to be true of the written and spoken word, but clearly the scale and scope are dramatically larger in the Information Age. We are losing the insular layers that created our current social norms of privacy – which already vary around the world. The last time society needed to adapt to such changes in privacy was with the Industrial Age and the movement from rural to urban society. Before that, it was probably the change from hunter/gatherers to an agrarian society. I see three possible scenarios that could develop:
• Society adopts a combination of laws and social mores to better protect privacy. It will be expected that you own your own data, and in the future retain a right to edit your past. Essentially, we work to protect our current expectations of privacy – which will require active effort, as the terrain has already shifted under us, and will continue to do so.
• Social expectations change. You’ll be able to run for political office and no one will care that you called some chick or dude hot and joined the “I love some stupid emo vampire” movement. We gain better abilities to protect our privacy, but at the same time society becomes more accepting of greater personal information being public – partially through sheer boredom at the inanity and popularity of our embarrassing peccadilloes.
• There is no privacy.
We have many years before these issues resolve, if ever, and it’s going to be a rough road no matter where we are headed. The end result probably won’t match any of my scenarios, but will instead be some mish-mash of those options and others I haven’t thought of. My rough guess is that society will slowly become more accepting of youthful indiscretions (or we won’t have anyone to hire or elect), but we will also gain more control over our personal information. Privacy isn’t dead, but it is definitely changing. We all need to make personal decisions about the level of risk we are willing to accept in the midst of changing social norms, government/business influence, and degrees of control.


Quick Wins with DLP Webcast Next Week

Next week I will be giving a webcast to complement my Quick Wins with Data Loss Prevention paper. This is a bit different from how I usually talk about DLP – it’s focused on showing immediate value, while also positioning for long-term success. Like the paper, it’s sponsored by McAfee. We’re holding it at 11am PT on May 25, and you can register by clicking here. Here’s the full description:

Quick Wins with DLP – How to Make DLP Work for You
Date: May 25, 2010
Time: 11am PDT / 2pm EDT

When used properly, Data Loss Prevention (DLP) provides rapid identification and assessment of data security issues not available with any other technology. However, when not optimized, two common criticisms of DLP are 1) its complexity and 2) the fear of false positives. Security professionals often worry that DLP is expensive and will fail to deliver the expected value. A little knowledge and some planning go a long way towards a fast, simple, and effective deployment. By taking some straightforward best practice steps, you can realize significant immediate value and security gains without negatively impacting your productivity or wasting valuable resources. In this webcast you will learn how to:
• Establish a flexible incident management process
• Integrate with major infrastructure components
• Assess broad information usage
• Set a foundation for future focused efforts and policy tuning
You will also hear how Continuum Health Partners safeguards highly sensitive patient data with McAfee DLP 9. Join us for this informative presentation.

Presenters:
• Rich Mogull, Analyst & CEO, Securosis, LLC
• Mark Moroses, Assistant CIO, Continuum Health Partners
• John Dasher, Senior Director, Data Protection, McAfee


Friday Summary: May 21, 2010

For a while now I’ve been lamenting the decline in security blogging. In talking with other friends/associates, I learned I wasn’t the only one. So I finally got off my rear and put together a post in an effort to try kickstarting the community. I don’t know if the momentum will last, but it seems to have gotten a few people back on the wagon. Alan Shimel reports he’s had about a dozen new people join the Security Bloggers Network since my post (although in that post he only lists the first three, since it’s a couple days old). We’ve also had some old friends jump back into the fray, such as Andy the IT Guy, DanO, LoverVamp, and Martin. One issue Alan and I talked about on the phone this week is that since Technorati dropped the feature, there’s no good source to see everyone who is linking to you. The old pingbacks system seems broken. If anyone knows of a good site/service, please let us know. Alan and I are also exploring getting something built to better interconnect the SBN. It’s hard to have a good blog war when you have to Tweet at your opponent so they know they’re under attack. Another issue was highlighted by Ben Tomhave. A lot of people are burnt out, whether due to the economy, their day jobs, or general malaise and disenchantment with the industry. I can’t argue too much with his point, since he’s not the only semi-depressed person in our profession. But depression is a snowballing disorder, and maybe if we can bring back some energy people will get motivated again. Anyway, I’m psyched to see the community gearing back up. I won’t take it for granted, and who knows if it will last, but I for one really hope we can set the clock back and party like it’s 2007. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Rich will be on NPR’s Science Friday today! Talking about Facebook and privacy. It’s on at 3 PM ET, and yes, it’s going to his head.
• Adrian’s TechTarget article on DAM: Implementing database monitoring for 201 CMR 17 compliance.
• Anton covers Rich’s Secure360 presentation.
• How to Protect Your Privacy from Facebook. Rich goes pretty in-depth in this TidBITS article on Facebook privacy.

Favorite Securosis Posts
• Adrian Lane: Oracle’s Acquisition of Secerno.
• Mike Rothman: Is Twitter Making Us Dumb? Bloggers, Please Come Back. Get off the Twitter and think full thoughts. Please.
• Rich: Symantec’s Identity Crisis.

Other Securosis Posts
• Quick Wins with DLP Webcast Next Week.
• Privacy is (Still) Personal.
• Australian Border Security Insanity.
• Lessons from LifeLock’s Lucky 13.
• How to Survey Data Security Outcomes?
• Incite 5/19/2010: Benefits of Bribery.
• Understanding and Selecting SIEM/LM: Business Justification.
• Talking Database Assessment with Imperva.
• FireStarter: Killing the Next Generation.

Favorite Outside Posts
• Rich: Anton has a compliance epiphany. He gets it. Compliance is only a force to change the economics in a non-self-correcting system.
• Adrian Lane: What The Internet Knows About You. Very interesting look at the security implications of web browser caching.
• Mike Rothman: Presenting the humble ukulele: Jake Shimabukuro wows TEDxTokyo. Who thought a ukulele could be so cool? But this is really about managing expectations…. (I think I saw him play live at a Jimmy Buffett show –Rich)

Project Quant Posts
• DB Quant: Planning Metrics (Part 4).
• DB Quant: Planning Metrics (Part 3): Planning for Monitoring.
• DB Quant: Planning Metrics (Part 2).
• DB Quant: Planning Metrics (Part 1).
Research Reports and Presentations
• Understanding and Selecting a Database Encryption or Tokenization Solution.
• Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts
• WordPress Attacks Ongoing.
• Fraud Bazaar Carders.cc Hacked.
• Feds seek feedback on “game changing” R&D ideas.
• Commercial Quantum Cryptography System Hacked.
• Hardware Lockdown Initiative Cracks Down On Cloning, Counterfeiting.
• Andy the IT Guy with a great policy post. If you’re going to the Cloud, seek the advice of an expert.
• Technical details of the Street View WiFi payload controversy. This shouldn’t be a controversy. Rob Graham explains why.
• Heartland Settles with MasterCard.
• Local utility fined for SCADA security violations.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Pablo, in response to How to Survey Data Security Outcomes?

In terms of control effectiveness, I would suggest to incorporate another section aside from ‘number of incidents’ where you question around unknowns and things they sense are all over the place but have no way of knowing/controlling. I’ll break out my comment in two parts: 1 – “philosophical remarks” and 2 – suggestions on how to implement that in your survey.

1 – “philosophical remarks”
If you think about it, effectiveness is the ability to illustrate/detect risks and prevent bad things from happening. So, in theory, we could think of it as a ratio of “bad things understood/detected” over “all existing bad things that are going on or could go on” (by ‘bad things’ I mean sensitive data being sent to wrong places/people, being left unprotected, etc. – with ‘wrong/bad’ being a highly subjective concept). So in order to have a good measure of effectiveness we need both the ‘numerator’ (which ties to your question on ‘number of incidents’) and also a ‘denominator’. The ‘denominator’ could be hard to get at, because, again, things are highly subjective, and what constitutes ‘sensitive’ changes in the view of not only the security folks, but more importantly, the business. (BTW, I have a slight suggestion on your categories that I include at the bottom of this post.) However, I believe it is important that we get a sense of this ‘denominator’ or at least the perception of this ‘denominator’. My own personal opinion on this, by speaking to select CISOs, is they feel things are ‘all over the place’ (i.e., the denominator is quite quite large).

2 – Suggestions on how to implement that in your survey
(We had to cut this quote for space,


Symantec’s Identity Crisis

Updated: 8/25/2010

Storefront-Backtalk magazine had an interesting post on Too Much Encrypt = Cyberthief Gift. And when I say ‘interesting’, I mean the topics are interesting, but the author (Walter Conway) seems to have gotten most of the facts wrong in an attempt to hype the story. The basic scenario the author describes is correct: when you encrypt a very small range of numbers/values, it is possible to pre-compute (encrypt) all of those values, then match them against the encrypted values you see in the wild. The data may be encrypted, but you know the contents because the encrypted values match. The point the author is making is that if you encrypt the expiration date of a credit card, an attacker can easily guess the value. OK, but what’s the problem? The guys over at Voltage hit the basic point on the head: it does not compromise the system. The important point is that you cannot derive the key from this form of attack. Sure, you can confirm the contents of the enciphered text, but this is not really an attack on the encryption algorithm or the key – it’s poorly deployed cryptography. It’s one of the interesting aspects of encryption and hashing functions: if you make the smallest of changes to the input, you get a radically different output. If you add randomness (Updated: per Jay’s comments below, this was not clear; an Initialization Vector or feedback mode for encryption) or even a somewhat random ‘salt’ (for hashing), we have an effective defense against rainbow tables, dictionary attacks, and pattern matching. In an ideal world we would do this. It’s possible some places don’t … in commodity hardware, for example. It did dawn on me that this sort of weakness lingers on in many Point of Sale terminals that sell on speed and price, not security. These (relatively) cheap appliances don’t usually implement the best security: they use the fastest rather than the strongest cryptography, they keep key lengths short, they don’t do a great job of gathering randomness, and they generally skimp on the mechanical aspects of cryptography. They are also designed for speed, low cost, and generic deployments: salting, or concatenation of the PAN with the expiration date, is not always an option, or significant adjustments to the outbound data stream would raise costs. But much of the article talks about data storage, or the back end, and not the POS system. The premise that “Encrypting all your data may actually make you more vulnerable to a data breach” is BS. It’s not an issue of encrypting too much; the problem only arises in those rare cases where you encrypt in very small, digestible fields. The claim that encrypting all cardholder data “not only causes additional work but may actually make you more vulnerable to a data breach” is total nonsense. If you encrypt all of the data, especially if you concatenate the data, the resulting ciphertext does not suffer from the described attack. Further, I don’t believe that “Most retailers and processors encrypt their entire cardholder database”, making them vulnerable. If they encrypt the entire database, they use transparent encryption, so the data blocks are encrypted as whole elements. Each block has some degree of natural randomness because the database structure and pointers are present. And if they are using application layer or field level encryption, they usually salt or alter the initialization vector, or concatenate the entire record. That is not subject to a simple dictionary attack, and in no way produces a “Cyberthief Gift”.
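To make the point concrete, here is a minimal sketch using the pyca/cryptography package – our choice purely for illustration, since the article doesn’t reference any particular implementation. Deterministic encryption of a tiny domain like expiration dates lets anyone who learns a single plaintext/ciphertext pair recognize every other record with the same value; adding a per-record random IV (or concatenating the PAN with the expiration date, as described above) removes the matchable pattern. Note the key is never recovered in either case – which is exactly why this is a deployment problem, not an algorithm problem.

```python
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)  # the attacker never has this

def encrypt_deterministic(value: bytes) -> bytes:
    # No randomness: the same plaintext always yields the same ciphertext (AES-ECB).
    enc = Cipher(algorithms.AES(key), modes.ECB(), default_backend()).encryptor()
    return enc.update(value.ljust(16, b"\0")) + enc.finalize()

def encrypt_randomized(value: bytes) -> bytes:
    # Fresh random IV per record: identical plaintexts yield different ciphertexts (AES-CBC).
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CBC(iv), default_backend()).encryptor()
    return iv + enc.update(value.ljust(16, b"\0")) + enc.finalize()

# Two different cardholders who happen to share an expiration date:
print(encrypt_deterministic(b"05/2012") == encrypt_deterministic(b"05/2012"))  # True: matchable pattern
print(encrypt_randomized(b"05/2012") == encrypt_randomized(b"05/2012"))        # False: nothing to match
```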


Incite 5/19/2010: Benefits of Bribery

Don’t blink – you might miss it. No, I’m not talking about my prowess in the bedroom, but the school year. It’s hard to believe, but Friday is the last day of school here in Atlanta. What the hell? It feels like a few weeks ago we put the twins’ name tags on, and put them on the bus for their first day of kindergarten. The end of school also means it’s summertime. Maybe not officially, but it’s starting to feel that way. I do love the summer. The kids do as well, and what’s not to love? Especially if you are my kids. There is the upcoming Disney trip, the week at the beach, the 5-6 weeks of assorted summer camp(s), and lots of fun activities with Mom. Yeah, they’ve got it rough. Yet we still face the challenge of keeping the kids grounded when they are faced with a life of relative abundance. Don’t get me wrong, I know how fortunate I am to be able to provide my kids with such rich experiences as they grow up. But XX1 got our goat over the weekend, when one of her friends got an iPod touch for her birthday. Of course, her reaction was “Why can’t I have an iPod touch? All my friends have them.” Thankfully the Boss was there, as I doubt I would have responded well to that line of questioning. She calmly told XX1 that with an attitude like that, she’ll be lucky if we don’t take away all her toys. And that she needs to be grateful for what she has, not focused on what she doesn’t. To be clear, not all of her friends have iPod touches. She is prone to exaggeration, like her Dad. What she doesn’t know is our plan to give her a hand-me-down iPhone once we upgrade this summer. (Of course I’m upgrading, come on, now!) I think we need to tie it to some kind of achievement. Maybe if she works hard on her school exercises over the summer. Or is nice to her sister (yes, that is a problem). Or whatever kind of behavior we want to incent at any given time. There’s nothing like having a big anchor over her head to drag out every time she misbehaves. That’s right, it’s a bribe. I’m sure there are better ways than bribery to get the kids to do what we want. I’m just not sure what they are, and nothing we’ve tried seems to work like putting that old carrot out there and waiting for Pavlov to work his magic. – Mike.

Photo credits: “Unplug for safety” originally uploaded by mag3737

Incite 4 U

Where is the Blog Love? – I’m going to break the rules and link to one of my own posts. On Monday I called out the decline of blogging. Basically, people have either moved to Twitter or left the community discussion completely. Twitter is great, but it can’t replace a good blog war. In response, Andy the IT Guy, DanO, and LoverVamp jumped back on the scene. These are 3 sites I used to read every day (and still do, when they are updated) and maybe we can start rebuilding the community. Why is that important? Because blogs provide a more nuanced, permanent archive of knowledge, with more reasoned debate than Twitter, however wonderful, can sustain. – RM

Critical Infrastructure Condition Critical – We all take uninterrupted power for granted. Yet we security folks understand how vulnerable the critical infrastructure is to cyber-attacks. Dark Reading has an interesting interview with Joe Weiss, who has written a book about how screwed we are. A lot of the discussion sounds very similar to every other industry that requires the regulatory fist of God to come crashing down before they fix anything.
And NERC CIP is only a start, since it exempts the stuff that is really interesting, like networks and the actual control systems. Unfortunately it will take a massive outage caused by an attack to change anything. But we all know that, because we’ve seen this movie before. – MR

Desktop, The Way You Want It – I am a big fan of desktop virtualization, and I am surprised it has gotten such limited traction. I think people view it ass backwards. The label “dumb terminal” is in the back of people’s minds, and that’s not a progressive model. But desktop virtualization is much, much more than a refresh of the dumb terminal model. The ability to contain the work environment in a virtual server makes things a heck of a lot easier for IT, and benefits the employee, who can access a fully functional desktop from anywhere inside – and possibly outside – the company. Citrix giving each employee $2,100 to buy their own computer for work is a very smart idea. The benefits to Citrix are numerous. Every employee gets to pick the computer they want, for better or worse, and they are now invested in their choice, rather than considering a work laptop to be a disposable loaner. The work environment is kept safe in a virtual container, and employees still get fully mobile computing. Every user becomes a tester for the company’s desktop virtualization environment, bringing diverse environments under the microscope. And it shows how they can blend work and home environments, without compromising one for the other. This is a good move and makes sense for SMB and enterprise computing environments. – AL

Security 5.0 – HTML5 is coming down the pipe, and Veracode has some great advice on what to keep an eye on from a security perspective. Not to show my age, but I remember hand-coding sites in HTML v1, and how exciting it was when things like JavaScript started appearing. Any time we have one of these major transitions we see security issues crop up, and as you start leveraging all the new goodness it never hurts to start looking at security early in


How to Survey Data Security Outcomes?

I received a ton of great responses to my initial post looking for survey input on what people want to see in a data security survey. The single biggest request is to research control effectiveness: which tools actually prevent incidents. Surveys are hard to build, and while I have been involved with a bunch of them, I am definitely not about to call myself an expert. There are people who spend their entire careers building surveys. As I sit here trying to put the question set together, I’m struggling for the best approach to assess outcome effectiveness, and figure it’s time to tap the wisdom of the crowd. To provide context, this is the direction I’m headed in the survey design. My goal is to have the core question set take about 10-15 minutes to answer, which limits what I can do a bit.

Section 1: Demographics
The basics, much of which will be anonymized when we release the raw data.

Section 2: Technology and process usage
I’ll build a multi-select grid to determine which technologies are being considered or used, and at what scale. I took a similar approach in the Project Quant for Patch Management survey, and it seemed to work well. I also want to capture a little of why someone implemented a technology or process. Rather than listing all the elements, here is the general structure. For each technology/process, the adoption stage options are:
• Not Considering
• Researching
• Evaluating
• Budgeted
• Selected
• Internal Testing
• Proof of Concept
• Initial Deployment
• Protecting Some Critical Assets
• Protecting Most Critical Assets
• Limited General Deployment
• General Deployment
And to capture the primary driver behind the implementation:
• Directly Required for Compliance (but not an audit deficiency)
• Compliance Driven (but not required)
• To Address Audit Deficiency
• In Response to a Breach/Incident
• In Response to a Partner/Competitor Breach or Incident
• Internally Motivated (to improve security)
• Cost Savings
• Partner/Contractual Requirement
I know I need to tune these better and add some descriptive text, but as you can see I’m trying to characterize not only what people have bought, but what they are actually using, as well as to what degree and why. Technology examples will include things like network DLP, Full Drive Encryption, Database Activity Monitoring, etc. Process examples will include network segregation, data classification, and content discovery (I will tweak the stages here, because ‘deployment’ isn’t the best term for a process).

Section 3: Control effectiveness
This is the tough one, where I need the most assistance and feedback (and I already appreciate those of you with whom I will be discussing this stuff directly). I’m inclined to structure this in a similar format, but instead of checkboxes use numerical input. My concern with numerical entry is that I think a lot of people won’t have the numbers available. I can also use a multi-select with None, Some, or Many, but I really hate that level of fuzziness and hope we can avoid it. Or I can do a combination, with both numerical and ranges as options. We’ll also need a time scale: per day, week, month, or year. Finally, one of the tougher areas is that we need to characterize the type of data, its sensitivity/importance, and the potential (or actual) severity of the incidents. This partially kills me, because there are fuzzy elements here I’m not entirely comfortable with, so I will try to constrain them as much as possible using definitions.
I’ve been spinning some design options, and trying to capture all this information without taking a billion hours of each respondent’s time isn’t easy. I’m leaning towards breaking severity out into four separate meta-questions, and dropping the low end to focus only on “sensitive” information – which if lost could result in a breach disclosure or other material business harm.
• Major incidents with Personally Identifiable Information or regulated data (PII, credit cards, healthcare data, Social Security Numbers). A major incident is one that could result in a breach notification, material financial harm, or high reputation damage. In other words, something that would trigger an incident response process and involve executive management.
• Major incidents with Intellectual Property (IP). A major incident is one that could result in material financial harm due to loss of competitive advantage, public disclosure, contract violation, etc. Again, something that would trigger incident response and involve executive management.
• Minor incidents with PII/regulated data. A minor incident would not result in a disclosure, fines, or other serious harm. Something managed within IT, security, and the business unit without executive involvement.
• Minor incidents with IP.
Within each of these categories, we will build our table question to assess the number of incidents and false positive/negative rates. For each technology, the columns are:
• Incidents Detected
• Incidents Blocked
• Incidents Mitigated (incident occurred but loss mitigated)
• Incidents Missed
• False Positives Detected
with a time scale of Per Day, Per Month, Per Year, or N/A.
There are some other questions I want to work in, but these are the meat of the survey and I am far from convinced I have it structured well. Parts are fuzzier than I’d like, I don’t know how many organizations are mature enough to even address outcomes, and I have a nagging feeling I’m missing something important. So I could really use your feedback. I’ll fully credit everyone who helps, and you will all get the raw data to perform your own analyses.
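For anyone who wants to poke at the structure directly, here is a rough sketch of the proposed grids as structured data. This is purely illustrative – the labels come from the draft above, while the technology name and every number in the example row are made up:

```python
# Rough data model for the proposed survey grids (labels taken from the draft above).

ADOPTION_STAGES = [
    "Not Considering", "Researching", "Evaluating", "Budgeted", "Selected",
    "Internal Testing", "Proof of Concept", "Initial Deployment",
    "Protecting Some Critical Assets", "Protecting Most Critical Assets",
    "Limited General Deployment", "General Deployment",
]

PRIMARY_DRIVERS = [
    "Directly Required for Compliance (but not an audit deficiency)",
    "Compliance Driven (but not required)", "To Address Audit Deficiency",
    "In Response to a Breach/Incident",
    "In Response to a Partner/Competitor Breach or Incident",
    "Internally Motivated (to improve security)", "Cost Savings",
    "Partner/Contractual Requirement",
]

INCIDENT_CATEGORIES = [
    "Major incident - PII/regulated data", "Major incident - IP",
    "Minor incident - PII/regulated data", "Minor incident - IP",
]

OUTCOME_COLUMNS = [
    "Incidents Detected", "Incidents Blocked", "Incidents Mitigated",
    "Incidents Missed", "False Positives Detected",
]

TIME_SCALES = ["Per Day", "Per Month", "Per Year", "N/A"]

# One hypothetical response row for a single technology:
example_row = {
    "technology": "Network DLP",
    "stage": "Initial Deployment",
    "driver": "Compliance Driven (but not required)",
    "outcomes": {
        "Major incident - PII/regulated data": {
            "Incidents Detected": 3, "Incidents Blocked": 2,
            "Incidents Mitigated": 1, "Incidents Missed": 0,
            "False Positives Detected": 12, "time_scale": "Per Month",
        },
    },
}
```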


Understanding and Selecting SIEM/LM: Business Justification

It’s time to resume our series on Understanding and Selecting a SIEM/Log Management solution. We have already discussed what problems this technology solves, with Use Cases 1 & Use Cases 2, but that doesn’t get a project funded. Next we need to focus on making the business case for the project and examine how to justify the investment in bean counter lingo.

End User Motivations and Business Justification
Securosis has done a lot of work on the motivation for security investments. Unfortunately our research shows budgets are allocated to visceral security issues people can see and feel, rather than being based on critical consideration of risks to the organization. In other words, it’s much harder to get the CEO to sign off on a six-figure investment when you can’t directly demonstrate a corresponding drop in profit or an asset loss. Complicating matters, in many cases – such as the theft of a credit card – it’s someone else who suffers the loss. Thus compliance and/or regulation is really the only way to justify investments to address the quiet threats. The good news relative to SIEM and Log Management is that the technology is really about improving efficiency by enhancing the ability to deal with the mushrooming amount of data generated by network and security devices. Or being able to detect an attack designed to elude a firewall or IPS (but not both). Or even making reporting and documentation (for compliance purposes) more efficient. You can build a model to show improved efficiency, so of all security technologies you’d figure SIEM/Log Management would be pretty straightforward to justify. Of course, putting together a compelling business justification does not always result in a funded project. Remember, when money gets tight (and when is money not tight?) sometimes it’s easier to flog employees to work harder, as opposed to throwing high-dollar technology at the problem. Yes, the concept of automation is good, but quantifying the real benefits can be challenging.

Going Back to the Well
Our efforts are also hamstrung by a decade of mismatched expectations relative to security spending. Our finance teams have seen it all, and in lots of cases haven’t seen the tangible value of the security technology. So they are justifiably skeptical of yet another ROI model showing a two-week payback on a multi-million dollar investment. Yes, that’s a bit facetious, but only a bit. When justifying any investment, we need to make sure not to attempt to measure what can’t be accurately measured, which inevitably causes the model to collapse under its own cumbersome processes and assumptions. We also need to move beyond purely qualitative reasoning, which produces hard-to-defend results. Remember that security is an investment that produces neither revenue nor fully quantifiable results, so trying to model it is asking for failure. Ultimately, having both bought and sold security technology for many years, we’ve come to the conclusion that end user motivations can be broken down pretty simply into two buckets: Save Money or Make Money. Any business justification needs to very clearly show the bean counters how the investment will either add to the top line or help improve the bottom line. And that argument is far more powerful than eliminating some shadowy threat that may or may not happen. Although depending on the industry, implementing log management (in some form) is not optional. There are regulations (namely PCI) that specifically call out the need to aggregate, parse, and analyze log files.
So the point of justification becomes what kind of infrastructure is needed, and at what level of investment – since solutions range from free to millions of dollars. To understand where our economic levers are as we build the justification model, we need to get back to the use cases (Part 1, Part 2), and show how these can justify the SIEM/Log Management investment. We’ll start with the two use cases that are pretty straightforward to justify, because there are hard costs involved.

Compliance Automation
The reality is most SIEM/Log Management projects come from the compliance budget. Thus compliance automation is a “must do” business justification, because regulatory or compliance requirements must be met. These are not options. For example, if your board of directors mandates new Sarbanes-Oxley controls, you are going to implement them. If your business accepts credit cards for Internet transactions, you are going to comply with the PCI Data Security Standard. But how do you justify a tool to make the compliance process more efficient? Get out your stopwatch and start tracking the time it takes you to prepare for these audits. Odds are you know how long it took to get ready for your last audit, and the auditor is going to continue looking over your shoulder – asking for more documentation on policies, processes, controls, and changes. The business case is based on the fact that the amount of time it takes to prepare for the audit is going to keep going up, and you need automation to keep those costs under control. Whether the audit preparation budget gets allocated for people or tools shouldn’t matter. So you pay for SIEM/Log Management with the compliance budget, but the value accrues to both the security function and operations. Sounds like a win/win to us.

Operational Efficiency
Our next use case is about improving efficiency, and this is relatively straightforward to justify. If you look back at the past few years, the perimeter defenses of your organization have expanded significantly. This perimeter sprawl is due to purpose-built devices being implemented to address specific attack vectors. Think email gateway, web filter, SSL VPN, application-aware firewall, web application firewall, etc. All of which have a legitimate place in a strong perimeter. Specifically, each device requires management to set policies, monitor activity, and act on potential attacks. Each system requires time to learn, time to manage, and time to update – which requires people, and additional people aren’t really in the spending plan nowadays. Operational efficiency means less time
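The compliance automation argument above boils down to simple arithmetic: audit-prep hours saved, times a loaded hourly rate, compared against what the tooling costs. Here is a back-of-the-envelope sketch – every number below is a made-up placeholder, so substitute your own stopwatch data:

```python
# Back-of-the-envelope model for the compliance automation justification.
# All inputs are hypothetical placeholders for illustration only.

def annual_audit_prep_cost(hours_per_audit: float, audits_per_year: int,
                           loaded_hourly_rate: float) -> float:
    """Yearly cost of audit preparation in staff time."""
    return hours_per_audit * audits_per_year * loaded_hourly_rate

manual_cost = annual_audit_prep_cost(hours_per_audit=400, audits_per_year=2,
                                     loaded_hourly_rate=90)
automated_cost = annual_audit_prep_cost(hours_per_audit=120, audits_per_year=2,
                                        loaded_hourly_rate=90)

siem_annual_cost = 45_000  # license + maintenance + care and feeding (made up)

net_impact = manual_cost - automated_cost - siem_annual_cost
print(f"Manual prep: ${manual_cost:,.0f}  Automated prep: ${automated_cost:,.0f}  "
      f"Net annual impact: ${net_impact:,.0f}")
```

Even with generous assumptions the net number can come out modest, which is why the efficiency math works best alongside, rather than instead of, the compliance mandate itself.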


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.