Securosis

Research

Friday Summary: January 14, 2011

Apparently I got out of New York just in time. The entire eastern seaboard got “Snowmageddon II, the Blanketing” a few hours after I left. Despite a four-legged return flight, I did actually make it back to Phoenix. And Phoenix was just about the only place in the US where it was not snowing – I heard there was snow in 48 states simultaneously.

I was in NYC for the National Retail Federation’s 100th anniversary show. It was my first. I was happy to be invited, as my wife and her family have been in retail for decades, and I was eager to speak at a retail show. And this was the retail show. I have listened to my family talk about retail security for 20 years, and it used to be that their only security challenge was shrinkage. Now they face just about every security problem imaginable, as they leverage technology in every facet of operations. Supply chain, RFID, POS, BI systems, CRM, inventory management, and web interfaces are all at risk. On the panel were Robert McMillion of RSA and Peter Engert of Rooms to Go. We were worried about filling an hour-and-a-half slot, and doubly anxious about whether anyone would show up to talk about security on a Sunday morning. But the turnout was excellent, with a little over 150 people, and we ended up running long. Peter provided a pragmatic view of security challenges in retail, and Robert provided a survey of security technologies retail merchants should consider. It was no surprise that most of the questions from the audience were on tokenization and removal of credit cards. I get the feeling that every merchant who can’t simply get rid of credit cards – those who have tied the credit card numbers to their database primary keys – will explore tokenization.

Oddly enough, I ended up talking with tons of people at the hotel and its bar, more than I did at the conference itself. People were happy to be there. I guess they were there for the entire week of the show, and very chatty. Lots of marketing people were interested in talking about security, which surprised me. And they had heard about tokenization and wanted to know more. My prodding questions about POS and card swipe readers – basically: when will you upgrade them so they are actually secure? – fell on deaf ears. Win some, lose some, but I think it’s healthy that data security is a topic of interest in the retail space.

One last note: as you can probably tell, the number of blog entries is down this week. That’s because we are working on the Cloud Security Alliance Training Course. And fitting both the stuff you need to know and the stuff you need to pass the certification test into one day is quite a challenge. Like all things Securosis, we are applying our transparent research model to this effort as well! So we ask that you please provide feedback or ask questions about any content that does not make sense. I advise against asking for answers to the certification test – Rich will give you some. The wrong ones, but you’ll get them. Regardless, we’ll post the outlines over the next few days. Check them out!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s DR post on Vodafone’s breach.
  • Rich quoted in the Wall Street Journal.
  • Adrian at the National Retail Federation Show, telling the audience they suck at security. Did I say that?
  • Mike, talkin’ to Shimmy about Dell, brand damage, and the Security Bloggers meet-up.

Favorite Securosis Posts

  • Rich: The Data Breach Triangle. We didn’t push out a lot of content this week, so I’m highlighting an older post. In line with Gunnar’s post on where we spend, I find it interesting that the vast majority of our security spending focuses on ingress… which in many ways is the toughest problem to solve.
  • Mike Rothman: What Do You Want to See in the First CSA Training Course? Yes, we have a murderers’ row of trainers. And you should go. But first tell us what needs to be in the training…
  • David Mortman: What Do You Want to See in the First Cloud Security Alliance Training Course?
  • Gunnar Peterson: What Do You Want to See in the First Cloud Security Alliance Training Course? Sensing a theme here?
  • Adrian Lane: Mobile Device Security: 5 Tactics to Protect Those Buggers.

Other Securosis Posts

  • Funding Security and Playing God.
  • Incite 1/12/2011: Trapped.

Favorite Outside Posts

  • Rich: Gunnar’s back of the envelope. Okay, I almost didn’t pick this one because I wish he had written it for us. But although the numbers aren’t perfect, it’s hard to argue with the conclusion.
  • Mike Rothman: Top 10 Things Your Log Management Vendor Won’t Tell You. Clearly there is a difference between what you hear from a vendor and what they mean. This explains it (sort of)…
  • David Mortman: Incomplete Thought: Why Security Doesn’t Scale… Yet. Damn you @Beaker! I had a section on this very need in the upcoming CSA training. And, of course, you said it far better…
  • Adrian Lane: Can’t decide between this simple explanation of the different types of cloud databases and this pragmatic look at cloud threats.
  • Gunnar Peterson: Application Security Conundrum by Jeremiah Grossman, with honorable mention to The Virtues of Monitoring.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • China CERT: We Missed Report On SCADA Hole.
  • SAP buying SECUDE.
  • TSA Worker Gets 2 Years for Planting


Funding Security and Playing God

I was reading shrdlu’s post on Connecting the risk dots over on the Layer 8 blog. I thought the point of contention was how to measure cost savings. Going back and reading the comments, that’s not it at all:

“we can still show favorable cost reduction by spotting problems and fixing early.” You have to PROVE it’s a problem first … This is why “fixing it now vs fixing it sooner” is a flawed argument. The premise is that you MUST fix, and that’s what executives aren’t buying. We have to make the logic work better.

She’s right. Executives are not buying in, but that’s because they don’t want to. They don’t want to comply with SOX or pay their taxes either, but they do it anyway. If your executives don’t want to pay for security testing, use a judo move and tell them you agree – but the next time the company builds software, do it without QA. Tell your management team that they have to PROVE there is a problem first. Seriously.

I call this the “quality architect conundrum”. It’s so named because a certain CEO (who shall remain nameless) raised this same argument every time I tried to hire an architect who made more than minimum wage. My argument was “This person is better, and we are going to get better code, a better product, and happier customers. So he is worth the additional salary.” He would say “Prove it.” Uh, yeah. You can’t win this argument, so don’t head down that path.

Follow my reasoning for a moment. For this scenario I play God. And as God, I know that the two architectural candidates for software design are both capable of completing the project I need done. But I also know that during the course of the development process, Architect A will make two mistakes, and Architect B will make 8. They are both going to make mistakes, but how many and how badly will vary. Some mistakes will be fixed in design, some will be spotted and addressed during coding, and some will be found during QA. One will probably be with us forever because we did not see the limitation early enough, and we’ll be stuck with it. So as God I know which architect would get the job done with fewer problems, resulting in less work and less time wasted. But then again, I’m God. You’re not. You can’t prove one choice will cause fewer problems before they occur.

What we discover, being God or otherwise, is that from design through the release cycles a) there will be bugs, and b) there will be security issues. Sorry, it’s not optional. If you have to prove there is a problem before you can fund security, you are already toast. You build it in as a requirement.

Do we really need to prove Deming was right again? It has been demonstrated many times, with quantifiable metrics, that finding issues earlier in the product development cycle reduces overall costs to an organization. I have demonstrated, within my own development teams, that fixing a bug found by a customer is an order of magnitude more expensive than finding and fixing it in house. While I have seen diminishing returns on some types of security testing investments, and some investments work out better than others, I found no discernible difference in the cost of security bugs vs. those having to do with quality or reliability. (A toy back-of-the-envelope version of this math appears at the end of this post.) Failing deliberately, in order to justify action later, is still failure.
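Since I keep leaning on that order-of-magnitude claim, here is a toy back-of-the-envelope model in Python. Every number in it is hypothetical – the per-stage fix costs and the defect counts for the two architects are made up purely to show the shape of the argument, not measured data.

# Toy cost model for the "find it earlier" argument. Per-stage costs and
# defect counts are hypothetical, chosen only to illustrate the math --
# swap in your own numbers.

FIX_COST = {            # relative cost to fix one defect, by stage found
    "design": 1,
    "coding": 5,
    "qa": 20,
    "customer": 100,    # the "order of magnitude worse" case
}

def total_cost(defects_found):
    """Sum the cost of defects given a {stage: count} breakdown."""
    return sum(FIX_COST[stage] * count for stage, count in defects_found.items())

# Architect A: 2 mistakes, mostly caught early.
# Architect B: 8 mistakes, several of which slip through to QA and customers.
architect_a = {"design": 1, "coding": 1, "qa": 0, "customer": 0}
architect_b = {"design": 2, "coding": 3, "qa": 2, "customer": 1}

print("A:", total_cost(architect_a))   # 6
print("B:", total_cost(architect_b))   # 157

Even with generous assumptions, the handful of defects that escape to QA and customers dominates the total – which is the whole point of building security (and quality) in as a requirement rather than proving the problem after the fact.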


What Do You Want to See in the First Cloud Security Alliance Training Course?

It leaked a bit over Twitter, but we are pretty excited that we hooked up with the Cloud Security Alliance to develop their first training courses. Better yet, we’re allowed to talk about it and solicit your input. We are currently building two courses for the CSA to support their Certificate of Cloud Security Knowledge (CCSK) certification (both of which will be licensed out to training organizations). The first is a one-day CCSK Enhanced class which we will be delivering the Sunday before RSA. This includes the basics of cloud computing security, aligned with the CSA Guidance and ENISA risk assessment documents, plus some hands-on practice and material beyond the basics. The second class is the CCSK Review, a 3-hour course optimized for online delivery, to prep you for the CCSK exam.

We don’t want to merely teach to the book, so we are structuring the course to cover all the material in a way that makes more sense for training. Here is our current module outline, with the person responsible and their Twitter handle in case you want to send them ideas:

  • Introduction and Cloud Architectures (Domain 1; Mike Rothman; @securityincite)
  • Creating and securing a public cloud instance (Domains 7 & 8; David Mortman; @mortman)
  • Securing public cloud data (Domains 5 & 11; Adrian Lane; @adrianlane)
  • Securing cloud users and applications (Domains 10 & 12; Gunnar Peterson; @oneraindrop)
  • Managing cloud computing security and risk (Domains 6 & 9, plus parts of 2, 3, & 4; James Arlen; @myrcurial)
  • Creating and securing a private cloud (Domain 13; Dave Lewis; @gattaca)

The entire class is being built around a fictional case study to provide context and structure, especially for the hands-on portions. We are looking at:

  • Setting up instances on AWS and/or Rackspace with a basic CMS stack (probably on the EC2 free tier, with Joomla).
  • Setting basic instance security (a rough sketch of what this looks like follows this post).
  • Encrypting cloud data (possibly the free demo of the Trend EBS encryption service).
  • Something with federation/OAuth.
  • A risk/threat modeling exercise.
  • Setting up a private cloud (vCloud or Eucalyptus).

Keep in mind this is a one-day class, so these will be very scripted and quick – there’s only so much we can cover. I will start pushing out some of the module outlines in our Complete feed (our Highlights RSS feed still has everything due to a platform bug – you only need to know that if you visit the site). We can’t put everything out there since this is a commercial class, but here’s your chance to influence the training. Also remember that we are deep into the project already, with a very tight deadline to deliver the pilot class at RSA. Thanks!
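To give a flavor of the first two exercises (launch an instance, then lock it down a bit), here is a minimal sketch using the Python boto library. boto is our choice for illustration, not necessarily what the lab will use, and the AMI ID, key pair name, and security group name are placeholders.

import boto.ec2

# Connect to an AWS region (credentials come from the usual boto config/env vars).
conn = boto.ec2.connect_to_region("us-east-1")

# Basic instance security: a security group that only allows SSH and HTTP.
sg = conn.create_security_group("ccsk-lab", "CCSK lab: SSH and web only")
sg.authorize(ip_protocol="tcp", from_port=22, to_port=22, cidr_ip="0.0.0.0/0")
sg.authorize(ip_protocol="tcp", from_port=80, to_port=80, cidr_ip="0.0.0.0/0")

# Launch a micro instance for the CMS stack. "ami-xxxxxxxx" and "lab-key"
# are placeholders -- substitute a real AMI and your own key pair.
reservation = conn.run_instances(
    "ami-xxxxxxxx",
    key_name="lab-key",
    instance_type="t1.micro",
    security_groups=["ccsk-lab"],
)
print("Launched:", reservation.instances[0].id)

The real lab will be far more scripted than this, but the shape is the same: create a restrictive security group first, then launch the instance into it, rather than trying to harden it after the fact.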


Incite 1/12/2011: Trapped

I enjoy living in the South (of the US). I’m far enough North that we get seasons. But far enough South to not really be subjected to severe winter weather. It’s kind of like porridge in the story of the 3 bears. Living in ATL is just right for me. Usually. In a typical year, we’ll see snow maybe twice. And it will be a dusting, usually gone within an hour. Only once in the 6 years I’ve lived in Atlanta has there been enough snow to even make a snowman – and Frosty it wasn’t. Which is fine by me.

But this weekend we got hammered. 6 inches in most places. I know, you rough and tumble Northerners laugh at 6 inches. That’s not enough to even start up your snow blower. I get that. But you are prepared and you have the right equipment to deal with the snow. We don’t. I’ve seen it written that Chicago has 200 snow plows. Atlanta has 8. Seriously. And I live about 30 miles north of Atlanta, so we have zero snow plows. Even if you get a few inches of snow, it’s usually above freezing, so it melts enough to clear the roads and get on with business. Not this time. When it got above freezing, we got frozen rain. And then it got colder, so anything that melted (or rained) then froze on the roads. I’m a good winter driver and I know enough to not mess with ice. I even had to shovel. Thankfully, I didn’t toss my good shovel from up North. It still worked like a charm – though my back, not so much.

So basically I’m trapped. And so are the Boss and kids. They canceled school for the past two days, and it’s not clear (given the forecast for more freezing weather) that they will have school at all this week. Thankfully the snow is still novel for them, so they go out and sled down a hill in our back yard in a laundry basket. Yes, a laundry basket. That’s a southern kids’ sled, don’t you know? I’ll give the kids props for creativity. But a week at home with the kids without the ability to go do stuff is going to be hard. For the Boss. I’ll be sequestered in my cave looking busy. Very very busy.

OK, I’m not totally trapped. I did escape for an hour this afternoon to brave the slush and other wacky drivers. I had to pick up a prescription and get some bread. The roads were passable, but bad. And to add insult to injury, Starbucks closed about 20 minutes after I got there, so I couldn’t even get much writing done. My routine is all screwed up this week. I know this too shall pass. The snow will melt, the kids will go back to school, and things will return to normal. But to be honest, it can’t pass soon enough. We love the kids. But we also love it when they get on the bus each morning and become their teachers’ problems for 6 hours. -Mike

Photo credits: “Snowed in Snowdon” originally uploaded by zalgon

Vote for Me. I’ll buy you a beer.

OK, I’ll finally come clean. I’m an attention whore. Why else do you think I’d write this drivel every week? Yes, my therapist has plenty of theories. But it seems that some of you think this stuff is entertaining. Well, at least the judges of the Social Security Blogger Awards do. I’m both flattered and excited to once again be nominated in the Most Entertaining Security Blog category. I actually won the award in 2008, but was crushed like a grape in 2009 by Hoff. And deservedly so. But this year Hoff is thankfully in another category, so my fellow nominees are Jack Daniel’s Uncommon Sense, the Naked Sophos folks, and some Symantec bunker dwellers from the UK. All very entertaining and worthy competition.
I’ll reiterate an offer to buy a beer for anyone who votes for me, but there is a catch. You can only collect at the Security Bloggers meet-up at RSA. Seems Shimmy is on to my evil plans. So if you like beer. Or if you like me. Or if you feel sorry for me. Or if you want my Mom to be able to kibitz with her group of Yentas in Florida about her entertaining blogger son. Help out a brother with a vote.

Incite 4 U

Brand this: George Hulme argues against the idea that security doesn’t matter to a company’s brand. George can (on rare occasions) be a disagreeable guy, but this one is a bit of a head scratcher. If the measuring stick is stock price, then George is wrong. There has been no negative effect on stock price from a security breach. George states that companies suffering breaches have greater churn than those that don’t. But evidently not enough to impact their stocks. I did a podcast with Shimmy yesterday, and toward the end we discussed this. My point is that breaches clearly cost money, both in terms of the direct costs and the opportunity cost of not doing something more strategic with those resources. Those are real costs. But do they outweigh the additional costs incurred by trying to be secure? That is the zillion dollar question. And there isn’t any data to prove it one way or the other. As Rich always preaches to us, we need to be very careful when we infer causation without specific data. Which I think has happened on both sides of this discussion. – MR

Don’t blame the hinge manufacturer if you leave the door open: I get sort of annoyed when people blame someone else for their problems. Take the latest brouhaha over the brand new Mac App Store. It turns out – and you might want to sit down for this one – that if you don’t follow Apple’s guidelines on


Marketing Skills for Security Wonks: Leveraging Elmer FUDd

At the risk of having Rich yell at me again (like he did early last year) because I’m writing too much high-level stuff, let’s get back to a key soft skill of being a security manager. It’s not like we got a lot better at that in 2010, right? I talked about motivating your team earlier this week, so now let’s turn to marketing and sales. Right – you are a security guy/gal, so what do you need to know about sales? Well, unless your senior management comes to you with a blank check and a general understanding of how to protect your stuff, you need to map out a security program and sell it to them. If you end up with about 20% of the budget you need every year, and at layoff time you lose 40% of an already understaffed team, guess what? You have a sales problem. And that means you may have to get your Elmer FUDd on.

A post by Dave Shackleford got me thinking about FUD (fear, uncertainty, and doubt) from a user context. It’s a constant presence when dealing with vendors, who are always trying to scare their customers into buying something. But end users can leverage FUD as well. Just be careful – it’s a bit like using live exploits. You might get what you want, but in the process take down the entire system.

I’ve been talking for years about the need for security managers to focus on communications and leave the firewall rules to the admins. Part of that communication strategy is about creating urgency. Urgency gets things done. Urgency doesn’t allow folks to debate and get into an analysis paralysis loop. You need urgency. And used correctly, FUD can create urgency.

You are probably thinking about how distasteful this whole discussion seems. You can’t stand it when your sales reps try to throw a FUD balloon at you, and now you need to do the same thing? Just hear me out. The deal with using FUD in an end user context is pretty straightforward – it’s really just about telling the truth, the whole truth. And that’s really the difference. The amount of risk most organizations face can be overwhelming, so most security managers downplay it, or run out of time to tell the entire story. What you want to do is explain to senior management, preferably with examples of how it happened to other folks (who look like your company and managers), all the ways you can be compromised. Yes, the list is long. I recommend you do this within the context of a risk assessment and the associated triage plan to fix the most urgent issues. This process is outlined in Steps 2 and 3 of the Pragmatic CSO.

You see, if you show them you can get killed 200 ways, but ask for funding to fix only 50, it’s a win-win. The reality is that even if you had the resources, you couldn’t fix all 200 anyway, and by the time you are done there will be another 200. But that can stay just between us. The senior folks think you are making tough choices to fix the stuff that’s most important and exposed – which you are.

So as you hunt for those wascally wabbits each day, don’t be too scared to break out the Elmer FUDd from time to time. Sometimes the end justifies the means. But don’t tell the vendors I said FUD is OK (sometimes). That needs to remain our little secret.

Photo credits: “Elmer Fudd” originally uploaded by Joe Shlabotnik


Friday Summary: January 7, 2011

Compliance and security have hit the big time, and I have the proof. Okay: all of us who live, eat, and breathe security already know that compliance is a big deal and a pain in the ass – but it isn’t as if “normal” people ever pay attention, right? Other than CEOs and folks who have to pay for our audits, right? And according to the meme that’s been circulating since I started in the business, no one actually cares about security until they’ve been hit, right?

Well, today I was sitting at my favorite local coffee shop when the owner came over to make fun of me for having my Mac and iPad out at the same time. We got to talking about their wireless setup (secure, but he doesn’t like the service) and he mentioned he was thinking of dropping the service and running it off his own router. I gave him some security tips, and he informed me that in no way, shape, or form would he connect his open WiFi to the same connection his payment system is on. Because he has to stay PCI compliant. Heck, he even knew what PCI PA-DSS was, and talked about buying a secure, compliant point of sale system! He’s not some closet security geek – just a dude running a successful small business (now in two locations). He’s a friggin’ Level 4 merchant, and still knows about PCI and compliant apps. I feel like kissing the sales guy who must have explained it all to him. And security? He never uses anything except his up-to-date Windows 7 computer to access his bank account. Now can we all shut up about not making a difference? Do you really think I could have had that conversation even a few years ago?

One last note: RSA is fast approaching. We (well, @geekgrrl) are working hard on the Securosis Guide to RSA 2011, the Recovery Breakfast announcement will go out soon, we’re cramming to finish the CSA training class, and we’ve locked in an awesome lineup for the RSA e10+ program we are running this year. And then there’s our sekret squirrel project. In other words, please forgive us if we are slow responding to email, phone calls, or beatings over the head.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mort quoted in “Incident response plans badly lacking, experts say”.
  • Kevin Riggins gives us a shout-out and review.

Favorite Securosis Posts

  • Mike Rothman: Mr. Cranky Faces Reality. Any time Adrian is cranky, you need to highlight that. I guess he is human after all.
  • Adrian Lane: The Evolving Role of Vulnerability Assessment and Penetration Testing in Web Application Security.
  • David Mortman: Web Application Firewalls Really Work.
  • Rich: BSIMM meets Joe the Programmer.

Other Securosis Posts

  • React Faster and Better: Initial Incident Data.
  • Mobile Device Security: Saying no without saying no.
  • Incite 1/5/2011: It’s a Smaller World, after All.
  • HP(en!s) Envy: Dell Buys SecureWorks.
  • Motivational Skills for Security Wonks: 2011 Edition.
  • Mobile Device Security: I can haz your mobile.
  • Coming Soon….
  • React Faster and Better: Chugging along.
  • React Faster and Better: Alerts & Triggers.

Favorite Outside Posts

  • Mike Rothman: Quora Essentials for Information Security Professionals. Lenny Z talks about how to use the new new social networking thingy: Quora. I’m a luddite, so maybe I’ll be there in a year or two, but it sounds cool.
  • Adrian Lane: thicknet: starting wars and funny hats. A couple weeks old, but a practical discussion of MitM attacks on Oracle. And Net8 is difficult to decipher.
  • Rich: Slashdot post on how China acquires IP. I suggest the full article linked by Slashdot, but it’s a translation, and even the short bits in the post are very revealing.

Project Quant Posts

  • NSO Quant: Index of Posts.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • Researcher breaks Adobe Flash sandbox security feature. He did not actually break anything, but figured out how to bypass the restriction.
  • Windows 0day in the wild.
  • SourceFire buys Immunet.
  • More perspective on the Gawker hack.
  • Chinese hackers dig into new IE bug, says Google researcher.
  • Breaking GSM With a $15 Phone… Plus Smarts.
  • The Dubai Job: Awesome article in GQ on the assassination.
  • Security risks of PDF.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to mokum von Amsterdam, in response to NSA Assumes Security Is Compromised.

One can not keep information secret that is accessable by >10 people over years, period. Mind you, ‘systems’ and ‘networks’ are not limited to the typical IT stuff one might think of but includes the people and processes. Trying to secure it is doomed to fail, so what one needs is to adjust the mindset to reality. Sorry, no spend-more-dollars solution from me…


Mobile Device Security: 5 Tactics to Protect Those Buggers

In this series we’ve tackled the threats these new handheld computers (er, mobile devices) present, as well as how we need to deal with folks culturally when they demand access to sensitive corporate information on mobile devices. As we wrap up this short series on mobile device security, let’s jump in and talk about a few things we can do to protect these devices. Since these mobile devices are really handheld computers, we need to think about the tactics that have been successful for securing our more traditional computers. Admittedly, ‘successful’ may be a bit optimistic, but there are still many lessons we can learn from the controls we use to protect laptops. Some of these fall into a traditional security technology bucket, while others tend to be more operational and management oriented. But really, those distinctions are hair-splitting. Things like secure configurations and access policies contribute to the safety of the data on the device, and that’s what’s important.

Tactic #1: Good Hygiene

I know you hate it every time you go to the dentist and see the little sign: “Only floss the teeth you want to keep.” I certainly do. But as much as I hate to admit it, it’s true. And the same goes for protecting mobile devices. We need a strong posture on these devices in order to have a chance of being secure. These policies won’t make you secure, but without them you have no chance.

  • Strong passwords: If you have sensitive data on your mobile devices, they need to be password protected. Duh. And the password should be as strong as practical. Not a 40-digit series of random numbers, but something that balances the user’s ability to remember it (and enter it n times per day) against an attacker’s ability to brute force it. And you want to wipe the device after 10 or so password failures.
  • Auto-lock: Along with the password, the device should lock itself after a period of inactivity. Again, finding the right setting is about your users’ threshold for inconvenience, the length of their passwords, and your ability to dictate something secure. 5-10 minutes is usually okay.
  • Data encryption: Make sure the device encrypts the data on it. Most mobile devices do this by default, but make sure.

Continuous Hygiene

With your dentist, doing a good brushing right before your appointment probably isn’t going to fool him or her if you haven’t flossed since the last appointment. But unless you are constantly checking whether the mobile device remains in accordance with your configuration policies, you can be fooled. Just because you set up a device correctly doesn’t mean it stays that way. For traditional networks, a technology like Network Access Control (NAC) can be used to check a device when it joins the network. This ensures it has the right patches and the right configuration, has been scanned for malware, etc. You should be doing the same thing for your mobile devices. Upon connection to your network, you can and should check to make sure nothing is out of compliance with policy. This helps block the user who gets his device from you and promptly jailbreaks it. Or does a hard reset to dump the annoying security controls you put in place. Or the one who turned off the password or auto-lock because it was too hard to deal with. Remember, users aren’t as dumb as we think they are. Well, some aren’t. So some of them will work to get around the security controls. Not maliciously (we hope), but to make things easier. Regardless of the security risks. Part of your job is to make sure they don’t manage it.
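For illustration only, here is a minimal Python sketch of that kind of posture check. It is not tied to any particular MDM or NAC product; the attribute names, thresholds, and the example device dictionary are all hypothetical stand-ins for whatever your management tool actually reports.

# Hypothetical device-compliance check, in the spirit of a NAC-style posture
# assessment for mobile devices. Attribute names and thresholds are made up
# for illustration -- map them to whatever your MDM/NAC actually reports.

REQUIRED_POLICY = {
    "passcode_enabled": True,      # Tactic #1: strong password
    "auto_lock_minutes_max": 10,   # Tactic #1: auto-lock window
    "encryption_enabled": True,    # Tactic #1: data encryption
    "jailbroken": False,           # continuous hygiene: no jailbreaks/hard resets
}

def check_compliance(device):
    """Return a list of policy violations for a device attribute dict."""
    violations = []
    if not device.get("passcode_enabled"):
        violations.append("passcode disabled")
    if device.get("auto_lock_minutes", 999) > REQUIRED_POLICY["auto_lock_minutes_max"]:
        violations.append("auto-lock window too long")
    if not device.get("encryption_enabled"):
        violations.append("device encryption off")
    if device.get("jailbroken"):
        violations.append("device is jailbroken")
    return violations

# Example: quarantine the device (however you enforce that) if anything fails.
device = {"passcode_enabled": True, "auto_lock_minutes": 30,
          "encryption_enabled": True, "jailbroken": False}
problems = check_compliance(device)
if problems:
    print("Quarantine device:", ", ".join(problems))

The enforcement side (quarantining the device, blocking mail sync, and so on) depends entirely on your environment; the point is simply that the check runs every time the device shows up, not just at provisioning.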
Tactic #2: Remote Wipe

Despite your best efforts, some users will lose their devices. Or their kids will drop them (especially the iDevices). Or they’ll break and be sent in for service. However it happens, the authorized user won’t be in control of the device, and that introduces risk for you. And of course they won’t tell anyone before sending the device into the shop, or losing it. So we get a memo asking for a replacement/loaner because they have to access the deal documents on the can. You need the ability to eliminate the data on the device remotely. This doesn’t have to be complicated, right? Authenticate properly and nuke it from orbit. Hopefully your user backed up his/her device, but that’s not your issue. Ultimately, if there is sensitive data on the mobile device, you need to be able to wipe it from anywhere in the world. One caveat here is that in order to wipe the device you must be able to connect to it. So if a savvy attacker turns it off, or puts it into airplane mode or something, you won’t be able to wipe it. That’s why having an auto-wipe policy after 10 password failures is critical. At some point, someone will try to get into the device, and that’s when you want to be rid of the data.

Tactic #3: Lock Down Network Access

It’s no secret that most public wireless networks are the equivalent of a seedy flea market. There are some legitimate folks there, but most are trying to rip you off. And given the inherent bandwidth limitations of cellular data, most users leverage WiFi whenever and wherever they can. That creates risk for us, who need to protect the data. So what to do? Basically, get a little selective about what networks you allow users to connect to. You can enforce a policy to ensure any WiFi network used offers some kind of encryption (ideally at least WPA2) to prevent snooping of the network traffic. Or you can VPN all the devices’ network traffic through your corporate network, so you can apply your web filtering and other protections, with encryption to rebuff sniffers. Unfortunately this isn’t easy to swing in reality. Remember, these devices don’t belong to your organization, so mandating


BSIMM meets Joe the Programmer

I always read Gary McGraw’s research on BSIMM. He posts plenty of very interesting data there, and we generally have so little good intelligence on secure code development that these reports are refreshing. His most recent post with Sammy Migues, on Driving Efficiency and Effectiveness in Software Security, raises some interesting questions, especially around the use of pen testing. Where and how to best deploy resources are questions every development team has, and I enjoyed his entire analysis of the results of different methods of resource allocation. Still, I have trouble relating to a lot of Gary’s research, as the BSIMM study focuses on firms with resources far in excess of anything I have ever seen.

I come from a different world. Yeah, I have programmed at large corporations, but the teams were small and isolated from one another. With the exception of Oracle, budgets for tools and training were just a step above non-existent. Smaller firms I worked for did not send people to training – HR hired someone with the skills we needed and let someone else go. Brutal, but true. So while I love the data Gary provides, it’s so foreign that I have trouble dissecting the findings and putting them to practical use. That’s my way of saying it does not help me in my day job.

There is a disconnect: I don’t get asked questions about what percentage of the IT budget goes to software security initiatives. That’s both because the organizations I speak with have software development as a separate department from IT, and because the expenditure for security-related testing, tools, development manpower, training, and management software is embedded deeply enough within the development process that it’s not easy to differentiate generic development stuff from security. I can’t frame the question of efficiency the same way Gary and Sammy do. Nobody asks what their governance policy should be. They ask: What tools should I use to track development processes? Within those tools, what metrics are available and meaningful? The entire discussion is a granular, pragmatic set of questions about collecting basic data points. The programmers I speak with don’t bundle SDL touchpoints this way, and they don’t qualify as balanced. They ask “of design review, code review, pen testing, assessment, and fuzzing – which two do I need most?” 800 developer buckets? 60, heck even 30, BSIMM activities? Not even close. Even applying a capability maturity model to code development is on the fringe. Mainly that’s because the firms and groups I worked in were too small to leverage a model like BSIMM – they would have collapsed under the weight of the process itself. I talk to a few large firms on a semi-regular basis, and plenty of small programming teams, and BSIMM never comes up. Now that I am on the other side of the fence as an analyst, and I speak with a wider variety of firms, BSIMM remains an IT mindset I don’t encounter with software development teams.

So I want to pose this question to the developers out there:

  • Is BSIMM helpful?
  • Has BSIMM altered the way you build secure code?
  • Do you find the maturity model process or the metrics helpful in your situation?
  • Are you able to pull out data relevant to your processes, or are the base assumptions too far out of line with your situation?
  • If you answered ‘Yes’ to any of these questions, were you part of the study?

I think the questions being asked are spot on – but they are framed in a context that is inaccessible or irrelevant for the majority of developers.

Incite 1/5/2011: It’s a Smaller World, after All

I’m happy to say the holiday season was pretty eventful for the Boss and her family. Her brother (and his wife) welcomed twin boys into the world right after Xmas. The whole process of creating life still astounds, and the idea of two at a time boggles the mind – even if you’ve been through it. Turns out we were up North when the new guys showed up (a week early), so we got to meet them in person. We live 600 miles apart, so that was an unexpected bonus. It also meant there was no shot at all of us attending the Bris. 8-day-old boys provide a little donation to the gods and everybody eats. It’s a festive occasion (for us – for the babies, not so much) and we hated the economic reality that we couldn’t travel to attend in person. But then over the hills we saw a glimmer of hope. Was it a plane? Nope. 5 tickets are just too much money. A train? Nope. Can’t take a day to go back and forth. It’s video conferencing.

Sure, Skype is fun for a little video conference with the grandparents from time to time. It’s also critical when traveling abroad, unless you like $2,000 phone bills. In this case, video allowed us to be at the Bris, from the comfort of our home office. The kids were off from school, and my brother-in-law set up his webcam to overlook the ceremony. So we all crouched around the computer and watched the ritual. We got to wave a lot, and they did a great job of including us in the ceremony. Of course it wasn’t exactly like being there, but it was a hell of a lot better than seeing a few pictures three days later.

When my kids were born, our option to do something similar was a $30,000 video conferencing system. You could fly in on the Concorde for less. And my brother-in-law would have needed a compatible system as well. Through the wonders of Moore’s Law and the kindness of the bandwidth gods, now we can be anywhere in the world at any time. Now, a Bris is not something you need (or even want) to see via a high-fidelity telepresence environment. But seeing the entire family gathered, and being able to participate ourselves from Atlanta, was amazing. And that’s why the world is getting to be a smaller place every day. Of course I don’t do much video, because Rich and Adrian know what I look like (pretty as that is) and I’d rather not have everybody see my 6-day stubble and bunny slippers (my usual work attire). But the technology is invaluable for connecting with those you like (and perhaps especially those you don’t like), when a phone call seems a bit 2-dimensional. Whether Apple’s FaceTime commercials bring a tear to your eye or not, you can’t disregard the experience. Video conferencing is going to happen, and I saw why on Monday. -Mike

Photo credits: “It’s a Small World!” originally uploaded by Thomas Hawk

Incite 4 U

Pen testing obsolete? Hardly… Val Smith laid out some bait regarding whether pen testing is rapidly becoming obsolete. I guess that depends on how you define pen testing. The traditional unsophisticated run of Core or Metasploit by a bunch of glorified monkeys to check the compliance boxes is actually alive and well. PCI will ensure that for years to come. But that clearly not-so-useful practice will become more automated and cheaper, like every other competitive commodity function. Val’s point at the end is that pen testing is evolving and needs to provide organizations with “a new type of service which tests their infrastructures and security postures in a different way”. That I agree with. There will always be a role for sophisticated white hats to try to break stuff. Maybe we stop calling that pen testing, which is fine by me. As long as you keep trying to break your stuff, call it whatever you want. – MR

Don’t hack me, bro! Mocana made news this week when they announced they had hacked into Internet TV set-top boxes. I don’t think anyone is really surprised by this. The entire set-top box / TV-as-Internet market is the poster boy for the feature advancement land grab, with companies furiously vying for a share of Internet TV audiences. But really, who wants to worry about security when all you want is frackin’ TV! Can’t we all just get along? Well, no, not really. I am willing to bet that any security measure beyond a password and some rudimentary session-based encryption never came up in the product design meetings. “Winning the market” is about features, and the winner can clean up the mess later. Or at least that is the attitude I see. But these devices are stripped-down computers. And they use standard networking protocols, in most cases with reduced-footprint variants of standard operating systems. And they are now attached to your home network. To me, Mocana is just pointing out the obvious, which is that these freakin’ things lack basic security. It probably did not take anything more than a MitM attack to intercept the credit card, but I am willing to bet they are susceptible to injection as well. Granted, Mocana sells security products to help developers and designers secure these devices, so their PR is self-serving (of course), but this whole segment needs a wake-up call. – AL

The name of the game? Reduce scope! I did a customer advisory board meeting for a client last year, and one of the attendees mentioned his specific goal was to reduce his PCI in-scope devices to zero. Right – he wanted to transition all protected data (and the associated processes) to external service providers and make PCI their problem. Certainly a noble goal, but I’m not sure how realistic that is for most organizations. Clearly the trend is towards higher segmentation


Mobile Device Security: Saying no without saying no

As we discussed in our first Mobile Device Security post (I can haz your mobile), supporting smartphones isn’t really a choice. You aren’t going to tell your CEO, or any other exec 5-6 pay grades above you, that they can’t use their iPad to access the deal documents on that multi-billion dollar acquisition. You know it’s much easier to read an iPad on the can than to lug the laptop around when taking care of business, right? If you are like most security professionals, your first instinct is to blurt out a resounding no when presented with a request to connect an Android phone to your network. But your instincts are wrong. That wasn’t a question. It was an order – or soon will be. So your best bet is to practice the deep breathing exercises your meditation guru suggested. Once you’ve gotten your pulse back to a manageable 130, you can and must have a constructive discussion about what resources are needed on the smartphone and why.

User Profiles Are Your Friend

The (sometimes fatal) mistake we see most often is treating every user as equivalent to every other user with the same device. This leads to providing the same level of access, regardless of who the user is. Allow us to suggest an alternative: profile users based on what they need to access, define 3-4 user types, and build your policies based on what they need, not what devices they have. For instance, you might have three user types:

  • Executive: These folks can crush you with a stroke of their pen. Okay – a pen is old school. How about a click of their mouse? These people get what they want, because saying no is not an option. They should be configured for email and document access, with a VPN client so they can access the corporate network (from the can).
  • Connected users: There will be another group of users who might have compromising pictures of the executives. Or maybe they actually provide tangible value to your organization. Either way, these folks need access, but probably not to everything. Design the policy to give them only what they need, and nothing more.
  • Everyone else: If a person doesn’t fit into either of the other two buckets, you give them access, but not enough that they can hurt themselves (or you). That means email, but probably not VPN access to the corporate network.

These buckets are just examples – you’ll need to go through the use cases for each type of job function and see what levels of access make sense for your organization. (A toy sketch of what such a profile-to-access mapping might look like appears below.)

Yes, but…

As we mentioned above, your first instinct is likely to say ‘no’ when asked to support smartphones. But let’s tune the verbiage a bit and say “Yes, but” instead. After this easy mantra, go into all the reasons why it’s a bad idea for the user to have smartphone access to the organization’s sensitive stuff. You aren’t telling them no, but you are trying to convince them it’s a bad idea. But let’s acknowledge the truth: you’ll lose, and the requestor will get access. The goal of this exercise isn’t necessarily to win the argument (though being able to block someone’s access every so often is good for your self-esteem), but to get folks put into the right user profile buckets. Everyone wants access to everything. But we know that’s a bad idea, so success is really more about how many users (as a percentage of all smartphone users) have limited access. That number will vary by organization, but if it approaches 0% you need to practice “yes, but” a lot more.
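Here is that toy sketch in Python. The profile names and entitlements are purely illustrative – they mirror the three hypothetical buckets above, not any product’s policy format.

# Toy mapping of user profiles to mobile access entitlements. The buckets and
# flags are illustrative only -- use whatever profiles and resources fit your
# organization, and enforce them through your actual mail/VPN/MDM tooling.

ACCESS_PROFILES = {
    "executive":     {"email": True, "documents": True,  "vpn": True},
    "connected":     {"email": True, "documents": True,  "vpn": False},
    "everyone_else": {"email": True, "documents": False, "vpn": False},
}

def entitlements_for(profile):
    """Return the access flags for a profile, defaulting to the most restrictive."""
    return ACCESS_PROFILES.get(profile, ACCESS_PROFILES["everyone_else"])

print(entitlements_for("connected"))
# {'email': True, 'documents': True, 'vpn': False}

The point isn’t the code – it’s that access is keyed to the user profile, not to the device.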
Cover Your Hind Section

The last suggestion we’ll make relative to process is to ensure that you have documented the risks of supporting these devices. It’s critical to understand that our job as security professionals isn’t to stop business from happening – it’s to provide information to the decision makers so they can make rational, educated decisions. That means you need to inform them of the risks of whatever action they are going to take and push them to acknowledge those risks. If you fail to do this, you’ll be the one thrown out of the car at high speed when something goes wrong. Without ensuring clearly, and in writing, that everyone understands all the things that can go wrong by taking a particular action, you’ll end up in the proverbial creek without a paddle.

Acknowledge that you won’t like all the decisions. Your job is to protect information, and that requires reducing risk. Every company needs to take risks to continue to execute on its business plans. These two goals are diametrically opposed, but at the end of the day it’s not our job to decide what risks make sense for the business. It’s our job to make sure everyone is clear on what those risks are, and to enforce the decisions.

As helpful as it is to put users in specific profiles, there are still a number of things you can do technically to protect your organization from the iPocalypse. As we wrap up this series, we’ll go through a few and provide ideas for how to protect your smartphone-wielding employees from themselves.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.