Payment Trends and Security Ramifications

I write a lot about payment security. Mostly brief snippets embedded in our weekly Incite, but it’s a topic I follow very closely and remain deeply interested in. Early in my career I developed electronic wallet and payment gateway software for Internet commerce sites, and application-embedded payment options. I have been closely following the technical evolution of this market for over 15 years – back in the days of CyberCash, Paymentech, and JECF. But unlike many of the articles I write, payment security affects more than just IT users – it impacts pretty much everyone. And now is a very good time to start paying attention to the payment space, because we are witnessing more changes, coming faster than ever. Most of the changes are directly attributable to the disruptive nature of mobile devices: they not only offer a convenient new medium for payment, but they also threaten to reduce revenue and brand awareness of the major payment players. So issuing banks, payment processors, card brands, and merchants are all reacting in their own ways. The following are some highlights of trends I have been tracking:

1) Mobile Wallets: A mobile wallet is basically a payment app that authorizes payments from your phone. The app interacts with the point-of-sale terminal in one of several ways, including WiFi, image readers, and text message exchanges. While the technical approaches vary, payment is cleared without providing the merchant with a physical credit card, or even revealing a credit card or bank account number. Many credit card companies look on wallet apps as a way to ‘accelerate’ commerce and reduce consumer reticence to spend money – as credit cards did in the 70s.

The flip side is that many card brands are scared by all this. Some are worried about losing their brand visibility – you pay with your phone rather than their branded credit card, and your bill might come from your telephone company without a Visa or Mastercard logo or identification. Customers can choose a payment application and provider, so churn can increase and customer ‘loyalty’ is reduced. Furthermore, the app need not use a credit card at all – like a debit card, it could draw funds directly from a bank account. When you think about it, as a consumer, do you really care whether it is Visa or Mastercard or iTunes or PayPal, so long as payment is accepted and you get whatever you’re paying for? Sure, you may look for the Visa/Mastercard sticker on the register or door today, but when you and the merchant are both connected to the Internet, do you really care how the merchant processes your payment, so long as they accept your ‘card’ and your risk is no greater than today? When you buy something using PayPal you draw funds from your bank account, from your credit card, or from your PayPal balance – but you are dealing with PayPal, and your bank or credit card provider is barely visible in the transaction.

The threat of diminished revenue and diminished brand stickiness – on top of a global reduction in credit card use – is pushing card brands and payment processors into this market as fast as they can go. From what I see, security is taking a back seat to market share. Most of the wallets I review are designed to work now, minimizing software and hardware PoS changes to ensure near-term availability. Basic passwords and phone-presence validations will be in place, but these systems are designed with a security-second mentality.
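To make the tokenization idea concrete, here is a minimal sketch of how a wallet scheme can clear a payment without the merchant ever seeing the underlying account. This is an illustrative toy, not any vendor’s actual protocol – every name in it is invented, and a real scheme would need replay protection, token expiry, device binding, and much more.

```python
# Toy model of wallet tokenization: the merchant relays an opaque token;
# only the wallet and the processor share the secret and the account link.
import hashlib
import hmac
import os
import time

def issue_payment_token(wallet_secret: bytes, account_id: str) -> dict:
    """Wallet side: derive a single-use token bound to a fresh nonce."""
    nonce = os.urandom(16)
    mac = hmac.new(wallet_secret, nonce + account_id.encode(), hashlib.sha256)
    return {"token": mac.hexdigest(), "nonce": nonce.hex(), "ts": int(time.time())}

def authorize(processor_view: dict, token: dict, amount: float) -> bool:
    """Processor side: recompute the MAC from its own copy of the secret and
    the registered account; the merchant never handles either value."""
    expected = hmac.new(
        processor_view["wallet_secret"],
        bytes.fromhex(token["nonce"]) + processor_view["account_id"].encode(),
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, token["token"]) and amount <= processor_view["limit"]

# The merchant forwards only the opaque token along with the purchase amount.
secret = os.urandom(32)
processor_view = {"wallet_secret": secret, "account_id": "acct-42", "limit": 500.0}
print(authorize(processor_view, issue_payment_token(secret, "acct-42"), 19.99))  # True
```

The structural point is that the sensitive linkage between token and account lives entirely with the wallet provider and processor – which is exactly why the card brands worry about who owns that relationship.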
And just like the Chip & Pin systems I will discuss in a moment, mobile wallets could be more secure than physical cards or reading numbers over the phone, but the payment schemes I have reviewed are all vulnerable to specific threats – which might compromise the transaction, phone, or wallet app.

2) Smart Cards: These are the Chip & Pin – or Integrated Circuit – systems used widely in Europe. The technical standards are specified by the Europay-Mastercard-Visa (EMV) consortium. Merchants are being encouraged to switch to Chip & Pin with promises of reduced auditing requirements, contrasted against the threat of growing credit card fraud – but merchants know card cloning has been a problem for decades, and it has not been enough to get them to endorse smart cards. I recently discussed the issues surrounding smart cards in Say Hello to Chip and Pin, but I will recap here briefly. Smart cards are really about three things: 1) new revenue opportunities provided by multi-app cards for affinity group sales, 2) moving liability away from the processor and merchant and onto the consumer, and 3) compatibility with Chip & Pin hardware and software systems used elsewhere in the world. More revenue, less risk, and standardized hardware for multiple markets reduce costs through competition. And a merchant that invests in smart card PoS and register software is less likely to invest in payment systems that support mobile phones – creating PoS vendor and merchant lock-in. Once again, smart cards are marketed as advanced security – after all, it is harder to clone a smart card – despite ample proof that Chip & Pin is hackable. This is about revenue and brand: making more and keeping more. Incremental security benefits are just gravy for the parties behind Chip & Pin.

3) Debit Cards: Mobile wallets may change the debit card landscape. If small cash transactions are facilitated through mobile wallet payments, the need for pocket cash diminishes, as does the need to carry a branded debit card! This is important because, since the Fed cut debit card fees in half, many banks have been looking to make up lost revenue by charging debit card ‘privilege’ fees above and beyond ATM fees. Wells Fargo, for example, makes around 45% of their revenue on fees; this number will shrink under the new law – potentially by billions, across the entire industry. Charging $3 a month for debit card usage will push consumers to look for


Incite 9/14/2011: Mike and the Terrible, Horrible, No Good, Very Bad Day

I have been looking forward to this day… well, since the Falcons’ season was abruptly cut short by a rampaging Pack last January. We had a little teaser with that great game Thursday, and although both teams couldn’t lose, having the Saints drop a tough one was pretty okay. I weathered a tumultuous lockout during the offseason. Even a bumpy pre-season for both my teams (NY Giants and ATL Falcons) couldn’t deter my optimism. Pro football started Sunday and I was fired up.

The weekend was going swimmingly. I was able to survive a weekend with the Boss away with her girlfriends. With a little help from our friends, I was able to successfully get the Boy to his football practice, XX2 to her softball game, and both girls to dance practice Saturday. I got to watch a bunch of college football (including that crazy Michigan/Notre Dame game). The kids woke Sunday in a good mood when I got them ready for Sunday school. I got some work done and then got ready to watch the games at a friend’s house. Perfect. Until they started playing the games, that is.

The Falcons got crushed. Ouch. They looked horrible, and after all the build-up and expectations it was rather crushing. It was terrible for sure. I do this knock-out pool, where you pick one team a week and if they win, you move on. If they lose, you are out. You can’t pick the same team twice, and it’s a lot of fun. But I’ve shown my inability to get even the easiest games right – I have been knocked out in the first week 2 of the last 3 years. Of course, I picked Cleveland because Cincinnati is just terrible, with a new QB and all. Of course Cleveland lost and I’m out. Yeah, that’s horrible. Just horrible.

But things couldn’t get worse, right? The Giants were in Washington and they’ve owned the Redskins for years. Until today. The Giants have a ton of injuries, especially on defense. And it showed. They couldn’t stop a high school team. Their offense wasn’t much better. Man, tough day. Looking at the schedule, both teams dropping their games this week will hurt. Yup, that’s a no good day.

And to add insult to injury, as I’m mumbling to myself in the corner, the Boy comes downstairs with his Redskins jersey on. Just to screw with me. Seriously. I know I shouldn’t let an 8-year-old get under my skin, especially the day before his birthday, but I wasn’t happy. Maybe I’ll laugh about it by the time you read this on Wednesday, but while I’m writing this on Sunday night, not so much. I sent him upstairs with a simple choice: he could change his shirt, or I could insert a few metatarsals into his posterior region. It’s very bad when I can’t even handle a little chiding from my kids. It was a terrible, horrible, no good, very bad day.

But putting everything in context, it wasn’t that bad. I’ve got my health. I do what I love. My biggest problems are about getting everything done. Those are good problems to have. An embarrassment of good fortune, and I’ll take it. Especially given how many around the world were mourning the loss of not only loved ones, but their freedom, as we remember the 9/11 attacks. -Mike

Photo credits: “bad day” originally uploaded by BillRhodesPhoto

Incite 4 U

Design for FAIL: Part of the mantra of most security folks is to think like an attacker. You need to understand your adversary’s mindset to be able to defend against their attacks. There is some truth to that. But do you ever wonder why more security folks and technology product vendors don’t apply the same level of diligence when designing their products?
Mostly because it’s expensive, and it’s hard to justify changing things (especially the user experience) based on an attack that may or may not happen. Lenny Z makes a good point in his post Design Information Security With Failure in Mind, where he advocates taking lessons from shipbuilders. I’d put airplane manufacturers in the same boat. They intentionally push the limits, because people die if a cascade of failures sinks a ship. Do your folks do that with IT systems? With security? If not, you probably should. It’s not about protecting against a Black Swan, but about eliminating as much surprise as we can. That’s what we need to do. – MR

Jackass punks: No, this isn’t a diatribe against Lulzsec. Imagine you’re sitting at home and you start getting weird emails from some self-proclaimed degenerate who starts talking about showing up at your house. And you get emails from motels this person stayed at, holding you responsible for damages. And the person was on the lam from the law. Heck, they even have their own MySpace page. MySpace? Okay, that’s probably the first clue this is a scam, or a Toyota marketing campaign gone horribly wrong. Toyota set up a site where people could enter the personal details of their friends (or… anyone), who would then be subject to a serious Ashton Kutcher-style punking. Talk about insanely stupid. As much as we bitch about security marketing, this definitely takes the cake. While I don’t think $10M in damages is reasonable, Toyota certainly earned the lawsuit. – RM

Pay-nablement: It’s easy to do online payment. The trick is doing it securely, and I am not so sure the ‘Buyster’ payment system has done anything novel for security. Buyster links your phone number to a bank account. To use the service you need to enter your phone number and a password – what could go wrong? In return you get a payment token via a message, which you can then pass to a merchant. This model keeps the credit card number off the merchant site, but merchants would need to modify their systems to accept the token and link to the Buyster payment


Fact-Based Network Security: In Action

As we wrap up our series on Fact-Based Network Security, let’s run through a simple scenario to illustrate the concepts. Remember, the idea is to figure out what on the list will provide the biggest impact for your organization, and then do it. We make trade-offs every day. Some things get done, others don’t. That’s the reality for everyone, so don’t feel bad that you can’t get everything done. Ever. But the difference between a successful security practitioner and someone looking for a job is consistently choosing the right things to get done. Some folks intuitively know what’s important and seem to focus on those things. They exist – I’ve met them. They are rock stars, but when you try to analyze what they do, there isn’t really a pattern. They just know. Sorry, but you probably aren’t one of those folks. So you need a system – you know, a replicable process – to make those decisions. You may not have finely tuned intuition, but you can overcome that by consistently and somewhat ruthlessly getting the most important things done.

Scenario: WidgetCo and the Persistent Attacker

In our little story, you work for a manufacturer and your company makes widgets. They are valuable widgets, and represent intellectual property that most nations of the world (friend and foe alike) would love to get their hands on. So you know that your organization is a target. Your management gets it – they have a well-segmented network, with firewalls blocking access at the perimeter and another series of enclaves protecting R&D and other sensitive areas. You have IPS on those sensitive segments, as well as some full packet capture gear. Yes, you have a SIEM as well, but you are revisiting that selection. That’s another story for another day. Your users are reasonably sophisticated, but human. You run the security operations team, meaning that your folks do most of the management and configuration of security devices. Knowing that you are a target means you need to assume attackers have compromised your network. But your tight egress filtering hasn’t shown any significant exfiltration.

Your team’s task list seems infinite. There are a myriad of ports to open and close on the firewalls to support collaboration with specific business partners. Your company’s sales team needs access to a new logistical application so they can update customers on their shipments of widgets. And of course you are a large customer of a certain flavor of two-factor authentication token for all those reps. Your boss lights up your phone almost daily because she gets a lot of pressure to support those business partners. Your VP of Engineering is doing some cool stuff with a pretty famous research institution in the Northeast. The sales guys are on-site and don’t know what to tell the customer. And your egress filters just blocked an outbound attempt coming from the finance network, maybe due to the 2FA breach. What do you do? No one likes to be told no, but you can’t get everything done. How do you choose?

Get back to the risks

If you think back to how we define risk, it’s pretty straightforward. Which assets are most important? Clearly it’s the R&D information, which you know is the target of persistent attackers. Sure, customer information is important (to them), and finance information would make some hedge fund manager another billion or two, but it would be bad if the designs for the next-generation widget ended up in the hands of a certain nation-state. And when you think about the outcomes that are important to your business, protecting the company’s IP is the first and highest priority. It supports your billion-dollar valuation, and senior management doesn’t like to screw around with it.
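With the assets ranked and the outcomes clear, the prioritization call can be made explicit. Here is a hypothetical sketch of that ‘replicable process’ for the WidgetCo task list – the assets, weights, and scores are all invented for illustration, and the point is only that the ranking logic is written down and repeatable rather than intuitive:

```python
# Toy prioritization for the WidgetCo scenario: rank tasks by expected
# impact = asset value x threat likelihood x how much risk the task removes.
# All numbers are illustrative placeholders, not a prescribed formula.

ASSET_VALUE = {"rnd_enclave": 10, "finance_net": 7, "sales_apps": 4}

tasks = [
    {"name": "Build partner enclave for the research institution",
     "asset": "rnd_enclave", "threat": 0.9, "risk_reduction": 0.8},
    {"name": "Investigate outbound block on the finance network",
     "asset": "finance_net", "threat": 0.7, "risk_reduction": 0.6},
    {"name": "Open logistics application access for sales reps",
     "asset": "sales_apps", "threat": 0.3, "risk_reduction": 0.2},
]

def priority(task: dict) -> float:
    return ASSET_VALUE[task["asset"]] * task["threat"] * task["risk_reduction"]

for t in sorted(tasks, key=priority, reverse=True):
    print(f'{priority(t):5.2f}  {t["name"]}')
```

Whatever numbers you plug in, writing the trade-off down is what lets you defend saying no later.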
Thinking about the metrics that underlie various outcomes, you need to focus on indicators of compromise on those most sensitive networks. So gather configuration data and monitor the logs of those servers. Just to be sure (and to be ready if something goes south), you’ll also capture traffic on those networks, so you can React Faster and Better if and when an alert fires. It’s also a good idea to pay attention to the network topology and monitor for potential exposures, usually opened by a faulty firewall change or some other change error. Your operational system gathers this data on an ongoing basis, so when alerts fire you can jump into action.

Saying No

In our scenario, the R&D networks are most critical, pure and simple. So you task your operations team to provide access to the research institution as the top priority. Of course, not full unfettered access, but access to a new enclave where the researchers will collaborate. After your team makes the changes, you do a regression analysis using your network security configuration management tool, to make sure you didn’t open up any holes. No alerts fired and the report came back clean. So you are done at that point, right? We don’t think so. Given the importance of this network, you keep a subset of the ops team with their eyes on the monitors collecting server logs, IDS, and full packet capture data. You have also tightened the egress filters, just in case. Sure, some folks get grumpy when they are blocked, but you can’t take any chances. Without a baseline of the new traffic dynamics, and without a better feel for the log data, it’s hard to know what is normal and what could be a problem.

Admittedly this decision makes the VP of Sales unhappy, because his folks can’t get access to the logistical information. They’re forced to have a support team in HQ pull a report and email it to the reps’ devices. It’s horribly inefficient, as the VP keeps telling you. But that’s not all. You also haven’t been able to fully investigate the potential issue on the financial network, although you did install a full packet capture device on that network to start


Recently on the Heavy Feed

Since we post most of the content for our blog series on the Heavy Feed (get it via the web or RSS), every so often we like to post links to our latest missives on the main feed. Within the next 10 days we’ll be wrapping both our Fact-Based Network Security and Security Management 2.0 series. As always, we love feedback, discussion, dissension, and even the occasional troll in the comments, so fire away. We look forward to your participation.

Fact-Based Network Security

  • Metrics and the Pursuit of Prioritization
  • Defining ‘Risk’
  • Outcomes and Operational Data
  • Operationalizing the Facts
  • Compliance Benefits

Security Management 2.0: Is it time to replace your SIEM?

  • Time to Replace Your SIEM? (new series)
  • Platform Evolution
  • Revisiting Requirements
  • Platform Evaluation, Part 1
  • Platform Evaluation, Part 2
  • Vendor Evaluation – Culling the Short List
  • Vendor Evaluation – Driving the PoC


Security Management 2.0: Making the Decision

It’s time – you are ready. You have done the work: revisiting your requirements, evaluating your current platform against your current and emerging requirements, assessing new vendors/platforms to develop a short list, and running a comprehensive Proof of Concept. Now it’s time to make the call. We know this is an important decision – we are here because your first attempt at this project wasn’t as successful as it needed to be. So let’s break down the decision to ensure you can make a good recommendation and feel comfortable with it.

That’s actually a good point to discuss. The output of our Security Management 2.0 process is not really a decision – it’s more of a recommendation. That’s the reality – the final decision will likely be made in the executive suite. That’s why we have focused so much on gathering data (quantitative where possible) – you will need to defend your recommendation until the purchase order is signed. And probably even more afterwards. We won’t mince words. This decision generally isn’t about the facts – especially since there is an incumbent in play, which is likely part of a big company that may have important relationships with heavies in your shop. So you need your ducks in a row and a compelling argument for any change. But that’s still only part of the decision process. In many cases, the (perceived) failure of your existing SIEM is self-inflicted. So we also need to evaluate and explain the causes of the failed project, with assurance that they will be addressed and avoided this time. If not, your successor will be in the same boat in another 2-3 years. So before you put your neck on the chopping block and advocate for a change (if that’s what you decide), do some deep internal analysis as well.

Introspection

The first thing is to make sure you really re-examine the existing platform in terms of the original goals. Did your original goals adequately map to your needs at the time, or was there stuff you did not expect? How have your goals changed over time? Be honest! This is not the time to let your ego get in the way of doing what’s right, and you need a hard and fresh look at the decision to ensure you don’t repeat previous mistakes. Did you kick off this process because you were pissed at the original vendor? Or because they got bought and seemed to forget about the platform? Do you know what it will take to get the incumbent to where it needs to be – or whether that is even possible? Is it about throwing professional services at the issues? Is there a fundamental technology problem? Remember, there are no right or wrong answers here, but the truth will become clear when you need to sell this to management. Some of you may be worried that management will look at the need for replacement as ‘your fault’ for choosing the incumbent, so make sure you have answers to these questions and that you aren’t falling into a self-delusion trap. You need your story straight and your motivations clear. Did you assess the issues critically the first time around? If it was a skills issue, have you addressed it? Can your folks build and maintain the platform moving forward? Or are you looking at a managed service to take that concern off the table? If it was a resource problem, do you now have enough staff for proper care and feeding? Yes, the new generation of platforms requires less expertise to keep operational, but don’t be naive – no matter what any sales rep says, you cannot simply set and forget them. Whatever you pick will require expertise to deploy, manage, tune, and analyze reports. These platforms are not self-aware – not by a long shot. The last thing you want to do is set yourself up for failure, so make sure you ask the right questions ahead of time and be honest about the answers.
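Because the recommendation will be defended with data, it can help to boil the comparison down to an explicit weighted scorecard. The sketch below is a hypothetical example – the criteria, weights, and scores are placeholders you would replace with your own prioritized requirements and PoC results:

```python
# Hypothetical weighted scorecard for the incumbent-vs-challenger decision.
# Weights reflect your prioritized use cases; scores (1-5) come from the PoC.

weights = {"compliance": 0.30, "security_alerting": 0.30,
           "forensics": 0.20, "ops_burden": 0.20}

scores = {
    "Incumbent SIEM": {"compliance": 4, "security_alerting": 2,
                       "forensics": 2, "ops_burden": 3},
    "Challenger A":   {"compliance": 4, "security_alerting": 4,
                       "forensics": 5, "ops_burden": 3},
}

for vendor, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{vendor}: {total:.2f}")
```

A scorecard won’t make the call for you – but it forces the priorities into the open, and gives you something concrete to defend in the executive suite.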
Expectations

The next main aspect of the decision is reconciling your expectations with reality. Revisiting requirements provides information on what you need the security management platform to do. You should be able to prioritize the specific use cases (compliance, security, forensics, operations), and have a pretty good feeling about whether the new platform or the incumbent will be able to meet your expectations. Remember, not everything is Priority #1, so pick your top three must-have items and prioritize the requirements. If you are enamored with some new features of the challenger(s), will your organization actually be able to leverage them? Firing off alerts faster may not be helpful if your team takes a week to investigate any issues, or cannot keep up with the increased demand. The new platform’s ability to look at application and database traffic doesn’t matter if the developers won’t help you understand normal behavior to build the rule set. Fancy network flow analysis can be a productivity sink if your DNS and directory infrastructure is a mess and you can’t reliably map IP addresses to user IDs.

Or does your existing product have too many features? Yes, it does happen that some organizations simply cannot take advantage of (or even handle) complex multi-variate correlation across the enterprise. Do you need to aggregate logs because organizational politics, or your team’s resources or skill set, prevent you from getting the job done? This might be a good reason to outsource or use a managed service. There isn’t a right or a wrong answer here, only the answer. And not being honest about that answer will land you in the hot seat again. If you kickstarted this effort because the existing product missed something and it resulted in a breach, can you honestly say the new thing would (not ‘might’) detect that attack? We have certainly seen high-profile breaches result in tossing the old and bringing in the new (someone has to pay, after all), but make sure you


Friday Summary: September 9, 2011

I suppose that, all things considered, I’m a pretty nice guy. I tip well, stop my car so people can cross the street, and always put my laptop bag under the seat in front of me, instead of taking up valuable overhead luggage space. While I have had plenty of jobs that required the use of physical force over the years, I always made sure to keep my professional detachment and use the minimum amount necessary. (Okay, that’s to keep my ass out of jail as much as anything else, but still…). And animals? I’m a total sucker for them. I don’t mean in an inappropriate way, but I think they are just so darn cute. We even donate a bunch to local shelters and the Phoenix Zoo. Heck, all our cats are basically rescues… one of which randomly showed up in a relative’s yard during a BBQ, severely injured, and which we nursed back to health and kept.

Which is why my current murderous rampage against the birds crapping on our patio is completely out of character. We like birds. We even used to fill a bird feeder in the yard. Then all our trees grew out, and it seems we have the best shade in the neighborhood. On any given day, once the temperature tops 100 or so, our back patio is covered with dozens of birds doing nothing more than standing in the shade and crapping. And you know what birds eat, don’t you? Berries. Lots and lots of berries. Think they digest it all? Think again. Our patio is stained so badly we will never be able to get it clean. How do I know? I paid someone to power spray and hand scrub it with the kinds of chemicals banned from Fukushima – all to no avail. Not even with the special stuff I smuggled across the border from Mexico. They’ve even hit my grill. The bastards. I’ve tried all sorts of things to keep them away, but I suspect I’ll need to build out something using an Arduino and a chainsaw by next summer. This year is a loss – 2 weeks after the big cleaning, even with me spraying it down every few days, our patio is unusable. I haven’t killed them yet. To be honest I don’t think that will work – more likely it would just land me on the local news. But I do grill a lot more chicken and turkey out there. Oh yeah, smell the sweet smell of superior birds roasting in agony.

Hey… did you hear some dudes named DigiNotar got hacked? On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s DR article on DAM.
  • Adrian quoted on dangers to law enforcement from the recent hack. My Spanish is good, no?
  • Adrian’s DR article on Fraud Detection and DAM.

Favorite Securosis Posts

  • Adrian Lane: Security Management 2.0: Vendor Evaluation. Mike’s pushing the envelope here, but this is the only way to figure out how the product really works.
  • Mike Rothman & David Mortman: Data Security Lifecycle 2.0. With this cloud stuff, our underlying computing foundation is changing. This post assembles a lot of the latest and greatest about how to protect the data.

Other Securosis Posts

  • Speaking at OWASP: September 22 and 23.
  • Incite 9/7/2011: Decisions, Decisions.
  • Security Management 2.0: Vendor Evaluation – Culling the Short List.
  • The New Path of Least Resistance.
  • Making Bets.

Favorite Outside Posts

  • Gunnar: Do we know how to make software?
  • David Mortman: Quick Blip: Hoff In The Cube at VMworld 2011: On VMware Security.
  • Mike Rothman: The Good, Bad, and Ugly of Technical Acquisitions. Not sure what Amrit is doing now, besides writing great summaries of what happens when Big Company X buys small start-up Y.
  • Adrian Lane: Don’t Hate The ‘Playas’ – Hate The Game. My fav this week is Mike’s Dark Reading post – it gets to the heart of the issue.
  • Pepper: Protecting a Laptop from Simple and Sophisticated Attacks. Mike clearly thought hard about the risks, and took some very unusual steps to protect against them as well as he could manage.
  • Rich: OS X won’t let you properly remove bad DigiNotar certificates. I know I need to write this up, but being sick has gotten in the way. Apple really needs to address this – for PR reasons as much as for user security.

Research Reports and Presentations

  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.

Top News and Posts

  • Copyright Troll Righthaven Goes on Life Support. Die, troll, die!
  • Star Wars Fans Get Pwned.
  • Fraudulent Google credential found in the wild.
  • Evidence of Infected SCADA Systems Washes Up in Support Forums.
  • VMware: The Console Blog: VMware Acquires PacketMotion.
  • Don Norman: Google doesn’t get people, it sells them.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Russ, in response to Incite 9/7/2011: Decisions, Decisions.

Re Please Stop! Dear Adrian, While I believe one of the useful roles Securosis can play in the industry is to help turn down the hype on over-blown issues, in this particular case I’m not sure I agree with your conclusion. I spent a career in aviation safety, and found that what the average line pilot was talking about every day had nowhere near the amount of aviation safety content we as aviation safety advocates thought to be adequate (an example would be the extraneous cockpit conversation prior to the Colgan Air Flight 3407 crash in Buffalo). Could it be that the fact APTs is not brought up in your daily conversations with firms could be an indication of how far we have to go in creating a


Security Management 2.0: Vendor Evaluation – Driving the PoC

As we discussed in the last post, when considering new security management platforms it’s critical to cull your short list based on your requirements, and then move into the next step of the evaluation process – the Proof of Concept (PoC). Our PoC process is somewhat controversial – mostly because vendors hate it. Why? Because it’s about you and your needs, not them and their product. But you are the buyer, right? Always remember that.

Most SIEM vendors want to push you through a 3-5 day eval of their technology on their terms, with their guy driving. You already have a product in place, so you know the drill. You defined a few use cases important to you, and then the vendor (and their SE) stood the product up and ran through those use cases. They brought in a defined set of activities for each day, and you ended the test with a good idea of how their technology works, right? Actually, wrong. The vendor PoC process is built to highlight their product’s strengths and hide its weaknesses. We know this from firsthand experience – we have built them for vendors in our past roles. Your objective must be to put the product through your paces, not theirs. To find the warts now – not when you are responding to an incident. It’s wacky that some vendors get scared by a more open PoC process, but their goal is to win the deal, and they put a lot of sweat into scripting their process so it goes smoothly for everyone involved. We hate to say it, but smooth sailing is not the point! The vendor will always say “We can do that!” – it’s your job to find out how well – or how awkwardly.

So set up evaluation criteria based on your requirements and use cases. Your criteria don’t need to be complicated. Your requirements should spell out the key capabilities you need; then plan to further evaluate each challenger on intangibles such as set-up/configuration, change management, customization, and user experience/ease of use. Before you start, have your team assess your current platform as a basis for comparison. As you start the PoC, we recommend you invest in screen capture technology. It’s hard to remember later what these tools did and how they did it – especially after you’ve seen a few of them work through the same procedures. So capture as much video of the user experience as you can – it will come in very handy when you need to make a decision. We’ll discuss that in the next post. Without further ado, let’s jump into the PoC.

Stand it up, for reals

One of the advantages of testing security management products is that you can actually monitor production systems without worrying about blowing them up, taking them down, or adversely impacting anything. So we recommend you do just that. Plan to pull data from your firewalls, your IDS/IPS systems, and your key servers. Not all devices, of course, but enough to get a feel for how you need to set up the collectors. You will also want to configure a custom data source or two and integrate with your directory store to see how that works. Actually do a configuration and bootstrap the system in your environment. Keep in mind that the PoC is a great time to get some professional services help – gratis. This is part of the sales process for the vendors, so if you want to model out a targeted attack and then enumerate the rules in the system, have the SE teach you how to do it yourself. Then model out another attack and build the rules yourself, without help. The key is to learn how to run the system and to get comfortable – if you do switch, you will be living with your choice for a long time.
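While you bootstrap each challenger, it helps to have a repeatable way to feed it test events as you configure collectors. As a hedged example of what that can look like, the Python sketch below uses the standard library’s SysLogHandler to push sample firewall-style messages at a collector – the collector address and message format are placeholders for whatever your PoC environment actually uses:

```python
# Hypothetical PoC helper: generate sample syslog events so you can verify
# collection, parsing, and alert thresholds before wiring up real devices.
import logging
import logging.handlers

COLLECTOR = ("10.0.0.50", 514)  # placeholder: your collector's UDP syslog endpoint

logger = logging.getLogger("poc-feeder")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=COLLECTOR))

# Fake firewall-deny events; varying the source address gives the
# correlation rules something to chew on.
for i in range(20):
    logger.info("FW-DENY src=192.0.2.%d dst=10.0.0.5 dport=445 proto=tcp", i + 1)
```

Replaying a known, labeled stream like this against every challenger also gives you an apples-to-apples basis for comparing what each one actually caught.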
Focus on visualization – your view into the system. Configure some dashboards and see the results. Mess around with the reports a bit. Tighten the thresholds of the alerts. Does the notification system work? Will the alerts be survivable at production levels for years? Is the information useful? These are all things you need to do as part of kicking each challenger’s tires. If compliance is your key requirement, use PCI as an example. Start pulling data from your protected network segment. Pump that data through the PCI reporting process. Is the data correct and useful for everybody with an interest? Are the reports comprehensive? Will you need to customize the report for any reason? You need to answer these kinds of questions during the PoC.

Run a Red Team

Run a simulated attack against yourself. We know actually attacking production systems would make you very unpopular with the ops folks, so set up a lab environment. But otherwise, you want this situation to be as realistic as possible. Have attackers breach test systems with attack tools. Have your defenders try to figure out what is going on, as it’s happening. Does the system alert as it should? Will you need to heavily customize the rule set? Can you identify the nature of the attack quickly? Does their super-duper forensic drill-down give you the view you need? The clock is ticking, so how easy is it to use the system to search for clues? Obviously this isn’t a real incident situation, so you’ll take some editorial liberties, and that’s fine. You want a feel for how the system performs in near-real time. If an attacker is in your systems, will you find them? In time to stop or catch them? Once you know they are there, can you tell what they are doing? A Red Team PoC will help you determine that.

Do a Post-Mortem

Once you are done with the Red Team exercise, you should have a bunch of data that will make for a nice forensic investigation of what the attack team did, and perhaps what the defense team


Speaking at OWASP: September 22 and 23

Gunnar Peterson and I will be presenting at OWASP AppSec USA, which runs September 20-23rd at the Minneapolis Convention Center in – you guessed it – Minneapolis, Minnesota. This year’s theme is “Your life is in the cloud”, so there are plenty of talks on mobile app security and how to weave security into your cloud environment. Gunnar is presenting on Mobile Web Services, discussing mobile application vulnerabilities in the web services layer. I’ll be presenting CloudSec 12-Step, a look at foundational security precautions developers need to consider when building and deploying cloud applications. They have scheduled many other great talks as well. And personally, I am willing to bet autumn weather in Minnesota will be awesome! Okay, perhaps my perspective is skewed – Arizona just set a record for the hottest August in history, with some 33 days this summer over 110 degrees – but regardless, Minnesota should be very nice. Come by and check out the presentations. As always, we look forward to seeing friends – shoot us an email if you want to meet up that week.


Incite 9/7/2011: Decisions, Decisions

Making decisions is very hard for most people. Not for me. The Boss and I constantly discuss a single issue over and over again as she debates all aspects of a big decision. I try to be patient, but patience is, uh, not my forte. I know it’s her process, and rushing it usually lands me a spot in the doghouse, but it’s still hard to understand. Decisions are easy for me. I do the work, look at the upside and downside, and make the call. Next. I don’t look back either. When I make a decision, I’m pretty confident it’s the right thing to do at that point in time. That’s the key. Any decision any of us make at any time is presumably the best decision right then. 10 minutes or 10 years from now, things will have changed. Things always change. The question is how much.

Sometimes you’ll find your decisions are wrong. Actually, often your decisions are wrong. Yeah, it’s that human thing. I’ve been known to weigh intuition higher than data in some decisions. Especially relative to my career choices. If it felt right, whatever that means, I would go for it. And I’ve been wrong in those choices, a lot. But I guess I come from the school that says it’s better to do stuff and screw up than to not do anything – stuck in a cycle of analysis paralysis. I’m sure I’ll have regrets at some point, but it won’t be because I couldn’t make a decision. It’s worth mentioning that I’m not opposed to revisiting a decision, but only if something has changed that affects my underlying assumptions. Lots of folks stew over a decision, poring over the same data over and over again, in an endless cycle of angst and second guessing. If the data doesn’t change, neither should the decision. But these folks figure that if they question themselves constantly for long enough, the decision will become easy. But often, they never achieve peace of mind. Gosh, that has to be hard.

I pay a lot more attention to the downside of any decision. In most cases, the worst case scenario is you upset someone or waste time and/or money. Obviously I want to avoid those outcomes where possible, but those are manageable downsides for me. So I don’t obsess over decisions. I make the decision and I move on. Second guessing isn’t productive. Part of life is taking risks and adapting as needed. And cleaning up the inevitable mess when you are wrong. I’m okay with that. -Mike

Photo credit: “Lose your sleep before your decision, not after it” originally uploaded by Scott McLeod

Incite 4 U

Liar, liar, pants on fire: Any time I catch my kids telling me less than the truth, I break into the “Liar, liar” refrain over and over again. Yes, I look stupid, but they hate it even more, so it’s worth doing. One of the (former) Anonymous folks pretty much pinpoints the fundamental skill set of social engineering – lying. Okay, there is grey around lies, but ultimately that’s what it is. Does that make the ability to defend against lies any less important? Of course not. Nor am I judging folks who practice social engineering daily and professionally. But if it walks and quacks like a duck, you might as well call it a duck. – MR

Misplaced confidence: There will be a lot written over the next weeks and months about the hack of the Certificate Authority DigiNotar, including a post I’m working on. But if you want to quickly learn a key lesson, check out these highlights from the investigation report – thanks to Ira Victor and the SANS forensics blog. No logging. Flat network. Unpatched Internet-facing systems. Total security fundamentals FAIL.
Even better, they kept the breach hidden for a month, and the breach probably happened many months earlier than their claimed date. Keep in mind this was a security infrastructure company. You know, the folks who are supposed to be providing a measure of trust on the Internet, and helping others secure themselves. Talk about making most of the mistakes in the book! And BTW – as I’ve said before, I know for a fact other security companies have been breached in recent years and failed to disclose. How’s that for boosting consumer confidence? – RM

They stole what?: When it comes to breach notification laws, California has been at the forefront for more than a decade. Now California has updated its breach disclosure laws to require disclosure of additional incident data. Most firms adhering to breach notification laws include so little information that the recipients of a breach notification have no clue what it means to them, nor what steps they need to take to protect themselves. Credit monitoring services are more of a red herring – and occasionally a devious revenue opportunity for breached companies to offset notification costs. So California Senate Bill 24 (SB-24) requires companies to include additional information on what happened, and to explicitly state what type of data was leaked. Will it help? As usual, it depends on what the company decides to put in the letter, but I don’t have high hopes. Will security vendors be pitching monitoring software to help companies identify what was stolen? Absolutely, but many firms’ legal teams will not be eager to have that data hanging around because it’s often a smoking gun, and they will choose ignorance over security to reduce liability. As they always do. – AL

Ethics, hypocrisy, and certifications: You have to hand it to Jericho, one of the drivers of attrition.org. He puts the time in to build somewhat airtight cases, usually turning folks’ words against them in interesting ways. I wouldn’t want to take him on in a debate, that’s for sure. His recent post at Infosec Island, clearly pointing out the hypocrisy of the CISSP folks, is a hoot. As usual, you can find all


Data Security Lifecycle 2.0

We reference this content a lot, so I decided to compile it all into a single post. This is the original content, including internal links, and has not been re-edited.

Introduction

Four years ago I wrote the initial Data Security Lifecycle and a series of posts covering the constituent technologies. In 2009 I updated it to better fit cloud computing, and it was incorporated into the Cloud Security Alliance Guidance, but I have never been happy with that work. It was rushed and didn’t address cloud specifics nearly sufficiently. Adrian and I just spent a bunch of time updating the cycle, and it is now a much better representation of the real world. Keep in mind that this is a high-level model to help guide your decisions, but we think this time around we were able to identify places where it can more specifically guide your data security endeavors. (As a side note, you might notice I use “data security” and “information-centric security” interchangeably. I think infocentric is more accurate, but data security is more recognized, so that’s what I tend to use.)

If you are familiar with the previous model you will immediately notice that this one is much more complex. We hope it’s also much more useful. The old model really only listed controls for data in different phases of the lifecycle – it didn’t account for location, ownership, access methods, and other factors. This update should better reflect the more complex environments and use cases we tend to see these days. Due to its complexity, we need to break the new Lifecycle into a series of posts. In this first post we will revisit the basic lifecycle, and in the next post we will add locations and access.

The lifecycle includes six phases, from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed).

  • Create: This is probably better named Create/Update, because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
  • Store: Storing is the act of committing digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
  • Use: Data is viewed, processed, or otherwise used in some sort of activity.
  • Share: Data is exchanged between users, customers, and partners.
  • Archive: Data leaves active use and enters long-term storage.
  • Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).

These high-level activities describe the major phases of a datum’s life, and in a future post we will cover security controls for each phase. But before we discuss controls we need to incorporate two additional aspects: locations and access devices.

Locations and Access

In our last post we reviewed the Data Security Lifecycle, but other than some minor wording changes (and a prettier graphic thanks to PowerPoint SmartArt) it was the same as our four-year-old original version. But as we mentioned, quite a bit has changed since then, exemplified by the emergence and adoption of cloud computing and increased mobility. Although the Lifecycle itself still applies to basic, traditional infrastructure, we will focus on these more complex use cases, which better reflect what most of you are dealing with on a day to day basis.
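Before digging into locations, it may help to see the basic lifecycle written down as a data model. The sketch below is our own illustrative encoding, not part of the model itself – it just captures the six phases and the rule that data can bounce between them without restriction and need not reach Destroy:

```python
# Illustrative encoding of the Data Security Lifecycle phases. Transitions
# are deliberately unrestricted: data can bounce between phases, repeat
# them, and may never reach Destroy.
from enum import Enum

class Phase(Enum):
    CREATE = "create"    # create or update a data/content element
    STORE = "store"      # commit to a repository, usually right after create
    USE = "use"          # viewed or processed in some activity
    SHARE = "share"      # exchanged with users, customers, and partners
    ARCHIVE = "archive"  # leaves active use for long-term storage
    DESTROY = "destroy"  # permanent physical or digital destruction

class DataElement:
    def __init__(self, name: str):
        self.name = name
        self.history = [Phase.CREATE]  # every element starts with creation

    def move(self, phase: Phase) -> None:
        if self.history[-1] is Phase.DESTROY:
            raise ValueError("destroyed data has no further lifecycle")
        self.history.append(phase)

doc = DataElement("widget-design.cad")
for p in (Phase.STORE, Phase.USE, Phase.SHARE, Phase.USE, Phase.ARCHIVE):
    doc.move(p)  # phases repeat and need not be linear
```

Modeling it this way makes the next step natural: each location gets its own copy of this little lifecycle, and the security question becomes what controls apply when data moves between them.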
Locations

One gap in the original Lifecycle was that it failed to adequately address movement of data between repositories, environments, and organizations. A large amount of enterprise data now transitions between a variety of storage locations, applications, and operating environments. Even data created in a locked-down application may find itself backed up someplace else, replicated to alternative standby environments, or exported for processing by other applications. And all of this can happen at any phase of the Lifecycle. We can illustrate this by thinking of the Lifecycle not as a single, linear operation, but as a series of smaller lifecycles running in different operating environments. At nearly any phase data can move into, out of, and between these environments – the key for data security is identifying these movements and applying the right controls at the right security boundaries. As with cloud deployment models, these locations may be internal, external, public, private, hybrid, and so on. Some may be cloud providers, others traditional outsourcers, or perhaps multiple locations within a single data center. For data security, at this point there are four things to understand:

  • Where are the potential locations for my data?
  • What are the lifecycles and controls in each of those locations?
  • Where in each lifecycle can data move between locations?
  • How does data move between locations (via what channel)?

Access

Now that we know where our data lives and how it moves, we need to know who is accessing it and how. There are two factors here:

  • Who accesses the data?
  • How can they access it (device & channel)?

Data today is accessed from all sorts of different devices. The days of employees only accessing data through restrictive applications on locked-down desktops are quickly coming to an end (with a few exceptions). These devices have different security characteristics and may use different applications, especially with applications we’ve moved to SaaS providers – who often build custom applications for mobile devices, which offer different functionality than PCs. Later in the model we will deal with who, but the diagram below shows how complex this can be – with a variety of data locations (and application environments), each with its own data lifecycle, all accessed by a variety of devices in different locations. Some data lives entirely within a single location, while other data moves in and out of various locations… and sometimes directly between external providers. This completes our “topographic map” of the Lifecycle. In our next post we will dig into mapping data flow and controls. In the next few posts we will finish covering background material, and


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless they are used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.