Securosis

Research

Friday Summary: Halloween 2013 Edition

  While you’re thinking about little kids in scary costumes, I’m here thinking about adults who write scary code. As I go through the results of a couple different companies’ code scans I am trying to contrast good vs. bad secure development programs. But I figure I should ask the community at large: What facet of your secure software development program has been most effective? Can you pinpoint one?

For many years I felt placing requirements within the development lifecycle (i.e., process modifications) yielded the greatest returns. I have spoken with many development teams over the past year who said that security awareness training was the biggest benefit, while others most appreciated threat modeling. Still others claimed that external penetration testing or code scans motivated their teams to do better, learn more about software defects, and improve internally. The funny bit is that every team states one of these events was the root cause which raised awareness and motivated changes. Multiple different causes for the same basic effect.

I have been privy to the results from a few different code scans at different companies this summer; some with horrific results, and one far better than I could have ever expected, given the age and size of the code base. And it seems the better the results, the harder the development team takes external discoveries of security defects. Developers are proud, and if security is something they pride themselves on, defect reports are akin to calling their children ugly.

I am typically less interested in defect reports than in understanding the security program in general. Part of my interest in going through each firm’s secure development program is seeing what changes were made, and which the team found most beneficial. Once again, the key overall benefit reported by each team varies between organizations. Many say security training, but training does not equal development success.
Others say “It’s part of our culture”, which is a rather meaningless response, but those organizations do a bit of everything, and they scored considerably better on security tests. It is now clear to me, despite my biases for threat modeling and process changes, that for organizations that have been doing this a while, no single element or program makes the difference. It is the cumulative effect of consistently making security part of code development. Some event started the journey, and – as with any skill – time and effort produced improvement. But overall, improvement in secure code development looks glacial. It is a bit like compound interest: what appears minuscule in the beginning becomes huge after a few years. When you meet up with organizations that have been at it for a long time, it is startling to see just how well the basic changes work.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Dave Lewis’s CSO post: “LinkedIn intro: Data, meet security issues”.
- Juniper blog quotes Adrian on DB-DoS.
- Adrian’s DR post: Simple is better.
- Gunner on The Internet-of-Things.

Favorite Securosis Posts
- David Mortman: Don’t Mess with Pen Test(ers).
- Adrian Lane: Thinking Small and Not Leading. It is unfortunately common to discover that a job is quite different than you thought. And how best to accomplish your goals often involves several rounds of trial and error.
- Mike Rothman: The Pragmatic Guide to Network Security Management: The Process. Rich had me at Pragmatic…

Other Securosis Posts
- The Pragmatic Guide to Network Security Management: SecOps.
- Incite 10/30/2013: Managing the Details.
- New Series: The Executive Guide to Pragmatic Network Security Management.
- Summary: Planned Coincidence.

Favorite Outside Posts
- Dave Lewis: Buffer Hacked.
- David Mortman: Adventures in Dockerland. Not a security article but something for security to keep in mind. Docker is making big inroads in the cloud, especially PaaS, so you need to understand it.
- Adrian Lane: Big Data Analytics: Starting Small. A short post with pragmatic advice.
- Mike Rothman: Time doesn’t exist until we invent it. As always, interesting perspective from Seth Godin about time… “Ticking away, the moments that make up a dull day…”
- Gal: Fake social media ID duped security-aware IT guys.

Research Reports and Presentations
- Firewall Management Essentials.
- A Practical Example of Software Defined Security.
- Continuous Security Monitoring.
- API Gateways: Where Security Enables Innovation.
- Identity and Access Management for Cloud Services.
- Dealing with Database Denial of Service.
- The 2014 Endpoint Security Buyer’s Guide.
- The CISO’s Guide to Advanced Attackers.
- Defending Cloud Data with Infrastructure Encryption.
- Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.

Top News and Posts
- Kristin Calhoun Keynote at API Strategy and Practice
- WhiteHat has a new secure browser; what does the Firefox say? via Wendy Nather.
- A More Agile Healthcare.gov
- NSA Chief: ‘To My Knowledge’ Agency Didn’t Tap Google, Yahoo Data Centers
- Mozilla Shines A Light With Lightbeam
- Alleged Hacker V& In DOE, HHS Breaches
- MongoHQ Suffers Security Breach

Blog Comment of the Week

This week’s best comment goes to Zac, in response to Don’t Mess with Pen Test(ers).

As you say, we try not to focus on or fixate on the potential risks. There are however ways to mitigate or reduce the risk. Foremost for me is to consider any and all electronic transactions to be accessible, and therefore never put anything I want to keep private into electronic records. Just like how in the past you wouldn’t speak of things you wanted to keep private, today you don’t post it (Facebook is training people to do all the wrong things). And when you consider that medical offices, tax agencies, government agencies, and companies all either experience breaches or just plain send your information to the wrong people… let alone attackers working at getting your information. Or how snail mail can end up in the wrong mailbox… One may as well stay home due to a fear of being hit by a car while walking the dog. tl;dr – if you want to keep something private… keep it to yourself.


The Pragmatic Guide to Network Security Management: SecOps

  This is part 3 in a series. Click here for part 1, or submit edits directly via GitHub.

Workflows: from Sec and Ops to SecOps

Even mature organizations occasionally struggle to keep security aligned with infrastructure, but low-friction processes that don’t overly burden other areas of the enterprise reduce both errors and deliberate circumvention. Frequently the problem manifests as a lack of communication between network security and network operations – not out of antagonism, but simply due to different priorities, toolsets, and issues to manage on a day-to-day basis. A seemingly minor routing change, or the addition of a new server, can quietly expose the organization to new risks if security defenses aren’t coordinated. On the other hand, security can easily break things and create an operational incident with a single firewall rule change. Efficient programs don’t just divide up operational responsibilities – they implement workflows where each team does what it is best at, while still communicating cleanly and effectively with the others. Here are examples of four integrated operations workflows:

- Network topology changes: Changes to the topology of the network have a dramatic impact on the configuration of security tools. The workflow consists of two tracks – approved changes and detected changes. For approved changes the network team defines the change and submits it to security for review. Security analyzes it for impact, including any risk changes and required security updates. Security then approves the change for operations to implement. Some organizations even have network operations manage basic security changes – mostly firewall rule updates. A detected change goes through the same analysis process but may require an emergency fix or communications with the network team to roll back the change (and obviously requires ongoing monitoring for detection in the first place).
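The two-track topology-change workflow could be sketched as a simple routing function. This is a hypothetical illustration only – the type and step names are made up, not part of any product or the original process:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    description: str
    source: str  # "approved" (submitted by network ops) or "detected" (caught by monitoring)

def route_change(change: ChangeRequest) -> list:
    """Return the ordered workflow steps for a network topology change."""
    steps = ["security_impact_analysis"]  # both tracks start with a security review
    if change.source == "approved":
        # Planned change: security signs off, ops implements, security validates.
        steps += ["security_approval", "ops_implements", "security_validates"]
    else:
        # Change detected outside the approved process: fix or roll back,
        # then coordinate with the network team.
        steps += ["emergency_fix_or_rollback", "coordinate_with_network_ops"]
    return steps
```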
In both cases it can be helpful to integrate the process into your change management or workflow tool to automatically route tasks.

- Business exemption or change requests: Occasionally a business unit will need a change to network security. Many of these come through network operations, but quite a few come from application teams or business units themselves for particular projects. The same basic process is followed – the change request comes in, is analyzed for risks and required changes, and then approved, implemented, and validated. As before, you should also plan to monitor for and manage unapproved changes, which is where application-aware monitoring is particularly helpful. Also consider building a portal for business units to submit and track requests, rather than handling them through email or spreadsheets.

- New assets and applications: Similar to a business exemption or change request, but focused on new projects and assets rather than creating a special exemption to existing policy. There may be more planning, earlier in the process, with a lot more people involved. Develop a two-track process – one for new applications or assets that are fairly standard (e.g., a business unit file server or basic web application), which can be more automated, and a second for larger programs such as major new applications.

- New security tools or policy changes: Adding a new security tool or policy change reverses the workflow, so the responsibility is now on the security team to initiate communications with network operations and other affected teams. Security should first analyze the change and potential downstream impacts, then work with teams to determine operational risks, timelines, and any other requirements.

Conclusion

Network security management isn’t easy, but there are more and less efficient ways to handle it. Knowing your posture and maintaining visibility are key, as are developing core workflows to bridge gaps between different operational teams.
Network security operations monitors the environment and change requests, adapting the security posture as needed in a timely manner. It watches for changes that slip in outside approved processes, develops workflows to handle the unexpected, and responds quickly when changes are requested to support other business areas. Finally, network security understands that security policy changes impact other operations, and the need to analyze and communicate those potential implications. It is not always easy, but it is far more efficient and effective than the alternatives, and frees up the security team to focus on what it is best at.


Don’t Mess with Pen Test(ers)

Almost everyone you know is blissfully unaware of the digital footprints we all leave, and how that information can be used against us. The problem is that you understand, and if you spent much time thinking about it you’d probably lose your mind. So as a coping mechanism you choose not to think about how you could be attacked, or how your finances could be wrecked, if targeted by the wrong person.

Just in case you didn’t have enough to worry about today, you can check out this great first-person account of a personal pen test on Pando Daily. An NYU professor challenged the folks at Spider Labs to take a week and find out what they could about him. It wasn’t pretty. But then again, you knew that’s how the story would end.

What I learned is that virtually all of us are vulnerable to electronic eavesdropping and are easy hack targets. Most of us have adopted the credo “security by obscurity,” but all it takes is a person or persons with enough patience and know-how to pierce anyone’s privacy – and, if they choose, to wreak havoc on your finances and destroy your reputation.

The story details the team’s attempts to gain a presence on his network and then his devices. They finally took the path of least resistance: his wife. The tactics weren’t overly sophisticated, but once armed with some basic information it was game over. The pen testers gained access to his bank accounts, brokerage information, phone records, and the like.

What do we accomplish by reminding ourselves of the risks of today’s online life? Nothing. You know the risks. You take the risks. The benefits outweigh the risks. And now I’ll crawl back into my fog to become once again blissfully unaware.


Incite 10/30/2013: Managing the Details

  As I wrote a few weeks ago, everyone has their strengths. I know that managing the details is not one of mine. In fact I can’t stand it, which is very clear as we prepare for our oldest daughter’s Bat Mitzvah this weekend. It’s a rite of passage signaling the beginning of adulthood. I actually view it as the beginning of the transformation to adulthood, which is a good way to look at it because many folks never complete that transition – at least judging from the way they behave.

Coming back to the topic at hand, the sheer number of details to manage between the Friday night dinner, refreshments after the Friday service, the luncheon after the Saturday ceremony, the big party we’re throwing Saturday night, and the brunch on Sunday is crazy. The Boss has done little besides manage all those details for the past 6 months, and was immersed in the process for the year before that. I am thankful she shielded me from having to do much, besides lug some things from place to place and write a few (okay – a lot of) checks. We have many great friends who have helped out, and without them we would have been sunk.

So many things have to be decided that you don’t even think about. Take lighting, for instance. Who cares about the lights? No one, unless the place is either too dark or too light. The proximity of the tables to the speakers? Yup, that needs to be managed, because some folks have sensitive ears and can’t be too close to the dance floor. Who knew? The color of the tablecloths is important – it needs to match the seat covers and napkins. The one detail I did get involved in was the liquor. You can bet I was going to have a say in what kind of booze we had for the party. That’s a detail I can get my arms around. And I did. There will be Guinness. And it will be good.

When we first went through the plans and the budget I was resistant. It’s hard to fathom spending the GNP of a small nation in one night.
But as we get closer, I’m glad we are making it a huge event. It’s very, very rare that we get together with most of the people we care about to celebrate such a happy occasion. I can (and will) make more money, but I don’t know how many more opportunities I’ll have to share such happiness with my parents and in-laws.

So I will enjoy this weekend. I’m not going to think about what it costs or how many webcasts I had to do to pay for it. I will be thankful that we are in a position to throw a big party to celebrate the fact that XX1 is growing up. I am going to appreciate all the work she put in to get ready to lead the services on Friday and Saturday. She has probably put in another 10-15 hours a week in preparation, on top of her schoolwork and rigorous dance schedule. She hasn’t slept much the past few weeks.

It’s important that I savor the experience. I have been bad at that in the past. I will talk to all the people who traveled great distances to celebrate with us, and whom I don’t get to see often. I’m going to smile. A lot. And lastly, I will follow Alan Shimel’s advice not to get so drunk I need to watch the video to remember what happened at the party. That’s probably the best piece of advice anyone could have given me. You don’t get many chances to see your baby girl in the spotlight. You might as well remember it.

–Mike

Photo credit: “Whiteboard of the now: The To-Do list” originally uploaded by Jenica

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Security Awareness Training Evolution
- Quick Wins
- Focus on Great Content
- Why Bother?

Executive Guide to Network Security Management
- New Series: The Executive Guide to Pragmatic Network Security Management

Defending Against Application Denial of Service
- Introduction

Newly Published Papers
- Firewall Management Essentials
- Continuous Security Monitoring
- API Gateways
- Threat Intelligence for Ecosystem Risk Management
- Dealing with Database Denial of Service
- Identity and Access Management for Cloud Services
- The 2014 Endpoint Security Buyer’s Guide
- The CISO’s Guide to Advanced Attackers

Incite 4 U

Stories make the point: Any effective communicator is a storyteller. People understand stories. Folks can find applicability to whatever situation they are in through a carefully crafted fable or analogy. When trying to create urgency for something as obscure as a malware attack (It does what? Why do I care about that?), it helps to have a way to relate it to non-security folks. The Analogies Project is a new initiative aiming to assemble a library of analogies about security that anyone can use to make specific points. I haven’t read them all, but a few were pretty good. Those of us who have been in the business for a long time, and who communicate for a living, have a ton of stories from our travels over the years. But for those of you who don’t, there is bound to be an analogy that will resonate with the person you are trying to persuade. Check it out. – MR

Who are you? Adrian and I have both been talking about different aspects of identity management in the cloud lately. Why should you care? Because if you don’t adopt some sort of federated identity option your life will be a screaming poopstorm of pain until the end of time. No, I’m not exaggerating. I can barely manage a dozen employee accounts on


The Pragmatic Guide to Network Security Management: The Process

  This is part 2 in a series. Click here for part 1, or submit edits directly via GitHub.

The Pragmatic Process

As mentioned in the previous section, this process is designed primarily for more complex networks, and takes into account real-life organizational and technological complexities. Here is the outline, followed by the details:

1. Know your network.
2. Know your assets.
3. Know your security.
4. Map the topology.
5. Prioritize and fix.
6. Monitor continuously.
7. Manage change and build workflows.

The first five steps establish the baseline, and the next two manage the program, although you will need to periodically revisit previous steps to ensure your program stays up to date as the business evolves and risks change.

Know Your Network

You can’t secure what you don’t know, but effectively mapping a network topology – especially for a large network – can be daunting. Many organizations believe they have accurate network topologies, but they are rarely correct or complete – for all the reasons in the previous section. The most common problem is simply failure to keep up to date: topology maps are produced occasionally as needed for audits or projects, but rarely maintained.

The first step is to work with Network Operations to see what they have and how current it is. Aside from being politically savvy, there is also no reason not to leverage what is already available. Position it as “We need to make sure we have our security in the right places,” rather than “We don’t trust you.” Once you get their data, evaluate it and decide how much you need to validate or extend it.

There are a few ways to validate your network topology, and you should rely on automation when possible. Even if your network operations team provides a map or CMDB, you need to verify that it is current and accurate. One issue we see at times is that security uses a different toolset than network operations.
Security scanners use a variety of techniques to probe the network and discover its structure, but standard security scanners (including vulnerability assessment tools) aren’t necessarily well suited to building out a complete network map. Network operations teams have their own mapping tools, some of which use similar scanning techniques but add routing and other analyses that rely on management-level access to the routers and network infrastructure. These tools tend to rely more on trusting the information provided to them, and don’t probe as heavily as security tools. They also aren’t generally run organization-wide on a continuous basis, but are instead used as needed for problem-solving and planning.

Know Your Assets

Once you have a picture of the network you start evaluating the assets on it: servers, endpoints, and other hardware. Security tends to have better tools and more experience for scanning and analyzing assets than underlying network structure, especially for workstations. Depending on how mature you are at this point, either prioritize your scanning to particular network segments or use the information from the network map to target weak spots in your analysis. Endpoint tools such as configuration/patch management or endpoint protection platforms offer some information, but you also need to integrate a security scan (perhaps a vulnerability assessment) to identify problems. As before, this really needs to be a continuous process using automated tools. You also need a sense of the importance of the assets, especially in data centers, so you can prioritize defenses. This is a tough one, so make your best guesses if you have to – it doesn’t need to be perfect.

Know Your Security

You need to collect detailed information on three major pieces of network security:

- Base infrastructure security. This includes standard perimeter security, and anything you have deployed internally to enforce any kind of compartmentalization or detection. Think firewalls (including NGFW), intrusion detection, intrusion prevention, network forensics, NetFlow feeds to your SIEM, and similar – things designed primarily to protect the core network layer. Even network access control, for both of you using it.

- Extended security tools. These are designed to protect particular applications and activities, such as your secure mail gateway, web filter, web application firewalls, DLP, and other “layer 7” tools.

- Remote access. Security tends to be tightly integrated into VPNs and other remote access gateways. These aren’t always managed by security, but unlike network routers they have internal security settings that affect network access.

For each component collect location and configuration. You don’t need all the deep particulars of a WAF or DLP (beyond what they are positioned to protect), but you certainly need complete details of base infrastructure tools. Yes, that means every firewall rule. Also determine how you manage and maintain each of those tools. Who is responsible? How do they manage it? What are the policies?

Map the Topology

This is the key step, where you align your network topology, assets (focusing on bulk and critical analysis, not every single workstation), and existing security controls. There are two kinds of analysis to perform:

- A management analysis to determine who manages all the security and network assets, and how. Who keeps firewall X up and running? How? Using which tool? Who manages the network hardware that controls the routing firewall X is responsible for? Do you feed NetFlow data from this segment to the SIEM? IDS alerts? The objective is to understand the technical underpinnings of your network security management, and the meatspace mapping of who is responsible.

- A controls analysis to ensure the right tools are in the right places with the right configurations. Again, you probably want to prioritize this by assets.
Do you use application-aware firewalls (NGFW) where you need them? Are firewalls configured correctly for the underlying network topology? Do you segment internal networks? Do you capture network traffic for detecting attacks in the right places? Are there network segments or locations that lack security controls because you didn’t know about them? Is that database really safe behind a firewall, or is it totally unprotected if a user clicks the wrong link in a phishing email?
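The controls analysis boils down to cross-referencing which segments each control covers against the minimum control set you expect per segment. A toy sketch (all segment and control names here are hypothetical, purely for illustration):

```python
# Hypothetical inventory: which network segments each deployed control covers.
controls = {
    "firewall": {"dmz", "datacenter"},
    "ids":      {"dmz"},
    "netflow":  {"datacenter"},
}

# Illustrative policy: minimum controls expected for each segment.
required = {
    "dmz":        {"firewall", "ids"},
    "datacenter": {"firewall", "netflow"},
    "branch":     {"firewall"},
}

def coverage_gaps(controls, required):
    """Return {segment: missing_controls} for segments lacking required coverage."""
    gaps = {}
    for segment, needed in required.items():
        covering = {name for name, segs in controls.items() if segment in segs}
        missing = needed - covering
        if missing:
            gaps[segment] = missing
    return gaps

print(coverage_gaps(controls, required))  # {'branch': {'firewall'}}
```

Even this trivial version surfaces the unknown-segment problem: the branch office shows up as unprotected only because someone bothered to list it in the policy, which is why the network map in step 1 has to come first.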


Thinking Small and Not Leading

Dave Elfering had a good post making clear the difference between managing and leading:

I thought my job as a security leader was to produce detailed policies that might as well have been detailed pseudo code executed by robots. If you are tasked with truly leading the security program for a company or organization then lead; quit trying to be a combination of thought police and babysitter. Detailed policies are necessary in some circumstances but overall they are unsustainable. Let’s dive back into the Army manual [Army Planning and Orders Production FM 5-0] for a moment. “Effective planning incorporates the concept of mission command… concentrates on the objective of an operation and not on every detail of how to achieve that objective.”

I always talked about managing to outcomes when I had corporate jobs. I didn’t want to tell folks how to get things done. I just told them what needed to be done and figured they could figure it out. Mostly because half the time I wasn’t sure what to do, and the other half of the time I was too lazy to do it for them. Kidding aside, that’s how I learned the most.

It’s not much different in security. You need to lead your security program with a light touch. Think big-picture objectives and, as Dave says, managing intent. Not task lists, which are small thinking. You can’t make folks within the business do things – not over the long term, anyway. Hell, most of the time you can’t even make your own team do things. So you need to persuade them that it’s in their best interests. You need to lead, not just manage the details while expecting your employee base to just get it.

This is not easy. It’s usually easier to write the policy and become Dr. No. But that approach also means you’ll be looking for another job in the near term. More stuff they don’t teach you in any of those security certification classes, eh?
Photo credit: “If you are not the lead dog your view never changes #grommet” originally uploaded by Nic Wise


New Series: The Executive Guide to Pragmatic Network Security Management

This is the first post in a new paper I’m writing. The entire paper is also posted on GitHub for direct feedback and suggestions. As an experiment, I prefer feedback on GitHub, but will also take it here, as usual.

The Demise of Network Security Has Been Greatly Exaggerated

DLP, IPS, NGFW, WAF. Chief Information Security Officers today suffer no shortage of network security tools to protect their environments, but most CISOs we talk with struggle to implement and maintain an effective network security program. They tell us it isn’t a lack of technologies or even necessarily resources (not that there are ever enough), but the inherent difficulties in defending a large, amorphous, business-critical asset with tendrils throughout the organization. It’s never as simple as magazine articles and conference presentations make it out to be.

Managing network security at scale is not easy, but the organizations that do it the best tend to follow a predictable, repeatable pattern. This paper distills those lessons into a pragmatic process designed for larger organizations and those with more complicated networks (such as medium-sized businesses with multiple locations). We won’t make the false claim that our process is magical or easy, but it’s certainly easier than many alternatives. Even if you only pick out a few tidbits, it should help you refine and operate your network security more efficiently. The network is the aspect of our infrastructure that ties everything else together. The more we can do to efficiently and effectively secure it, the better.

Why Network Security Is So Darn Difficult

Networks and endpoints are the two most fundamental pieces of our IT infrastructure, yet despite decades of advancements they still consume a disproportionate amount of our security resources. First the good news – we are far more resilient to network attacks than even five years ago.
The days of Internet-wide worms knocking down enterprises while script kiddies deface websites are mostly in the past. But every CISO knows establishing and maintaining network security is a constant challenge, even if they can’t always articulate why. We have narrowed down a handful of root causes, which this Pragmatic process is designed to address:

- Security and operations are divided. IT Operations is responsible for and manages the network, servers, endpoints, and applications, and information security is responsible for defending everything. Basically, security protects the enterprise from the outside – lacking insight into what is being protected, where it is, and how everything connects together. In many cases security doesn’t even know how all the pieces of the network are connected, but is still expected to manage firewall rules to protect it. Many of our recommendations are designed to bridge this divide without throwing away traditional organizational boundaries.

- Networks are dynamic and complex. Not only are new assets constantly joining and leaving the network, but its structure is never static, especially for larger organizations.

- Organic growth. All networks grow over time. Perhaps it’s a new office, extending a WiFi network, or an extra switch or router in the datacenter. Not all of these have major security implications, but they add up over time.

- Mergers and acquisitions require blending resources, technologies, and different configurations.

- New technologies with different network requirements are constantly added, from a new remote access portal to an entire private cloud.

- We mix and match various security tools, often with overlapping functionality. This is sometimes a result of different branches of the company operating partially or completely autonomously, and other times results from turnover, project requirements, or keeping auditors happy.

- Needs change over time. Many organizations today are working on consolidating network perimeters, compartmentalizing internal networks, adding application awareness, expanding egress monitoring and filtering for breach and infection defenses, or adapting the network for cloud computing and eventually SDN. Network and network security technologies evolve to meet new business needs and evolving threats.

Our networks are large and complex, sometimes even when our organizations aren’t. They change constantly, as do the assets connected to them. Security doesn’t manage this infrastructure, but is tasked with protecting it. Network security management is about improving both security and efficiency to keep up.

From Blocking and Tackling to Integrated Defense

Our primary goal is to adopt processes that are flexible enough to account for an ever-changing network environment, while avoiding the constant firefighting that is so inefficient. The key isn’t any particular technology or security trick, but better integrating defenses into day-to-day management of the enterprise.

What makes it pragmatic? The fact that the process is designed to work in the real world, without gutting or stumbling over organizational and bureaucratic divisions. We get it – even if you are the CEO, there are limits to change. We have collected the best practices we have seen work in the real world, lining them up in a practical and achievable process that accounts for real-world restrictions. Our next sections will dig into the process. As we said earlier, pick and choose those which work for you.


Summary: Planned Coincidence

Every year Mike, Adrian, and I get together for a couple days to review our goals and financials, and to make plans for the next year. This year we scheduled it in Denver, and by an amazing coincidence Jimmy Buffett was in town playing. Really. I promise. Total coincidence. I have been to more than my fair share of shows (and have to write this Summary on Wednesday because I will be at another show Thursday in Phoenix), but it was Mike’s first and Adrian’s second. Needless to say, a good time was had by everyone except Mike’s stomach. I warned him about the rum-infused gummy bears. 2013 was kind of a strange year for us. It looks like we grew, again, but a lot of it was shoveled into Q4. All three of us are running all over the place and cramming on projects and papers, hoping our children and pets don’t forget what we look like. I even thought about skipping our planning, but setting the corporate strategy is even more important than our other projects. I went into this trip with an open mind. I knew I wanted to change things up a bit next year, but not exactly how. In part to do more direct end-user engagement, but also to allow me to continue my more in-depth and technical cloud and Software Defined Security work, which isn’t necessarily easily dropped into licensed papers and webcasts. We actually came up with some killer ideas that are pretty exciting. I don’t know if they will work, but I think they hit a sweet spot in the market, and fit our skills and focus. It’s definitely too early to talk about them, and they aren’t as insane as building a new software platform, so launching won’t be a problem at all. We are going to hold back until January to start releasing because we need to finish the current workload and do the prep for the new shiny endeavors before we can talk about them. And this is a great situation to be in. 
I just spent two days hanging with two of my closest friends and my business partners, catching a Buffett show and planning out new tricks for our collective future. I'm tired, and my brain is fried, but as I go back to the grindstone of the road and writing, I not only get to finish my year with some cool research, but I get to start planning some even more exciting things for next year. Not bad. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich presenting on changes in the crypto landscape, October 30th.

Favorite Securosis Posts

  • Mike Rothman: The Great Securosis GitHub Experiment. That Mogull guy. Always pushing the envelope on openness and transparency. Interesting idea to use Github to manage feedback on our papers. Will be interesting to see if it works…
  • Rich: Security Analysis of Pseudo-Random Number Generators with Input: /dev/random is not Robust. I spent last week in crypto training and this paper is darn interesting.
  • Adrian: Incite 10/23/2013: What goes up….
  • David Mortman: Don't Cry Over Spilt Metrics.

Other Securosis Posts

  • Security Awareness Training Evolution: Quick Wins.

Favorite Outside Posts

  • Mike Rothman: Dan Geer's Tradeoffs in Cyber Security talk. Dan Geer spoke. Dan Geer is awesome. Read. It. Now. And that's all I have to say about that.
  • Adrian Lane: iMessage Privacy. Regardless of whether you agree with Apple's strategy, the post is a very educational look at security and how attackers approach interception.
  • David Mortman: How to lose $172,222 a second for 45 minutes.
  • Gal Shpantzer: Why the Sistrunk ICS/SCADA vulns are a big deal.

Research Reports and Presentations

  • Firewall Management Essentials.
  • A Practical Example of Software Defined Security.
  • Continuous Security Monitoring.
  • API Gateways: Where Security Enables Innovation.
  • Identity and Access Management for Cloud Services.
  • Dealing with Database Denial of Service.
  • The 2014 Endpoint Security Buyer's Guide.
  • The CISO's Guide to Advanced Attackers.
  • Defending Cloud Data with Infrastructure Encryption.
  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.

Top News and Posts

  • Apple and Adobe sandbox Flash in Safari on OS X 10.9.
  • Google launches new anti-DDoS service called 'Project Shield'.
  • Apple iMessage Open to Man in the Middle, Spoofing Attacks. Yes and no, and I wish I wasn't traveling so much and could clarify how this appears to be overstated. Technically Apple could man in the middle, but it isn't something random employees can do, nor do I think Apple would do it without a massive legal threat from the NSA or equivalent, which they would probably fight. Not that it couldn't happen…

Blog Comment of the Week

This week's best comment goes to DS, in response to Incite 10/23/2013: What goes up….

We've known for years (or should have known if we read the research) that security breaches don't impact stock value. This is a trap many security folks find themselves in because they don't understand their business, or business at all, so they use the most obvious and coarse metric of business impact. … The impact from a breach is complex and cannot be measured by one factor. There are fines and penalties. There are negative perceptions which can be leveraged against you (I can't say how many sales calls I got from RSA competitors after their breach), there is lost productivity from having to divert resources to deal with customer complaints, there is lost focus on strategy while execs try to deal with the press requests and client enquires. RSA's breach cost around 100M if you believe the press. This is 100M not spent on developing new products or landing new customers, but instead spent preserving their base and protecting SecureID. This is not 100M well spent.


Don’t Cry over Spilt Metrics

Our man Gunnar starts a recent post with: "Security Metrics crying need is for metrics that serve others, outside of info sec." Then he proceeds to talk about the need to develop appropriate metrics for constituencies outside of security – including developers, DBAs, Q/A folks, and Operations. Given his application-centric view of the world, those folks clearly need to understand security and have metrics to evaluate effectiveness, posture, etc. I have lots of conversations with senior security folks who are similarly perplexed about how to communicate value via metrics to another reasonably important set of influencers: Senior Management. It's not an easy problem to solve, and there are no generic answers. I can't just give you a list of metrics and send you on your way, because the metrics need to be meaningful to your business. Not another person's business, but yours. And that means you need to understand your business and its critical success factors, and communicate your value through the PRISM (no pun intended…) of that view.

Photo credit: "don't cry over spilled milk" originally uploaded by Joel Montes


Incite 10/23/2013: What goes up…

  Every so often I realize how spoiled I am. Sure, I am more aware of my good fortune than many, but I definitely take way too much stuff for granted. My health is good. I do what I like (most days). My family still seems to like me. I provide enough to live a pretty good lifestyle. It’s all good. I don’t have much to complain about. The fact that one of my biggest problems is that my favorite NFL teams are a combined 3-10 is a good thing, right? You get spoiled when your favorite teams are competitive at the end of the season and usually make the playoffs. New England fans know what I mean. So do Pittsburgh and Baltimore fans. When the team doesn’t perform up to expectations (like this year’s Falcons), it’s jarring. You dream of Super Bowl fairies in August, then lose half your starting team to injuries, and by October you are making alternative plans for Divisional weekend. So when the NY football Giants got their first win on Monday night, I heaved a major sigh of relief. Having watched a bunch of their games, I had legitimate concerns that they wouldn’t win a game all season. Seeing them beat up hapless Minnesota didn’t really allay my fears too much. The G-men aren’t a very good football team right now, and face a significant rebuild over the next few years. Oh well, that’s the way it goes in the NFL. In baseball and basketball, the soft salary cap just means owners have to pay a tax to buy a competitive team. And that’s what some owners do year in and year out. But that’s not an option in the NFL. The cap is the cap, and that means tough decisions are made. Great players are let go. And what goes up for a little while (usually on the shoulders of a franchise QB) inevitably comes down. Parity is great, until your team is on the wrong side. It will be interesting to see how teams with younger QBs – like the 49ers, Seahawks, Redskins, and Colts – manage their salary caps once their QBs start getting $20MM a year and eating up 15-20% of the cap. 
These teams can stock up now on expensive players while their QBs are cheap, but won't be able to in 2-3 years. They will need to make tough decisions. What goes up, eventually comes down. At least in the NFL. Then there are teams that don't seem to ever come up. Jacksonville hasn't been competitive for a decade. Detroit has been to the playoffs once in like 20 years. St. Louis is in the same boat. And I won't even mention Cleveland. These long-suffering fans should be applauded for showing up and being passionate, even when there isn't much to cheer about. So I'll keep the faith. I know all NFL teams have off years, and my teams do things the right way to produce winning seasons more often than losing ones. I'll let go of the Super Bowl fairy this year, and I'll be able to enjoy the rest of the season with reasonable expectations. Which is probably how I should be treating each new season anyway. Nah, forget that. Without chasing the Super Bowl fairy, what fun is it? –Mike

Photo credit: "IZ NOT AKKCIDENT" originally uploaded by Aaron Muszalski

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Security Awareness Training Evolution: Quick Wins; Focus on Great Content; Why Bother?
  • Defending Against Application Denial of Service: Introduction

Newly Published Papers

  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer's Guide
  • The CISO's Guide to Advanced Attackers

Incite 4 U

If business users don't care… We are screwed as an industry.
Daniel Miessler works through a thought experiment, wondering what would happen if business users realized that getting hacked doesn't necessarily affect company value. Wouldn't it be logical from a shareholder perspective to minimize security spend and maximize profit? To be clear, lots of organizations already do this, but I doubt it is a conscious decision not to be secure. Daniel evaluates Apple, Adobe, and the granddaddy of high-profile breaches, TJX – and finds no negative impact from those breaches. Awesome, but we already knew that in a recession people choose cheap underwear over security. It is an interesting concept, and over the long term I believe the impact of breaches is far overblown. But what about in the short term? I'm not sure market value is the best determinant of short-term value – it's a long-term metric. Instead I would rather try to understand the impact on short-term revenue. Do customers defer deals or reduce spending in the immediate aftermath of a breach? That would be a much more interesting analysis. And I guess we should say a few thank-yous to China and compliance, which are still the engines driving security. – MR

Techno two-fer: I have taken to calling big data the new normal for databases. One architectural theme I see over and over again for security analysis is the two-headed cluster: Hadoop for analytics and Cassandra/Splunk/Mongo for fast references or lookup. Consider this today's take on normalization and correlation. Rajat Jain has a very good illustration of this concept with the Lambda Architecture for batch data, which balances fast lookup against historic views of data. A batch layer – often Hadoop – computes views on your data as it comes in, and a second parallel high-speed processing layer – in this case Storm – constantly processes the most recent data in near-real-time. This enables the system to
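The batch/speed split at the heart of the Lambda Architecture can be illustrated with a minimal sketch. This is plain Python with hypothetical event records and in-memory views standing in for Hadoop and Storm; it only shows the core idea that a query merges a precomputed batch view with a near-real-time delta:

```python
from collections import Counter

def batch_view(events):
    """Batch layer: recompute a view (per-user event counts) over the full historic data set."""
    return Counter(e["user"] for e in events)

def speed_view(recent_events):
    """Speed layer: count only events that arrived since the last batch recompute."""
    return Counter(e["user"] for e in recent_events)

def query(batch, speed, user):
    """Serving layer: merge the precomputed batch result with the near-real-time delta."""
    return batch.get(user, 0) + speed.get(user, 0)

# Hypothetical event records for illustration.
historic = [{"user": "alice"}, {"user": "bob"}, {"user": "alice"}]
recent = [{"user": "alice"}]

b = batch_view(historic)
s = speed_view(recent)
print(query(b, s, "alice"))  # 3: two historic events plus one recent one
```

When the batch layer next recomputes, the recent events are absorbed into the batch view and the speed view is reset, which is how the real systems keep lookups fast without giving up the historic view.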


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.