Securosis

Research

Blowing Your Mind(fulness) at RSA 2014

It was kind of a joke between two friends on a journey to become better people. Jen Minella (JJ) and I compared notes over way too many drinks at last year’s RSA, and we decided our experiences would make a good talk. I doubt either of us really thought it would be interesting to anyone but us. We were wrong. At RSA we will do a session called “Neuro-hacking 101: Taming Your Inner Curmudgeon”. Here is the description:

For self-proclaimed security curmudgeons and anyone else searching for better work/life balance, this session is a how-to guide for happiness, health, and finding a path to increased productivity. Case studies, methods, and research in the science of mind and body are followed up with resources and ways to get started. From neuroscience to nutrition, there’s something for everyone.

JJ summed up her thoughts on the pitch, and I feel pretty much the same way. And so today, I’m overjoyed, a little relieved, excited at the opportunity, and yet at the same time a big piece of me is completely mortified. This talk, although founded in science, is a big lift of the ol’ virtual skirt. It’s a talk about being happy, getting a grip on life, and using mindfulness to succeed and excel at everything you do. We do not pass go, we do not collect $200. Instead, we’re taking a nose dive into traditionally taboo topics and exposing what many consider to be deportments of an intimate and personal nature.

But we reached a mutual conclusion – how we think and communicate about the topics of mindfulness shouldn’t be secreted. There’s no shame in participating in activities (or inactivity) designed to make us better, happier, more productive people. I don’t wear a virtual skirt, but it is a bit scary to provide a view into the inner workings of my improvement processes to be less grumpy and more content with all I have achieved. I’ve talked about some of those topics in past Incites, but never to this degree. And that’s good. No, it’s great. Hope to see you there.


Summary: Hands on

Before I dive into this week’s sermon, just a quick note that our posting will be a bit off through the end of the year. As happens from time to time, our collective workloads and travel are hitting insanity levels, which impedes our ability to push out more consistent updates. But, you know, gotta feed the kids and dogs.

A couple weeks ago I got to abandon my family during the weekend and spend my time in a classroom renewing my Emergency Medical Technician certification. I was close to letting it go, but my wife made it abundantly clear that she would rather lose me for a weekend than deal with the subsequent years of whining. I never look forward to my recert classes. It is usually 2-3 days in a classroom, followed by a written and psychomotor (practical) test. I first certified as an EMT in 1991, and then became a paramedic in 1993 (which is an insane amount of training – no comparison). I won’t say I don’t learn anything in the every-two-year refresher classes, but I have been doing this for a very long time. But this year I learned more than expected, and some of it relates directly to my current work in security.

Five or six years ago I started hearing about some new trends in CPR. A doctor here in Phoenix started a research study to try a completely nonconventional approach to CPR. The short version is that the human body, when dead, isn’t using a ton of oxygen. Even when alive we inhale air with 21% O2 and exhale air with 16% O2. Stop all muscular activity and the brain will mostly suck out whatever O2 is circulated when you compress someone’s chest. This doc had some local fire departments use hands-only CPR and 300 compressions with no ventilations. This keeps the blood pressure up and blood circulating, and the action of pushing the chest generates more than enough air exchange. The results? Something like 3x the survival rates.
The CPR you learn today probably isn’t there yet, but definitely emphasizes compressions more than mouth-to-mouth, which I suspect will be dropped completely for adults if the research holds. There’s more to it, but you get the idea.

All right, interesting enough, but what does this have to do with security? I found myself instinctively clinging to my old concepts of the ‘right’ way to do CPR despite clear evidence to the contrary. I understand the research, and immediately adopted the changes, but something felt wrong to me. I have been certified in what are basically the same essential techniques for nearly 30 years. Part of me didn’t want to let go, and that wasn’t a feeling I expected. I later had the same reaction to changes in the treatment of certain closed head injuries, but that was more due to specific cases where I used techniques now known to harm patients.

I am an evidence-based guy. I roll with the times and try not to cling to convention, but somewhere in me, especially as I get older, part of the brain reacts negatively to changing old habits. Fortunately, my higher-order functions know to tell that part to shut the hell up. We have a tendency to imprint on whatever we first learn as ‘correct’. Perhaps it was the act of discovery, or forming those brain pathways.

In security we see this all the time. I once had an IT director tell me he would rather allow Windows XP on his network over iPads, because “we know XP”. Wrong answer. The rate of change in security exceeds that of nearly every other profession. Even developers can often cling to old languages and constructs, and that profession is probably the closest. I like to think of myself as an enlightened guy capable of assimilating the latest and greatest within the context of what’s known to work, and I still found myself clinging to a convention after it was scientifically proven wrong. I don’t think any of us are in a position to blame others for “not getting it”.
All of us are Luddites – you just need to hunt for the right frame of reference. That is not an excuse, but it is life. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Nada. Unless Google and Bing are both lying to me. Like I said, busy week.

Favorite Securosis Posts

Adrian Lane: Microsoft Upends the Bug Bounty Game. This may work.
Mike Rothman: Microsoft Upends the Bug Bounty Game. Not a lot of choice this week (yes, I have been the suck at blogging lately). But Rich does a nice job explaining the ripple effects of Microsoft extending their bounty program.
Rich: New Series: The Executive Guide to Pragmatic Network Security Management. The post isn’t new, but I can announce that RedSeal Networks intends to license it (pending the end of our open peer review process). And don’t forget that this is the first paper we are opening up for full public change tracking on GitHub.

Other Securosis Posts

Friday Summary: Halloween 2013 Edition.

Favorite Outside Posts

Adrian Lane: I Love the Smell of Popcorn in the Morning. Why did I choose to never be a CIO again? This is why. You’d think this type of story would be rare, but it’s common. However, it only occurs at 2:00am or on your first day of vacation.
Mike Rothman: Five Styles of Advanced Threat Defense. The Big G does a decent job of explaining the overlap (and synergy) of these so-called Advanced Threat product categories. I differ slightly on how to carve things up but this is close enough for me to mention.
Rich: IT Security from the Eyes of Data Scientists. Yep, serious job security if you head down this path.

Research Reports and Presentations

Firewall Management Essentials.
A Practical Example of Software Defined Security.
Continuous Security Monitoring.
API Gateways: Where Security Enables Innovation.
Identity and Access Management for Cloud Services.
Dealing with Database Denial of Service.


Microsoft Upends the Bug Bounty Game

Microsoft is expanding its $100k bounty program to include incident responders who find and document Windows platform mitigation flaws.

Today’s news means we are going from accepting entries from only a handful of individuals capable of inventing new mitigation bypass techniques on their own, to potentially thousands of individuals or organizations who find attacks in the wild. Now, both finders and discoverers can turn in new techniques for $100,000. Our platform-wide defenses, or mitigations, are a kind of shield that protects the entire operating system and all the applications running on it. Individual bugs are like arrows. The stronger the shield, the less likely any individual bug or arrow can get through. Learning about “ways around the shield,” or new mitigation bypass techniques, is much more valuable than learning about individual bugs because insight into exploit techniques can help us defend against entire classes of attack as opposed to a single bug – hence, we are willing to pay $100,000 for these rare new techniques.

This is important because Microsoft just turned every target and victim into a potential bug hunter. The pool of people looking for these just increased massively. Previously only security researchers could hunt these down and win the cash. Researchers can be motivated to sell bugs to governments or criminals for more than $100k (Windows mitigation exploits are extremely valuable). Some professional response teams like to keep exploit details and indicators of compromise trade secrets, but not every response team is motivated that way.

This alters the economics for attackers, because they now need to be much more cautious in using their most valuable 0day exploits. If they attack the wrong target they are more likely to lose their exploit forever. As exciting as this is, it still requires a knowledgeable defender who isn’t financially motivated to keep it secret (again, some vendors and commercial IR services).
And there are plenty of lower-level attacks that still work. But even with those stipulations the pool of hunters just increased tremendously.


Friday Summary: Halloween 2013 Edition

While you’re thinking about little kids in scary costumes, I’m here thinking about adults who write scary code. As I go through the results of a couple different companies’ code scans I am trying to contrast good vs. bad secure development programs. But I figure I should ask the community at large: What facet of your secure software development program has been most effective? Can you pinpoint one?

For many years I felt placing requirements within the development lifecycle (i.e., process modifications) yielded the greatest returns. I have spoken with many development teams over the past year who said that security awareness training was the biggest benefit, while others most appreciated threat modeling. Still others claimed that external penetration testing or code scans motivated their teams to do better, learn more about software defects, and improve internally. The funny bit is that every team states one of these events was the root cause that raised awareness and motivated changes. Multiple different causes for the same basic effect.

I have been privy to the results from a few different code scans at different companies this summer; some with horrific results, and one far better than I could have ever expected, given the age and size of the code base. And it seems the better the results, the harder the development team takes external discoveries of security defects. Developers are proud, and if security is something they pride themselves on, defect reports are akin to calling their children ugly. I am typically less interested in defect reports than in understanding the security program in general. Part of my interest in going through each firm’s secure development program is seeing what changes were made, and which the team found most beneficial. Once again, the key overall benefit reported by each team varies between organizations. Many say security training, but training does not equal development success.
Others say “It’s part of our culture”, which is a rather meaningless response, but those organizations do a bit of everything, and they scored considerably better on security tests. It is now clear to me, despite my biases for threat modeling and process changes, that for organizations that have been doing this a while, no single element or program makes the difference. It is the cumulative effect of consistently making security part of code development. Some event started the journey, and – as with any skill – time and effort produced improvement. But overall, improvement in secure code development looks glacial. It is a bit like compound interest: what appears minuscule in the beginning becomes huge after a few years. When you meet up with organizations that have been at it for a long time, it is startling to see just how well the basic changes work. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Dave Lewis’s CSO post: “LinkedIn intro: Data, meet security issues”.
Juniper blog quotes Adrian on DB-DoS.
Adrian’s DR post: Simple is better.
Gunnar on The Internet-of-Things.

Favorite Securosis Posts

David Mortman: Don’t Mess with Pen Test(ers).
Adrian Lane: Thinking Small and Not Leading. It is unfortunately common to discover that a job is quite different than you thought. And how best to accomplish your goals often involves several rounds of trial and error.
Mike Rothman: The Pragmatic Guide to Network Security Management: The Process. Rich had me at Pragmatic…

Other Securosis Posts

The Pragmatic Guide to Network Security Management: SecOps.
Incite 10/30/2013: Managing the Details.
New Series: The Executive Guide to Pragmatic Network Security Management.
Summary: Planned Coincidence.

Favorite Outside Posts

Dave Lewis: Buffer Hacked.
David Mortman: Adventures in Dockerland. Not a security article but something for security to keep in mind. Docker is making big inroads in the cloud, especially PaaS, so you need to understand it.
Adrian Lane: Big Data Analytics: Starting Small. A short post with pragmatic advice.
Mike Rothman: Time doesn’t exist until we invent it. As always, interesting perspective from Seth Godin about time… “Ticking away, the moments that make up a dull day…”
Gal: Fake social media ID duped security-aware IT guys.

Research Reports and Presentations

Firewall Management Essentials.
A Practical Example of Software Defined Security.
Continuous Security Monitoring.
API Gateways: Where Security Enables Innovation.
Identity and Access Management for Cloud Services.
Dealing with Database Denial of Service.
The 2014 Endpoint Security Buyer’s Guide.
The CISO’s Guide to Advanced Attackers.
Defending Cloud Data with Infrastructure Encryption.
Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.

Top News and Posts

Kristin Calhoun Keynote at API Strategy and Practice
WhiteHat has a new secure browser; what does the Firefox say? via Wendy Nather.
A More Agile Healthcare.gov
NSA Chief: ‘To My Knowledge’ Agency Didn’t Tap Google, Yahoo Data Centers
Mozilla Shines A Light With Lightbeam
Alleged Hacker V& In DOE, HHS Breaches
MongoHQ Suffers Security Breach

Blog Comment of the Week

This week’s best comment goes to Zac, in response to Don’t Mess with Pen Test(ers).

As you say, we try not to focus on or fixate on the potential risks. There are however ways to mitigate or reduce the risk. Foremost for me is to consider any and all electronic transactions to be accessible and therefore never put anything I want to keep private out of electronic records. Just like how in the past you wouldn’t speak of things you wanted to keep private, today you don’t post it (Facebook is training people to do all the wrong things). And when you consider that medical offices, tax agencies, government agencies, and companies all either experience breaches or just plain send your information to the wrong people… let alone work at getting your information.
Or how snail mail can end up in the wrong mailbox… One may as well stay home due to a fear of being hit by a car while walking the dog. tl;dr – if you want to keep something private… keep it to yourself.


The Pragmatic Guide to Network Security Management: SecOps

This is part 3 in a series. Click here for part 1, or submit edits directly via GitHub.

Workflows: from Sec and Ops to SecOps

Even mature organizations occasionally struggle to keep security aligned with infrastructure. But low-friction processes that don’t overly burden other areas of the enterprise reduce both errors and deliberate circumvention. Frequently the problem manifests as a lack of communication between network security and network operations – not out of antagonism, but simply due to different priorities, toolsets, and issues to manage on a day-to-day basis. A seemingly minor routing change, or the addition of a new server, can quietly expose the organization to new risks if security defenses aren’t coordinated. On the other hand, security can easily break things and create an operational incident with a single firewall rule change. Efficient programs don’t just divide up operational responsibilities – they implement workflows where each team does what they are best at, while still communicating cleanly and effectively with each other. Here are examples of four integrated operations workflows:

Network topology changes: Changes to the topology of the network have a dramatic impact on the configuration of security tools. The workflow consists of two tracks – approved changes and detected changes. For approved changes the network team defines the change and submits it to security for review. Security analyzes it for impact, including any risk changes and required security updates. Security then approves the change for operations to implement. Some organizations even have network operations manage basic security changes – mostly firewall rule updates. A detected change goes through the same analysis process but may require an emergency fix or communication with the network team to roll back the change (and obviously requires ongoing monitoring for detection in the first place).
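The two tracks can be sketched as a tiny state machine. This is an illustrative sketch only – the state names, ticket fields, and example change are hypothetical, not drawn from any particular change-management product:

```python
# Hypothetical sketch of the approved and detected topology-change tracks.
from dataclasses import dataclass, field

# Approved track: network ops submits, security reviews, ops implements.
APPROVED_TRACK = ["submitted", "security_review", "approved", "implemented", "validated"]
# Detected track: monitoring catches an unapproved change.
DETECTED_TRACK = ["detected", "security_review", "remediated_or_rolled_back"]

@dataclass
class ChangeTicket:
    description: str
    track: list                              # one of the two tracks above
    history: list = field(default_factory=list)

    def advance(self) -> str:
        """Move the ticket to the next state in its track."""
        nxt = self.track[len(self.history)]
        self.history.append(nxt)
        return nxt

ticket = ChangeTicket("Add VLAN for new app servers", APPROVED_TRACK)
while len(ticket.history) < len(ticket.track):
    ticket.advance()

print(ticket.history[-1])  # prints "validated"
```

The point of modeling it this way is that both tracks converge on the same security review step; only the entry point differs.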
In both cases it can be helpful to integrate the process into your change management or workflow tool to automatically route tasks.

Business exemption or change requests: Occasionally a business unit will need a change to network security. Many of these come through network operations, but quite a few come from application teams or business units themselves for particular projects. The same basic process is followed – the change request comes in, is analyzed for risks and required changes, and then approved, implemented, and validated. As before, you should also plan to monitor for and manage unapproved changes, which is where application-aware monitoring is particularly helpful. Also, consider building a portal for business units to submit and track requests, rather than handling them through email or spreadsheets.

New assets and applications: Similar to a business exemption or change request, but focused on new projects and assets rather than creating a special exemption to existing policy. There may be more planning, earlier in the process, with a lot more people involved. Develop a two-track process – one for new applications or assets that are fairly standard (e.g., a business unit file server or basic web application), which can be more automated, and a second for larger programs such as major new applications.

New security tools or policy changes: Adding a new security tool or policy change reverses the workflow, so the responsibility is now on the security team to initiate communications with network operations and other affected teams. Security should first analyze the change and potential downstream impacts, then work with teams to determine operational risks, timelines, and any other requirements.

Conclusion

Network security management isn’t easy, but there are more and less efficient ways to handle it. Knowing your posture and maintaining visibility are key, as are developing core workflows to bridge gaps between different operational teams.
Network security operations monitors the environment and change requests to adapt the security posture as needed in a timely manner. It monitors for changes that slip through outside approved processes, develops workflows to handle the unexpected, and responds quickly when changes are requested to support other business areas. Finally, network security understands that security policy changes impact other operations, and analyzes and communicates those potential implications. It is not always easy, but it is far more efficient and effective than the alternatives, and frees up the security team to focus on what they are best at.


Don’t Mess with Pen Test(ers)

Almost everyone you know is blissfully unaware of the digital footprints we all leave, and how that information can be used against us. The problem is that you do understand, and if you spent much time thinking about it you’d probably lose your mind. So as a coping mechanism you choose not to think of how you could be attacked, or how your finances could be wrecked if targeted by the wrong person. Just in case you didn’t have enough to worry about today, you can check out this great first-person account of a personal pen test on Pando Daily. A NYU professor challenged the folks at Spider Labs to take a week and find out what they could about him. It wasn’t pretty. But then again, you knew that’s how the story would end.

What I learned is that virtually all of us are vulnerable to electronic eavesdropping and are easy hack targets. Most of us have adopted the credo “security by obscurity,” but all it takes is a person or persons with enough patience and know-how to pierce anyone’s privacy – and, if they choose, to wreak havoc on your finances and destroy your reputation.

The story details the team’s attempts to gain presence on his network and then devices. They finally went through the path of least resistance: his wife. The tactics weren’t overly sophisticated. But once armed with some basic information it was game over. The pen testers gained access to his bank accounts, brokerage information, phone records, and the like. What do we accomplish by reminding ourselves of the risks of today’s online life? Nothing. You know the risks. You take the risks. The benefits outweigh the risks. And now I’ll crawl back into my fog to become once again blissfully unaware.


Incite 10/30/2013: Managing the Details

As I wrote a few weeks ago, everyone has their strengths. I know that managing the details is not one of mine. In fact I can’t stand it, which is very clear as we prepare for our oldest daughter’s Bat Mitzvah this weekend. It’s a rite of passage signaling the beginning of adulthood. I actually view it as the beginning of the transformation to adulthood, which is a good way to look at it because many folks never complete that transition – at least judging from the way they behave.

Coming back to the topic at hand, the sheer number of details to manage between the Friday night dinner, refreshments after the Friday service, the luncheon after the Saturday ceremony, the big party we’re throwing Saturday night, and the brunch on Sunday, is crazy. The Boss has done almost nothing besides manage all those details for the past 6 months, and was immersed in the process for the year before that. I am thankful she shielded me from having to do much, besides lug some things from place to place and write a few (okay – a lot of) checks. We have many great friends who have helped out, and without them we would have been sunk.

So many things have to be decided that you don’t even think about. Take lighting, for instance. Who cares about the lights? No one, unless the place is either too dark or too light. The proximity of the tables to the speakers? Yup, that needs to be managed because some folks have sensitive ears and can’t be too close to the dance floor. Who knew? The color of the tablecloths is important – it needs to match the seat covers and napkins. The one detail I did get involved in was the liquor. You can bet I was going to have a say in what kind of booze we had for the party. That’s a detail I can get my arms around. And I did. There will be Guinness. And it will be good.

When we first went through the plans and the budget I was resistant. It’s hard to fathom spending the GNP of a small nation in one night.
But as we get closer, I’m glad we are making it a huge event. It’s very, very rare that we get together with most of the people we care about to celebrate such a happy occasion. I can (and will) make more money, but I don’t know how many more opportunities I’ll have to share such happiness with my parents and in-laws. So I will enjoy this weekend. I’m not going to think about what it costs or how many webcasts I had to do to pay for it. I will be thankful that we are in a position where we can throw a big party to celebrate the fact that XX1 is growing up. I am going to appreciate all the work she put in to get ready to lead the services on Friday and Saturday. She has probably put in another 10-15 hours a week in preparation, on top of her schoolwork and rigorous dance schedule. She hasn’t slept much the past few weeks. It’s important that I savor the experience. I have been bad at that in the past. I will talk to all the people who traveled great distances to celebrate with us, and who I don’t get to see often. I’m going to smile. A lot. And lastly, I will follow Alan Shimel’s advice to not get so drunk I need to watch the video to remember what happened at the party. That’s probably the best piece of advice anyone could have given me. You don’t get many chances to see your baby girl in the spotlight. You might as well remember it.

–Mike

Photo credit: “Whiteboard of the now: The To-Do list” originally uploaded by Jenica

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Security Awareness Training Evolution
Quick Wins
Focus on Great Content
Why Bother?
Executive Guide to Network Security Management
New Series: The Executive Guide to Pragmatic Network Security Management

Defending Against Application Denial of Service
Introduction

Newly Published Papers

Firewall Management Essentials
Continuous Security Monitoring
API Gateways
Threat Intelligence for Ecosystem Risk Management
Dealing with Database Denial of Service
Identity and Access Management for Cloud Services
The 2014 Endpoint Security Buyer’s Guide
The CISO’s Guide to Advanced Attackers

Incite 4 U

Stories make the point: Any effective communicator is a storyteller. People understand stories. Folks can find applicability to whatever situation they are in through a carefully crafted fable or analogy. When trying to create urgency for something as obscure as a malware attack (It does what? Why do I care about that?), it helps to have a way to relate it to non-security folks. The Analogies Project is a new initiative aiming to assemble a library of analogies about security that anyone can use to make specific points. I haven’t read them all, but a few were pretty good. Those of us in the business for a long time, and who communicate for a living, have a ton of stories from our travels over the years. But for those of you who don’t, there is bound to be an analogy that will resonate with the person you are trying to persuade. Check it out. – MR

Who are you? Adrian and I have both been talking about different aspects of identity management in the cloud lately. Why should you care? Because if you don’t adopt some sort of federated identity option your life will be a screaming poopstorm of pain until the end of time. No, I’m not exaggerating. I can barely manage a dozen employee accounts on


The Pragmatic Guide to Network Security Management: The Process

This is part 2 in a series. Click here for part 1, or submit edits directly via GitHub.

The Pragmatic Process

As mentioned in the previous section, this process is designed primarily for more complex networks, and takes into account real-life organizational and technological complexities. Here is the outline, followed by the details:

1. Know your network.
2. Know your assets.
3. Know your security.
4. Map the topology.
5. Prioritize and fix.
6. Monitor continuously.
7. Manage change and build workflows.

The first five steps establish the baseline, and the next two manage the program, although you will need to periodically revisit previous steps to ensure your program stays up to date as the business evolves and risks change.

Know Your Network

You can’t secure what you don’t know, but effectively mapping a network topology – especially for a large network – can be daunting. Many organizations believe they have accurate network topologies, but they are rarely correct or complete – for all the reasons in the previous section. The most common problem is simply failure to keep up to date. Topology maps are produced occasionally as needed for audits or projects, but rarely maintained. The first step is to work with Network Operations to see what they have and how current it is. Aside from being politically wise, there is also no reason not to leverage what is already available. Position it as “We need to make sure we have our security in the right places,” rather than “We don’t trust you.” Once you get their data, evaluate it and decide how much you need to validate or extend it. There are a few ways to validate your network topology, and you should rely on automation when possible. Even if your network operations team provides a map or CMDB, you need to verify that it is current and accurate. One issue we see at times is that security uses a different toolset than network operations.
Security scanners use a variety of techniques to probe the network and discover its structure, but standard security scanners (including vulnerability assessment tools) aren’t necessarily well suited to building out a complete network map. Network operations teams have their own mapping tools, some of which use similar scanning techniques, but add in routing and other analyses that rely on management-level access to the routers and network infrastructure. These tools tend to rely more on trusting the information provided to them and don’t probe as heavily as security tools. They also aren’t generally run organization-wide on a continuous basis, but are instead used as needed for problem-solving and planning.

Know Your Assets

Once you have a picture of the network you start evaluating the assets on it: servers, endpoints, and other hardware. Security tends to have better tools and experience for scanning and analyzing assets than underlying network structure, especially for workstations. Depending on how mature you are at this point, either prioritize your scanning to particular network segments or use the information from the network map to target weak spots in your analysis. Endpoint tools such as configuration/patch management or endpoint protection platforms offer some information, but you also need to integrate a security scan (perhaps a vulnerability assessment) to identify problems. As before, this really needs to be a continuous process using automated tools. You also need a sense of the importance of the assets, especially in data centers, so you can prioritize defenses. This is a tough one, so make your best guesses if you have to – it doesn’t need to be perfect.

Know Your Security

You need to collect detailed information on three major pieces of network security:

Base infrastructure security. This includes standard perimeter security, and anything you have deployed internally to enforce any kind of compartmentalization or detection.
Think firewalls (including NGFW), intrusion detection, intrusion prevention, network forensics, Netflow feeds to your SIEM, and similar – things designed primarily to protect the core network layer. Even network access control, for both of you using it.

Extended security tools. These are designed to protect particular applications and activities, such as your secure mail gateway, web filter, web application firewalls, DLP, and other “layer 7” tools.

Remote access. Security tends to be tightly integrated into VPNs and other remote access gateways. These aren’t always managed by security, but unlike network routers they have internal security settings that affect network access.

For each component, collect its location and configuration. You don’t need all the deep particulars of a WAF or DLP (beyond what they are positioned to protect), but you certainly need complete details of base infrastructure tools. Yes, that means every firewall rule. Also determine how you manage and maintain each of those tools: Who is responsible? How do they manage it? What are the policies?

Map the Topology

This is the key step, where you align your network topology, assets (focusing on bulk and critical analysis, not every single workstation), and existing security controls. There are then two kinds of analysis to perform:

A management analysis to determine who manages all the security and network assets, and how. Who keeps firewall X up and running? How? Using which tool? Who manages the network hardware that controls the routing firewall X is responsible for? Do you feed Netflow data from this segment to the SIEM? IDS alerts? The objective is to understand the technical underpinnings of your network security management, and the meatspace mapping of who is responsible for what.

A controls analysis to ensure the right tools are in the right places with the right configurations. Again, you probably want to prioritize this by assets.
Do you use application-aware firewalls (NGFW) where you need them? Are firewalls configured correctly for the underlying network topology? Do you segment internal networks? Do you capture network traffic for attack detection in the right places? Are there network segments or locations that lack security controls because you didn’t know about them? Is that database really safe behind a firewall, or is it totally unprotected if a user clicks the wrong link in a phishing email?
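The controls analysis lends itself to simple automation once the earlier inventory steps are done. A minimal, hypothetical sketch – the tiers, segment names, and required-control lists are all illustrative assumptions, not a standard:

```python
# Hypothetical sketch: for each segment, check deployed controls against
# what its sensitivity tier requires. All tiers/controls are made up.

REQUIRED = {
    "restricted": {"firewall", "ids", "netflow"},
    "internal":   {"firewall", "netflow"},
    "dmz":        {"firewall", "ids"},
}

def control_gaps(segments: dict[str, dict]) -> dict[str, set[str]]:
    """Return required-but-missing controls, keyed by segment name."""
    gaps = {}
    for name, seg in segments.items():
        missing = REQUIRED[seg["tier"]] - seg["controls"]
        if missing:
            gaps[name] = missing
    return gaps

# Illustrative inventory, as built in the earlier steps.
segments = {
    "pci-zone": {"tier": "restricted", "controls": {"firewall", "ids"}},
    "corp-lan": {"tier": "internal",   "controls": {"firewall", "netflow"}},
}

print(control_gaps(segments))  # {'pci-zone': {'netflow'}}
```

Even a crude matrix like this surfaces segments missing required controls before an auditor – or an attacker – finds them for you.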


Thinking Small and Not Leading

Dave Elfering had a good post, making clear the difference between managing and leading:

“I thought my job as a security leader was to produce detailed policies that might as well have been detailed pseudocode executed by robots. If you are tasked with truly leading the security program for a company or organization, then lead; quit trying to be a combination of thought police and babysitter. Detailed policies are necessary in some circumstances, but overall they are unsustainable. Let’s dive back into the Army manual [Army Planning and Orders Production FM 5-0] for a moment. ‘Effective planning incorporates the concept of mission command… concentrates on the objective of an operation and not on every detail of how to achieve that objective.’”

I always talked about managing to outcomes when I had corporate jobs. I didn’t want to tell folks how to get things done. I just told them what needed to be done and figured they could work out the rest. Mostly because half the time I wasn’t sure what to do, and the other half of the time I was too lazy to do it for them. Kidding aside, that’s how I learned the most.

It’s not much different in security. You need to lead your security program with a light touch. Think big-picture objectives and, as Dave says, managing intent – not task lists, which is small thinking. You can’t make folks within the business do things, not over the long term anyway. Hell, most of the time you can’t even make your own team do things. So you need to persuade them that it’s in their best interests. That means leading, not just managing the details and expecting your employees to just get it.

This is not easy. It’s usually easier to write the policy and become Dr. No. But that approach also means you’ll be looking for another job in the near term. More stuff they don’t teach you in any of those security certification classes, eh?
Photo credit: “If you are not the lead dog your view never changes #grommet” originally uploaded by Nic Wise


New Series: The Executive Guide to Pragmatic Network Security Management

This is the first post in a new paper I’m writing. The entire paper is also posted on GitHub for direct feedback and suggestions. As an experiment, I prefer feedback on GitHub, but will also take it here, as usual.

The Demise of Network Security Has Been Greatly Exaggerated

DLP, IPS, NGFW, WAF. Chief Information Security Officers today suffer no shortage of network security tools to protect their environments, but most CISOs we talk with struggle to implement and maintain an effective network security program. They tell us it isn’t a lack of technologies or even necessarily resources (not that there are ever enough), but the inherent difficulty of defending a large, amorphous, business-critical asset with tendrils throughout the organization. It’s never as simple as magazine articles and conference presentations make it out to be.

Managing network security at scale is not easy, but the organizations that do it best tend to follow a predictable, repeatable pattern. This paper distills those lessons into a pragmatic process designed for larger organizations and those with more complicated networks (such as medium-sized businesses with multiple locations). We won’t make the false claim that our process is magical or easy, but it is certainly easier than many alternatives. Even if you only pick out a few tidbits, it should help you refine and operate your network security more efficiently. The network is the aspect of our infrastructure that ties everything else together, and the more we can do to secure it efficiently and effectively, the better.

Why Network Security Is So Darn Difficult

Networks and endpoints are the two most fundamental pieces of our IT infrastructure, yet despite decades of advancement they still consume a disproportionate amount of our security resources. First the good news: we are far more resilient to network attacks than even five years ago.
The days of Internet-wide worms knocking down enterprises while script kiddies deface websites are mostly in the past. But every CISO knows establishing and maintaining network security is a constant challenge, even if they can’t always articulate why. We have narrowed it down to a handful of root causes, which this Pragmatic process is designed to address:

  • Security and operations are divided. IT Operations is responsible for and manages the network, servers, endpoints, and applications, while information security is responsible for defending everything. Basically, security protects the enterprise from the outside – lacking insight into what is being protected, where it is, and how everything connects together. In many cases security doesn’t even know how all the pieces of the network are connected, but is still expected to manage firewall rules to protect it. Many of our recommendations are designed to bridge this divide without throwing away traditional organizational boundaries.
  • Networks are dynamic and complex. Not only are new assets constantly joining and leaving the network, but its structure is never static, especially for larger organizations.
  • Organic growth. All networks grow over time. Perhaps it’s a new office, an extended WiFi network, or an extra switch or router in the datacenter. Not all of these have major security implications, but they add up over time.
  • Mergers and acquisitions require blending resources, technologies, and different configurations.
  • New technologies with different network requirements are constantly added, from a new remote access portal to an entire private cloud.
  • We mix and match various security tools, often with overlapping functionality. This is sometimes a result of different branches of the company operating partially or completely autonomously, and sometimes results from turnover, project requirements, or keeping auditors happy.
  • Needs change over time.
Many organizations today are working on consolidating network perimeters, compartmentalizing internal networks, adding application awareness, expanding egress monitoring and filtering for breach and infection defenses, or adapting the network for cloud computing and eventually SDN. Network and network security technologies evolve to meet new business needs and evolving threats.

Our networks are large and complex, sometimes even when our organizations aren’t. They change constantly, as do the assets connected to them. Security doesn’t manage this infrastructure, but is tasked with protecting it. Network security management is about improving both security and efficiency to keep up.

From Blocking and Tackling to Integrated Defense

Our primary goal is to adopt processes flexible enough to account for an ever-changing network environment, while avoiding the constant firefighting that is so inefficient. The key isn’t any particular technology or security trick, but better integrating defenses into the day-to-day management of the enterprise.

What makes it pragmatic? The process is designed to work in the real world, without gutting or stumbling over organizational and bureaucratic divisions. We get it – even if you are the CEO, there are limits to change. We have collected the best practices we have seen work in the real world, and lined them up in a practical, achievable process that accounts for real-world restrictions.

Our next sections will dig into the process. As we said earlier, pick and choose the parts which work for you.

