
How I got a CISSP and ended up nominated for the Board of Directors

About two years ago I was up in Toronto, having dinner with James Arlen and Dave Lewis (@myrcurial and @gattaca). Since Dave was serving on the (ISC)2 Board of Directors, and James and I were not CISSPs, the conversation inevitably landed on our feelings about the relative value of the organization and its certifications.

I have been mildly critical of the CISSP for years. Not rampant hatred, but more an opinion that the cert didn't achieve its stated goals. It had become less an educational tool, and more something to satisfy HR departments. Not that there is anything inherently wrong with looking for certifications. As an EMT, and a former paramedic, I've held a dozen or more medical, firefighting, and rescue certifications in my career, some of them legally required for the job. (No, I don't think we can or should do the same for security, but that's fodder for another day.)

While I hadn't taken the CISSP test, I did once, over a decade earlier, take a week-long class and look at becoming certified. I was at Gartner at the time, and the security team only had one CISSP. So I was familiar with the CBK, which quickly disillusioned me. It barely seemed to reflect the skills base that current, operational security professionals needed. It wasn't all bad, it just wasn't on target. Then I looked at the ethics requirements, which asked if you ever "associated with hackers". Now I know they meant "criminals", but that isn't what was on paper, and, to me, that is the kind of mistake that reflects a lack of understanding of the power of words, or even the meaning of that particular word, from an organization that represents the very profession most directly tied to the hacker community. Out-of-touch content and a poorly written code of ethics weren't something I felt I needed to support, and thanks to where I was in my career I didn't need them.

To be honest, James and I teamed up a bit on Dave that night, asking him why he would devote so much time to an organization he, as a hacker, technically couldn't even be a part of. That's right about the time he told us to put up or shut up. You see, Dave helped get the code of ethics updated and had that provision removed. And he, and other board members, had launched a major initiative to update the exam and the CBK. He challenged us to take the test, THEN tell him what we thought. (He had us issued tokens, so we didn't pay for the exam.) He saw the (ISC)2 not merely as a certification entity, but as a professional organization with the membership and position to actually advance the state of the profession, given the right leadership (and the support of the members).

James and I each later took the exam (nearly a year later, in my case), and we approached it differently: he studied, I went in cold. Then we sent feedback on our experience to Dave to pass on to the organization. We wanted to see if the content was representative of what security pros really need to know to get their jobs done. While I can't discuss the content, it was better than I expected, but still not where I thought it needed to be. (This was one version back from the current exam.) Over that time additional friends and people I respect joined the Board, and continued to steer the (ISC)2 in interesting directions.

I never planned on actually getting my CISSP. It really isn't something I needed at this point in my career.
But the (ISC)2 and the Cloud Security Alliance had recently teamed up on a new certification directly tied to the CCSK we (Securosis) manage for the CSA, and I was gently pressured to become more involved in the relationship and course content. Plus, my friends in the (ISC)2 made a really important, personally impactful point.

As a profession we face the greatest social, political, and operational challenges since our inception. Every day we are in the headlines, called before lawmakers, and fighting bad guys and, at times, our own internal political battles. But our only representation, speaking in our name, is lone individuals and profit-oriented companies. The (ISC)2 is potentially positioned to play a very different role. It's a not-for-profit, run by directors chosen in open elections. The people I knew who were active in the organization saw, and still see, the chance for it to continue to evolve into something more than a certification shop.

I submitted my paperwork. Then, the same day I was issued my certification, I found out I was nominated for the Board. Sorta didn't really expect that.

Accepting wasn't a simple decision. I already travel a lot, and had to talk it over with my wife and coworkers (all of whom advised me against it, due to the time commitment). But something kept nagging at me. We really do need a voice: an organization with the clout and backing to represent the profession. Now, I fundamentally don't believe any third party can ever represent all the opinions of any constituency. I sure as hell have no right to assume I speak for everyone with 'security' in their title, but without some mutual agreement, all that will happen is that those with essentially no understanding of what we do will make many of the decisions that shape our future.

That's why I'm running for the Board of the (ISC)2. Because to play that role, the organization needs to continue to change. It needs to become more inclusive, with a wider range of certification and membership options that better reflect operational security needs. It should also reach out to more of the community, particularly researchers, offensive security professionals, and newer, less experienced security pros. It needs to actually offer them something; something more than a piece of paper that will help their resume.


Chewie, We’re Home

Every week, we here at Securosis like to highlight the security industry's most important news in our Friday Summary: those events that not only made the press, but are likely to significantly impact your professional lives and, potentially, the well-being of the organizations you work for.

Ah, who am I kidding. Let's talk Star Wars. If you didn't know a new trailer for The Force Awakens was released this week, you can't be reading this, because you are either deceased (like a parrot) or currently imprisoned in an underground bunker by a religious fanatic who is feeding you nutritional supplements so he/she can harvest your organs and live for eternity. I can't imagine any other legitimate options. Stick with me for a minute; I really do have a point or two.

Like many of you, Star Wars played an incredibly influential role in my life. The first film hit when I was six, and it helped form the person I would eventually become. I know, cheesy and maybe weird or nerdy, but as children we all grab onto stories and metaphor to develop our own worldview. For some of you it was religion (that is pretty much the purpose of the Bible), or a book series, or a blend of influences. For me, Star Wars always stood far above and beyond anything else outside the direct guidance of my parents. Martial arts, public service, a love of aviation and space, and a fundamental recognition of the importance of helping and protecting others all trace back, to some degree, to the film series. Perhaps I would have grabbed onto those principles anyway, but at this point that experiment's control group vaporized decades ago.

I have, perhaps, an overconfidence in the new film. I've already bought tickets for opening night and the following day, and could only stop tearing up at the trailer through intense immersion therapy. Unlimited bandwidth FTW.

There was a fascinating article in The New Yorker this week. The author admitted a love for the original trilogy, but claimed that now that we are adults, there is no chance for a new entry to create the same wonder the originals did for thousands (millions?) of children in theaters. That the new films must, of necessity, be for children, as adults are no longer capable of generating such emotions. You know, pretty much what you would expect The New Yorker to publish. The day I no longer believe a story can make me feel wonder is the day I ask Reverend Billy to finally remove my dead heart and implant it in that goat that makes our cheese (in the bunker; keep up, people). Maybe the new film won't hit that lofty goal (although the trailer sure did), but you can't close your mind to the possibility. Okay, maybe Star Wars isn't your thing, but if you no longer believe stories even have the potential to engender childlike joy, that's a loss of hope with profound personal implications.

I'm also fascinated to see how Star Wars changes for my children. Already the expanded universe is creating a different relationship with the canon. Growing up I only had Artoo and Threepio, but they now have Chopper (from Rebels, a really great show) and BB-8. My two-year-old is already obsessed with BB-8 and insists my Sphero toy sit next to him when he watches TV. When the battery runs out he likes to tell me "BB-8 sad". They will never experience things the way I did. Maybe they'll love it, maybe they won't; that's for them to decide (after my meddling influence). But there is one aspect of the new films that, as a parent, endlessly excites me.
The prequels weren't merely bad films; they did nearly nothing to advance the story. They gave us the visuals of the history of Vader, and a few poorly retconned story beats, but they didn't tell us anything material we didn't already know. There was no anticipation between the films, not like when Empire came out and my friends and I spent three years debating whether Darth was Luke's father, or whether it was merely another Sith lie.

In two months we get to see an entirely new Star Wars that continues the story that started nearly 40 years ago. And, though I'm really just guessing here, I'm pretty sure Episode VII is going to end in a cliffhanger that won't be resolved for another two to three years, if not the full six years to finish this next trilogy. My children will get a new story that will play out over a third of their childhood. Not some movies based on existing books, however well written and popular. Not a television series they see every week or can marathon on Netflix. Three films. Six years. So popular (just a guess) that they extend Star Wars' already deep influence on our global consciousness. The ending unknown until my entire family, the youngest by then eight or nine (not two), the oldest bordering on a teenager, sits together in the theater as the lights dim, the curtain peels back, and the familiar fanfare blasts from the speakers.

No, maybe I won't ever feel the same as that day in 1977 when I sat next to my father and that first Star Destroyer loomed above our heads. I'm older, capable of far more emotional depth, with an ever greater need to escape the responsibilities of adulthood and the painful irrationality of the real world. But knowing that my children sitting next to me are building their own memories, and experiencing their own wonder? It's going to be so much better.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian was on a webinar on building secure software.

Securosis Posts
  • Incite 10/21/2015: Appreciating the Classics.
  • re:Invent Yourself (or else).

Favorite Outside Posts
  • Adrian: The MySpace Worm That Changed The Internet Forever. Re-coding someone else's site for fun, and unintentionally releasing the SAMY worm. Great story.
  • Mike: Threat Intelligence-driven Risk Analysis –


Incite 10/21/2015: Appreciating the Classics

It has been a while since I've mentioned my gang of kids. XX1, XX2, and the Boy are alive and well, despite the best efforts of their Dad. All of them started new schools this year, with XX1 starting high school (holy crap!) and the twins starting middle school. So there has been a lot of adjustment. They are growing up, and it's great to see.

It's also fun because I can start to pollute them with the stuff I find entertaining. Like classic comedies. I've always been a big fan of Monty Python, but that wasn't really something I could show an 8-year-old. Not without getting a visit from Social Services. I knew they were ready when I pulled up a YouTube clip of the classic Mr. Creosote sketch from The Meaning of Life, and they were howling. Even better was when we went to the FroYo (which evidently is the abbreviation for frozen yogurt) place and they reminded me it was only a wafer-thin mint.

I decided to press my luck, so one Saturday night we watched Monty Python and the Holy Grail. They liked it, especially the skit with the Black Knight ("It's merely a flesh wound!"). And the ending really threw them for a loop. Which made me laugh. A lot. Inspired by that, I bought the Mel Brooks box set, and the kids and I watched History of the World, Part 1, and laughed. A lot. Starting with the gorilla scene, we were howling through the entire movie. Now at random times I'll be told that "it's good to be the king!", and it is.

My other parenting win was when XX1 had to do a project at school to come up with a family shield. She was surprised that the Rothman clan didn't already have one. I guess I missed that project in high school. She decided that our family animal would be the Honey Badger. Mostly because the honey badger doesn't give a s**t. Yes, I do love that girl. Even better, she sent me a Dubsmash, which is evidently a thing, of her talking over the famous Honey Badger clip on YouTube. I was cracking up.

I have been doing that a lot lately. Laughing, that is. And it's great. Sometimes I get a little too intense (yes, really!) and it's nice to have some foils in the house now, who can help me see the humor in things. Even better, they understand my sarcasm and routinely give it right back to me. So I am training the next generation to function in the world by not taking themselves so seriously, and that may be the biggest win of all. –Mike

Photo credit: "Horse Laugh" originally uploaded by Bill Gracey

Thanks to everyone who contributed to my Team in Training run to battle blood cancers. We've raised almost $6,000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • Oct 19 – re:Invent Yourself (or else)
  • Aug 12 – Karma
  • July 13 – Living with the OPM Hack
  • May 26 – We Don't Know Sh–. You Don't Know Sh–
  • May 4 – RSAC wrap-up. Same as it ever was.
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It's Not My Fault!
  • January 26 – 2015 Trends
  • January 15 – Toddler
  • December 18 – Predicting the Past
  • November 25 – Numbness
  • October 27 – It's All in the Cloud
  • October 6 – Hulk Bash

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • Building Security into DevOps: The Role of Security in DevOps; Tools and Testing in Detail; Security Integration Points; The Emergence of DevOps; Introduction
  • Building a Threat Intelligence Program: Using TI; Gathering TI; Introduction
  • Network Security Gateway Evolution: Introduction

Recently Published Papers

  • Pragmatic Security for Cloud and Hybrid Networks
  • EMV Migration and the Changing Payments Landscape
  • Applied Threat Intelligence
  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • Securing Enterprise Applications
  • Secure Agile Development
  • The Future of Security

Incite 4 U

The cloud poster child: As discussed in this week's FireStarter, the cloud is happening faster than we expected. And that means security folks need to think about things differently. As if you needed more confirmation, check out this VentureBeat profile of Netflix and their move toward shutting down their data centers to go all in on Amazon Web Services. The author of the article calls this the future of enterprise tech, and we agree. Does that mean existing compute, networking, and storage vendors go away? Not overnight, but in 10-15 years infrastructure will look radically different. Radically. But in the meantime things are happening fast, and folks like Netflix are leading the way. – MR

Future – in the past tense: TechCrunch recently posted The Future of Coding Is Here, outlining how the arrival of APIs (Application Programming Interfaces) has ushered in a new era of application development. The fact is that RESTful APIs have pretty much been the lingua franca of software development since 2013, with thousands of APIs available for common services. By the end of 2013 every major API gateway vendor had been acquired by a big IT company. That was because APIs are an enabling


re:Invent Yourself (or else)

A bit over a week ago we were all out at Amazon's big cloud conference, which is now up to 19,000 attendees. Once again it got us thinking about how quickly the world is changing, and the impact that will have on our profession. Now that big companies are rapidly adopting the public cloud (and they are), that change is going to hit even faster than before. In this episode the Securosis team lays out some of what that means, and why now is the time to get on board. Watch or listen:


It’s a Developer’s World Now

Last week Mike, Adrian, and I were out at the Amazon re:Invent conference. It's the third year I've attended, and it has become one of the core events of the year for me, even more important than most of the security events. To put things in perspective, there were over 19,000 attendees, and this is only the fourth year of the conference.

While there, I tweeted that all security professionals need to get their asses to some non-security conferences. Specifically, to cloud or DevOps events. It doesn't need to be Amazon's show, but it certainly needs to be one from either a major public cloud provider (and really, only Microsoft and Google are on that list right now), or something like the DevOps Enterprise Summit next week (which I have to miss).

I always thought cloud and automation in general, and public cloud and DevOps (once I learned the name) in particular, would become the dominant operational model and framework for IT. What I absolutely underestimated is how friggen fast the change would happen. We are, flat out, three years ahead of my expectations in terms of adoption. Nearly all my hallway conversations at re:Invent this year were with large enterprises, not the startups and mid-market of the first year. And we had plenty of time for those conversations, since Amazon seriously needs to improve their con traffic management.

With cloud, our infrastructure is now software defined. With DevOps (defined as a collection of things beyond the scope of this post), our operations also become software defined (since automation is essential to operating in the cloud). Which means, well, you know what this means… We live in a developer's world.

This shouldn't be any sort of big surprise. IT always runs through phases where one particular group is relatively "dominant" in defining our enterprise use of technology. From mainframe admins, to network admins, to database admins, we've circled around based on which pieces of our guts became most essential to running the business. I'm on record as saying cloud computing is far more disruptive than our adoption of the Internet. The biggest impact on security and operations is this transition to software defined everything. Yes, somewhere someone still needs to wire the boxes together, but it won't be most of the technology workforce.

Which means we need to internalize this change, and start understanding the world of those we will rely on to enable our operations. If you aren't a programmer, you need to get to know them, especially since the tools we typically rely on are moving much more slowly than the platforms we run everything on. One of the best ways to do this is to start going to some outside (of security) events. And I'm dead serious that you shouldn't merely go to a cloud or DevOps track at a security conference, but immerse yourself at a dedicated cloud or DevOps show. It's important to understand the culture and priorities, not merely the technology or our profession's interpretation of it. Consider it an intelligence gathering exercise to learn where the rest of your organization is headed. I'm sure there's an appropriate Sun Tzu quote out there, but if I used it I'd have to nuke this entire site and move to a security commune in the South Bay. Or Austin. I hear Austin's security scene is pretty hot.

Oh, and being Friday, I suppose I should insert the Friday Summary below and save myself a post.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

A bunch of stuff this week, but the first item, Mike's keynote, is really the one to take a look at.

  • Mike's HouSecCon keynote.
  • Rich at GovInfoSecurity on the AWS not-a-hack.
  • Adrian at CSO on why merchants are missing the EMV deadlines.
  • Rich at the Daily Herald on Apple's updated privacy site.
  • Rich at Macworld/IDG on the "uptick" in OS X malware. TL;DR: it's still less than the new Windows malware created every hour.
  • Rich, again on Apple privacy. This time at the Washington Post.
  • Rich on Amazon's new Inspector product, over at Threatpost.
  • And one last Apple security story with Rich. This time over at Wired, on iOS malware.

Recent Securosis Posts

  • Building Security Into DevOps: The Role of Security in DevOps.
  • Building a Threat Intelligence Program: Using TI.
  • Building Security Into DevOps: Tools and Testing in Detail.
  • New Report: Pragmatic Security for Cloud and Hybrid Networks.
  • Building Security Into DevOps: Security Integration Points.
  • Pragmatic Security for Cloud and Hybrid Networks: Design Patterns.
  • Pragmatic Security for Cloud and Hybrid Networks: Building Your Cloud Network Security Program.

Favorite Outside Posts

  • Mike: US taxman slammed: Half of the IRS's servers still run doomed Windows Server 2003. Uh, how do you lose 1,300 devices?
  • Chris Pepper: How is NSA breaking so much crypto?
  • Rich: Teller Reveals His Secrets. As in Penn and Teller. I've always loved magic, especially since I realized it is a pure form of science codified over thousands of years. So is con artistry, BTW.
  • Dave Lewis: What's Holding Back the Cyber Insurance Industry? A Lack of Solid Data (http://www.nextgov.com/cybersecurity/2015/10/whats-holding-back-cyber-insurance-industry-lack-solid-data/122790/?oref=NextGovTCO).

Research Reports and Presentations

  • Pragmatic Security for Cloud and Hybrid Networks.
  • EMV Migration and the Changing Payments Landscape.
  • Network-based Threat Detection.
  • Applied Threat Intelligence.
  • Endpoint Defense: Essential Practices.
  • Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
  • Security and Privacy on the Encrypted Network.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.

Top News and Posts

  • Beware of Oracle's licensing 'traps,' law firm warns
  • Chip & PIN Fraud Explained – Computerphile
  • Hacker Who Sent Me Heroin Faces Charges in U.S.
  • Troy's ultimate list of security links
  • Summary of the Amazon DynamoDB Service Disruption and Related Impacts in the US-East Region
  • Emergency Adobe Flash Update Coming Next Week
  • Researchers Find 85 Percent of Android Devices Insecure


Building Security Into DevOps: The Role of Security in DevOps

In today's post I am going to talk about the role of security folks in DevOps. A while back we published a research paper on Putting Security Into Agile Development; the feedback we got was that the most helpful part of that report was its guidance on how security people can best work with development. Positioning security to help development teams be more Agile worked well, so in this portion of our DevOps research we will strive to provide similar examples of the role of security in DevOps.

There is another important aspect that frames today's discussion: there really is no such thing as SecDevOps. The beauty of DevOps is that security becomes part of the operational process of integrating and delivering code. We don't call security out as a separate thing, because it is actually not separate, but (can be) intrinsic to the DevOps framework. We want security professionals to keep this in mind when considering how they fit within this new development framework. You will need to play one or more roles in the DevOps model of software delivery, and look at how you can improve the delivery of secure code without introducing waste or bottlenecks. The good news is that security fits within this framework nicely, but you'll need to tailor your security tests and tools to the overall model your firm employs.

The CISO's Responsibilities

Learn the DevOps process: If you're going to work in a DevOps environment, you need to understand what it is and how it works. You need to understand what build servers do, how test environments are built up, the concept of fully automated work environments, and what gates each step in the process. Find someone on the team and have them walk you through the process and introduce the tools. Once you understand the process, the security integration points become clear. And once you understand the mechanics of the development team, the best way to introduce different types of security testing also becomes evident.

Learn how to be agile: Your participation in a DevOps team means you need to fit into DevOps, not the other way around. The goal of DevOps is fast, faster, fastest: small iterative changes that offer quick feedback. You need to adjust requirements and recommendations so they can be part of the process, and be as hands-off and automated as possible. If you're going to recommend manual code reviews or fuzz testing, that's fine, but you need to understand where those tests fit within the process, and what can, or cannot, gate a release.

How CISOs Support DevOps

Training and Awareness

Educate: Our experience shows one of the best ways to bring a development team up to speed on security is training: in-house explanations or demonstrations, third-party experts to help threat model an application, eLearning, or courses offered by various commercial firms. The downside historically has been cost, with many classes costing thousands of dollars. You'll need to evaluate how best to use your resources, which usually includes some eLearning for all employees, and having select people attend a class and then teach their peers. On-site experts can also be expensive, but you can have an entire group participate in training.

Grow your own support: Security teams are typically small, and often lack budget. What's more, security people are not present in many development meetings; they lack visibility into day-to-day DevOps activities.
To help extend the reach of the security team, see if you can get someone on each development team to act as an advocate for security. This not only extends the security team's reach, but also helps grow security awareness within the development process.

Help them understand threats: Most developers don't fully grasp how attackers approach attacking a system, or what it means when a SQL injection attack is possible. The depth and breadth of security threats is outside their experience, and most firms do not teach threat modeling. The OWASP Top Ten is a good guide to the types of code deficiencies that plague development teams, but map these threats back to real-world examples: show the extent of damage that can occur from a SQL injection attack, or how a Heartbleed-type vulnerability can completely expose customer credentials. Real-world use cases go a long way toward helping developers and IT understand why protection against certain threats is critical to application functions.

Advise

Have a plan: The entirety of your security program should not be 'encrypt data' or 'install WAF'. All too often, developers and IT have a single idea of what constitutes security, centered on a single tool they want to set and forget. Help build out the elements of the security program, including both in-code enhancements and supporting tools, and show how each effort addresses specific threats.

Help evaluate security tools: It's common for people outside security to not understand what security tools do, or how they work. Misconceptions are rampant, not only because security vendors over-promise capabilities, but also because developers rarely evaluate code scanners, activity monitors, or even patch management systems. In your role as advisor, it's up to you to help DevOps understand what the tools can provide, and what fits within your testing framework. Sure, you may not be able to evaluate the quality of the API, but you can certainly tell when a product does not deliver meaningful results.

Help with priorities: Not every vulnerability is a risk. Worse, security folks have a long history of sounding like the terrorism threat scale, with vague warnings about 'severe risk' or 'high threat levels'. None of these warnings are valuable without mapping the threat to possible exploitations, or to what you can do to address or reduce the risks. For example, an application may have a critical vulnerability, but you have options of fixing it in the code,
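To illustrate the kind of real-world example mentioned above, here is a minimal sketch contrasting an injectable query with its parameterized fix. The function and table names are hypothetical, and the payload is the classic tautology attack:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated into the SQL statement.
    # A payload like  ' OR '1'='1  turns the WHERE clause into a
    # tautology and returns every row in the table.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: the driver binds the value as data, never as SQL syntax,
    # so the same payload simply matches no rows.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

# Tiny demonstration with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com', 'alice')")
payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # leaks every row
print(find_user_safe(conn, payload))    # returns []
```

Walking a team through an example like this, against their own code, tends to land far better than an abstract severity rating.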


Building a Threat Intelligence Program: Using TI

As we dive back into the Threat Intelligence Program, recall that we have summarized why a TI program is important and how to gather intelligence. Now we need a programmatic approach for using TI to improve your security posture and accelerate your response and investigation functions. To reiterate (because it has been a few weeks since the last post), TI allows you to benefit from the misfortune of others: it's likely that other organizations will get hit with attacks before you, so you should learn from their experience. Like the old quote, "Wise men learn from their mistakes, but wiser men learn from the mistakes of others." But knowing what happened to others isn't enough. You must be able to use TI in your security program to gain any benefit.

First things first. We have plenty of security data available today, so the first step in your program is to gather the appropriate security data to address your use case. That means taking a strategic view of your data collection process, both internally (collecting your own data) and externally (aggregating threat intelligence). As described in our last post, you need to define your requirements (use cases, adversaries, alerting or blocking, integration with monitors/controls, automation, etc.), select the best sources, and then budget for access to the data.

This post will focus on using threat intelligence. First we will discuss how to aggregate TI, then how to use it to solve key use cases, and finally how to tune your ongoing TI gathering process to get maximum value from the TI you collect.

Aggregating TI

When aggregating threat intelligence, the first decision is where to put the data. You need it somewhere it can be integrated with your key controls and monitors, with some level of security and reliability. Even better if you can gather metrics on which data sources are the most useful, so you can optimize your spending. Start by asking some key questions:

  • To platform or not to platform? Do you need a standalone platform, or can you leverage an existing tool like a SIEM? Of course it depends on your use cases, and the amount of manipulation and analysis you need to perform on your TI to make it useful.
  • Should you use your provider's portal? Each TI provider offers a portal you can use to get alerts, manipulate data, etc. Will it be good enough to solve your problems? Do you have an issue with some of your data residing in a TI vendor's cloud? Or do you need the data pumped into your own systems, and how will that happen?
  • How will you integrate the data into your systems? If you do need to leverage your own systems, how will the TI get there? Are you depending on a standard format like STIX/TAXII? Do you expect out-of-the-box integrations?

Obviously these questions are pretty high-level, and you'll probably need a couple dozen follow-ups to fully understand the situation.

Selecting the Platform

In a nutshell, if you have a dedicated team to evaluate and leverage TI, have multiple monitoring and/or enforcement points, or want more flexibility in how broadly you use TI, you should consider a separate intelligence platform or 'clearinghouse' to manage TI feeds. Assuming that's the case, here are a few key criteria for selecting a stand-alone threat intelligence platform:

  • Open: The TI platform's task is to aggregate information, so it must be easy to get information into it.
    Intelligence feeds are typically just data (often XML), increasingly distributed in industry-standard formats such as STIX, which makes integration relatively straightforward. But make sure any platform you select supports the data feeds you need. Be sure you can use the data that's important to you, and are not restricted by your platform.
  • Scalable: You will use a lot of data in your threat intelligence process, so scalability is essential. But computational scalability is likely more important than storage scalability: you will be intensively searching and mining aggregated data, so you need robust indexing. Unfortunately scalability is hard to test in a lab, so ensure your proof-of-concept testbed is a close match for your production environment, and that you can extrapolate how the platform will scale in production.
  • Search: Threat intelligence, like the rest of security, doesn't lend itself to absolute answers. So make TI the beginning of your process of figuring out what happened in your environment, and leverage the data for your key use cases as described earlier. One clear requirement across all use cases is search. Be sure your platform makes searching all your TI data sources easy.
  • Scoring: Using threat intelligence is all about betting on which attackers, attacks, and assets are most important to worry about, so a flexible scoring mechanism offers considerable value. Scoring factors should include assets, intelligence sources, and attacks, so you can calculate a useful urgency score. It might be as simple as red/yellow/green, depending on the sophistication of your security program. A small sketch of this idea appears after this list.

Key Use Cases

Our previous research has focused on how to address the key use cases, including preventative controls (FW/IPS), security monitoring, and incident response. But a programmatic view requires expanding those general concepts into a repeatable structure, to ensure ongoing efficiency and effectiveness. The general process for integrating TI into your use cases is consistent, with some variations we will discuss below under specific use cases.

Integrate: The first step is to integrate the TI into the tools for each use case, which could be security devices or monitors. That may involve leveraging the tools' management consoles to pull in the data and apply the controls. For simple TI sources such as IP reputation, this direct approach works well. For more complicated data sources you'll want to perform some aggregation and analysis on the TI before updating rules running on the tools. In that case you'll expect your TI platform to integrate with the tools.

Test and Trust: The key concept here is trustable automation. You want to make sure any rule changes driven by TI go
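Picking up the Scoring criterion above: here is a minimal sketch of a red/yellow/green urgency calculation. The field names, weights, and thresholds are all hypothetical, not drawn from any particular TI platform or feed format:

```python
# Hypothetical normalized indicators, as they might look after a TI
# feed has been parsed. Field names are illustrative only.
indicators = [
    {"ioc": "198.51.100.7", "source_confidence": 0.9, "asset_criticality": 3},
    {"ioc": "203.0.113.44", "source_confidence": 0.4, "asset_criticality": 1},
]

def urgency(indicator):
    # Combine trust in the intelligence source with how much we care
    # about the asset the indicator was observed against.
    score = indicator["source_confidence"] * indicator["asset_criticality"]
    if score >= 2.0:
        return "red"     # act now: block, or open an incident
    if score >= 1.0:
        return "yellow"  # queue for analyst investigation
    return "green"       # log for context; no immediate action

for ind in indicators:
    print(ind["ioc"], urgency(ind))
```

Even a toy model like this forces the useful conversation: which sources do you actually trust, and which assets do you actually care about?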


Building Security Into DevOps: Tools and Testing in Detail

Thus far I've been claiming that security can be woven into the very fabric of your DevOps framework; now it's time to show exactly how. DevOps encourages testing at all phases of the process, and the earlier the better: from the developer's desktop prior to check-in, to module testing, to testing against a full application stack, both pre- and post-deployment. It's all available to you.

Where to Test

  • Unit testing: Unit testing is nothing more than running tests against small sub-components or fragments of an application. These tests are written by the programmer as they develop new functions, and are commonly run by the developer prior to code check-in. However, these tests are intended to be long-lived: checked into the source repository along with the new code, and run by any subsequent developers who contribute to that code module. For security, these can range from straightforward tests, such as SQL injection against a web form, to more complex attacks specific to the function, such as logic attacks to ensure the new bit of code correctly reacts to a user's intent. Regardless of intent, unit tests are focused on specific pieces of code, not systemic or transactional behavior. And they are intended to catch errors very early in the process, following the Deming ideal that the earlier flaws are identified, the less expensive they are to fix. In building out your unit tests, you'll need both to provide the developer infrastructure to harness these tests, and to encourage the team culturally to take them seriously enough to write good ones. Having multiple team members contribute to the same code, each writing unit tests, helps identify weaknesses the others did not consider.
  • Security regression tests: A regression test validates that recently changed code still functions as intended. In a security context it is particularly important to ensure that previously fixed vulnerabilities remain fixed. For DevOps, security regression tests are commonly run in parallel with functional tests, which means after the code stack is built out, but in a dedicated environment, as security testing can be destructive and cause unwanted side effects. Virtualization and cloud infrastructure are leveraged for quick start-up of new test environments. The tests themselves are a combination of home-built test cases, created to exploit previously discovered vulnerabilities, supplemented by commercial testing tools available via API for easy integration. Automated vulnerability scanners and dynamic code scanners are a couple of examples.
  • Production runtime testing: As we mentioned in the Deployment section of the last post, many organizations take advantage of blue-green deployments to run tests of all types against new production code. While the old code continues to serve user requests, the new code is available only to select users or test harnesses. The idea is that the tests run against a real production environment, while automation makes it far easier to set up, and to roll back in the event of errors.
  • Other: Balancing thoroughness and timeliness is a battle for most organizations. The goal is to test and deploy quickly, with many organizations that embrace CD releasing new code a minimum of 10 times a day. The quality and depth of testing become the pressing issue: if you've massaged your CD pipeline to deliver every hour, but it takes a week for static or dynamic scans, how do you incorporate those tests?
It's for this reason that some organizations do not do automated releases, but rather wrap releases into a 'sprint', running a complete testing cycle against the results of the last development sprint. Still others take periodic snapshots of the code and run white box tests in parallel, but do not gate releases on the results, choosing instead to address findings with new task cards. Another way to look at this problem: just as all of your Dev and Ops processes go through iterative and continual improvement, what constitutes 'done' for security testing prior to release will need continual adjustment as well. You may add more unit and regression tests over time, with more of the load shifted onto developers before they check code in.

Building a Tool Chain

The following is a list of commonly used security testing techniques, the value they provide, and where they fit into a DevOps process. Many of you reading this will already understand the value of these tools, but perhaps not how they fit within a DevOps framework, so we will contrast traditional and DevOps deployments. Odds are you will use many, if not all, of these approaches; breadth of testing helps thoroughly identify weaknesses in the code, and better establish whether issues are genuine threats to application security.

  • Static analysis: Static Application Security Testing (SAST) tools examine code, or runtime binaries, providing a thorough check for common vulnerabilities. These tools are highly effective at finding flaws, often within code that has already been reviewed manually. Most of the platforms have gotten much better at providing analysis that is meaningful to developers, not just security geeks, and many are updating their products to offer full functionality via APIs or build scripts. If you can, select tools that don't require 'code complete', and avoid those that fail to offer APIs for integration into the DevOps process. Also note we've seen a slight reduction in SAST use because these tests often take hours or days to run; in a DevOps environment that can rule out inline tests as a gate to certification or deployment. As mentioned in the 'Other' section above, most teams are adjusting to out-of-band testing with static analysis scanners. We highly recommend keeping SAST as part of the process and, if possible, focusing it on new sections of code only, to reduce scan duration.
  • Dynamic analysis: Dynamic Application Security Testing (DAST) tools, rather than scanning code or binaries like SAST, dynamically 'crawl' an application's interface, testing how it reacts to inputs. While these scanners do not see what's going
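To make the out-of-band SAST integration above concrete, here is a minimal sketch of a scan gate wired into a build step. It uses the open-source Bandit scanner for Python as a stand-in for whatever SAST tool your team runs; the severity threshold, source path, and gating policy are illustrative assumptions, not a recommendation of any particular product:

```python
# Sketch: gate a pipeline stage on high-severity SAST findings.
import json
import subprocess
import sys

def run_sast(src_dir="src"):
    # Machine-readable output is what makes pipeline integration
    # possible; Bandit emits JSON with -f json.
    proc = subprocess.run(
        ["bandit", "-r", src_dir, "-f", "json"],
        capture_output=True, text=True)
    report = json.loads(proc.stdout)
    return [finding for finding in report.get("results", [])
            if finding.get("issue_severity") == "HIGH"]

if __name__ == "__main__":
    findings = run_sast()
    for f in findings:
        print(f'{f["filename"]}:{f["line_number"]}: {f["issue_text"]}')
    # Fail the stage only on high severity; lower-severity findings
    # become task cards rather than gates, per the 'Other' discussion.
    sys.exit(1 if findings else 0)
```

Whether a finding blocks the release or merely opens a task card is a policy knob, and as noted above, it is one you will keep adjusting as your pipeline speeds up.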


New Report: Pragmatic Security for Cloud and Hybrid Networks

This is one of those papers I've been wanting to write for a while. When I'm out working with clients, or teaching classes, we end up spending a ton of time on just how different networking is in the cloud, and how to manage it. On the surface we still see things like subnets and routing tables, but now everything is wired together in software, with layers of abstraction meant to look the same, but not really work the same. This paper covers the basics, and even includes some sample diagrams for Microsoft Azure and Amazon Web Services, although the bulk of the paper is cloud-agnostic.

From the report:

Over the last few decades we have been refining our approach to network security. Find the boxes, find the wires connecting them, drop a few security boxes between them in the right spots, and move on. Sure, we continue to advance the state of the art in exactly what those security boxes do, and we constantly improve how we design networks and plug everything together, but overall change has been incremental. How we think about network security doesn't change – just some of the particulars.

Until you move to the cloud.

While many of the fundamentals still apply, cloud computing releases us from the physical limitations of those boxes and wires by fully abstracting the network from the underlying resources. We move into entirely virtual networks, controlled by software and APIs, with very different rules. Things may look the same on the surface, but dig a little deeper and you quickly realize that network security for cloud computing requires a different mindset, different tools, and new fundamentals. Many of which change every time you switch cloud providers.

Special thanks to Algosec for licensing the research. As usual, everything was written completely independently, using our Totally Transparent Research process. It's only due to these licenses that we are able to give this research away for free. The landing page for the paper is here. Direct download: Pragmatic Security for Cloud and Hybrid Networks (pdf)
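As a small illustration of what "controlled by software and APIs" means in practice, here is a sketch using AWS's Python SDK (boto3). The security group ID is a placeholder, and this assumes credentials and region are already configured:

```python
import boto3

# A virtual network change is just an API call: open inbound HTTPS
# on a security group, the cloud equivalent of a firewall rule that
# once meant touching a physical box.
ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16",
                      "Description": "internal HTTPS"}],
    }],
)
```

The same property that makes this convenient means a bad script can rewire your "firewall" in seconds, which is exactly why the paper argues for a different mindset and different tools.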


Building Security Into DevOps: Security Integration Points

A couple of housekeeping items before I begin today's post. We've had some issues with the site, so I apologize if you've tried to leave comments but could not; we think we have that fixed. Ping us if you have trouble. Also, I am very happy to announce that Veracode has asked to license this research series on integrating security into DevOps! We are very happy to have them on board for this one. It's support from the community and industry that allows us to bring you this type of research, all free and without registration.

For the sake of continuity I've decided to swap the order of posts from our original outline. Rather than discuss the role of security folks in a DevOps team, I am going to examine the integration of security into code delivery processes. I think it will make more sense, especially for those new to DevOps, to understand the technical flow and how things fit together before getting a handle on their role.

The Basics

Remember that DevOps is about joining Development and Operations to provide business value. The mechanics of this are incredibly important, as they explain how the two teams work together, and that is what I am going to cover today.

Most of you reading this will be familiar with the concept of 'nightly builds', where all code checked in the previous day is compiled overnight. And you're just as familiar with the morning ritual of sipping coffee while you read through the logs to see whether the build failed, and why. Most development teams have been doing this for a decade or more. The automated build is the first of many steps that companies go through on their way toward full automation of the processes that support code development. The path to DevOps typically has two phases: first continuous integration, which manages the building and testing of code, and then continuous deployment, which assembles the entire application stack into an executable environment.

Continuous Integration

The essence of Continuous Integration (CI) is that developers check in small iterative advancements to code on a regular basis. For most teams this involves many updates to the shared source code repository, and one or more 'builds' each day. The core idea is smaller, simpler additions, where we can more easily, and more often, find defects in the code. Essentially these are Agile concepts, implemented in processes that drive code instead of processes that drive people (e.g., scrums, sprints). The definition of CI has morphed slightly over the last decade, but in the context of DevOps, CI implies that code is not only built and integrated with supporting libraries, but also automatically dispatched for testing. CI in a DevOps context also implies that code modifications are not applied to a branch, but merged into the main body of the code, reducing the complexity and integration nightmares that plague development teams.

Conceptually this sounds simple, but in practice it requires a lot of supporting infrastructure. It means builds are fully scripted, and the build process occurs as code changes are made. It means that upon a successful build, the application stack is bundled and passed along for testing. It means test code is built prior to unit, functional, regression, and security testing, and those tests commence automatically when a new bundle is available. It also means that, before tests can be launched, test systems are automatically provisioned, configured, and seeded with the necessary data.
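The moving parts above are easier to see in miniature. Here is a toy sketch of a CI chain as a single script; every command and target name is illustrative, standing in for whatever your build server actually orchestrates:

```python
import subprocess
import sys

def stage(name, cmd):
    # Every stage is fully scripted; a failure halts the pipeline
    # and gets reported back to the Dev and Ops teams.
    print(f"--- {name}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        print(f"*** {name} failed; halting and notifying the team")
        sys.exit(1)

if __name__ == "__main__":
    # Imagine this fires on every check-in via a commit hook,
    # rather than being run by hand.
    stage("build", ["make", "build"])            # compile and bundle the stack
    stage("unit tests", ["make", "test"])        # includes security unit tests
    stage("provision", ["make", "test-env"])     # stand up and seed test systems
    stage("integration", ["make", "integration-test"])
```

Real build servers add parallelism, artifact storage, and notifications, but the shape is the same: scripted stages, triggered on change, with automatic pass/fail reporting.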
These automation scripts must also provide monitoring of each part of the process, with success or failure communicated back to the Dev and Operations teams as events occur. Creating the scripts and tools to make all this possible requires operations, testing, and development teams to work closely together. And this orchestration does not happen overnight; it's commonly an evolutionary process that takes months to get the basics in place, and years to mature.

Continuous Deployment

Continuous Deployment looks very similar to CI, but is focused on the release, as opposed to the build, of software to end users. It involves a similar set of packaging, testing, and monitoring, with some additional wrinkles. The following graphic was created by Rich Mogull to show both the flow of code, from check-in to deployment, and many of the tools that provide automation support.

Upon successful completion of a CI cycle, the results feed the Continuous Deployment (CD) process. CD takes another giant step forward in automation and resiliency, continuing the theme of building in tools and infrastructure that make development better first, and add functions second. CD addresses dozens of issues that plague code deployments, particularly error-prone manual changes and differences in revisions of supporting libraries between production and dev. But perhaps most important is the use of code and infrastructure to control deployments and roll back in the event of errors. We'll go into more detail in the following sections.

This is far from a complete description, but hopefully you get the basic idea of how it works. With the mechanics of DevOps in mind, let's now map security in. The difference between what you do today and what you do with DevOps should be stark.

Security Integration From an SDLC Perspective

Secure Development Lifecycles (SDLCs), sometimes called Secure Software Development Lifecycles, describe different functions within software development. Most people look at the phases of an SDLC and think 'Waterfall development process', which makes discussing SDLC in conjunction with DevOps seem convoluted. But there are good reasons to do it: the architecture, design, development, testing, and deployment phases of an SDLC map well to roles in the development organization regardless of development process, and they provide a jumping-off point for people to take what they know today and morph it into a DevOps framework.

Define operational standards: Typically in the early phases of


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments, just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.