
Firestarter: It’s Not My Fault!

Rich, Mike, and Adrian each pick a trend they expect to hammer us in 2015. Then they talk about it, probably too much. From threat intel to tokenization to SaaS security. And oh, we did have to start with a dig at the Pats. Cheating? Super Bowl? Really? Come on now. Watch or listen:


Applied Threat Intelligence: Building a TI Program

As we wrap up our Applied Threat Intelligence series, we have already defined TI and worked our way through the key use cases (security monitoring, incident response, and preventative controls) where TI can help improve your security program, processes, and posture. The last piece of the puzzle is building a repeatable process to collect, aggregate, and analyze the threat intelligence. This should include a number of different information sources, as well as various internal and external data analyses to provide context to clarify what the intel means to you.

As with pretty much everything in security, handling TI is not "set and forget". You need to build a repeatable process to select data providers and continually reassess the value of those investments. You will need to focus on integration; as we described, data isn't helpful if you can't use it in practice. And your degree of comfort in automating processes based on threat intelligence will impact day-to-day operational responsibilities.

First you need to decide where the threat intelligence function will fit organizationally. Larger organizations tend to formalize an intelligence group, while smaller entities need to add intelligence gathering and analysis to the task lists of existing staff. Of all the things that could land on a security professional, an intelligence research responsibility isn't bad. It provides exposure to cutting-edge attacks and makes a difference in your defenses, so that's how you should sell it to overworked staffers who don't want yet another thing on their to-do lists. But every long journey begins with the first step, so let's turn our focus to collecting intel.

Gather Intelligence

Early in the intelligence gathering process you focused your efforts with an analysis of your adversaries. Who are they, what are they most likely to try to achieve, and what kinds of tactics do they use to achieve their missions? You need to tackle all these questions. With those answers you can focus on intelligence sources that best address your probable adversaries. Then identify the kinds of data you need. This is where the previous three posts come in handy. Depending on which use cases you are trying to address, you will know whether to focus on malware indicators, compromised devices, IP reputation, command and control indicators, or something else.

Then start shopping. Some folks love to shop, others not so much. But it's a necessary evil; fortunately, given the threat intelligence market's recent growth, you have plenty of options. Let's break down a few categories of intel providers, with their particular value:

  • Commercial: These providers employ research teams to perform proprietary research, and tend to attain high visibility by merchandising findings with fancy exploit names and logos, spy-thriller stories of how adversary groups compromise organizations and steal data, and shiny maps of global attacks. They tend to offer particular strength regarding specific adversary classes. Look for solid references from your industry peers.
  • OSINT: Open Source Intelligence (OSINT) providers specialize in mining the huge number of information security sources available on the Internet. Their approach is all about categorization and leverage, because there is plenty of information available free. These folks know where to find it and how to categorize it. They normalize the data and provide it through a feed or portal to make it useful for your organization. As with commercial sources, the question is how valuable any particular source is to you. You already have too much data – you only need providers who can help you wade through it.
  • ISAC: There are many Information Sharing and Analysis Centers (ISACs), typically built for specific industries, to communicate current attacks and other relevant threat data among peers. As with OSINT, quality can be an issue, but this data tends to be industry-specific, so its relevance is pretty well assured. Participating in an ISAC obligates you to contribute data back to the collective, which we think is awesome. The system works much better when organizations both contribute and consume intelligence, but we understand there are cultural considerations, so you will need to make sure senior management is okay with it before committing to an ISAC.

Another aspect of choosing intelligence providers is figuring out whether you are looking for generic or company-specific information. OSINT providers are more generic, while commercial offerings can go deeper – various 'Cadillac' offerings include analysts dedicated specifically to your organization, proactively searching grey markets, carder forums, botnets, and other places for intelligence relevant to you.

Managing Overlap

With disparate data sources it is a challenge to ensure you don't waste time on multiple instances of the same alert. One key to determining overlap is understanding how each intelligence vendor gets its data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? You can categorize vendors by their tactics to help pick the best fit for your requirements.

To choose between vendors you need to compare their services for comprehensiveness, timeliness, and accuracy. Sign up for trials of a number of services and monitor their feeds for a week or so. Does one provider consistently identify new threats earlier? Is their information correct? Do they provide more detailed and actionable analysis? How easy will it be to integrate their data into your environment for your use cases? Don't fall for marketing hyperbole about proprietary algorithms, Big Data analysis, or staff linguists penetrating hacker dens and other stories straight out of a spy novel. It all comes down to data, and how useful it is to your security program. Buyer beware, and make sure you put each intelligence provider through its paces before you commit.

Our last point to stress is the importance of short agreements, especially up front. You cannot know how these services will work for you until you actually start using them. Many of these intelligence companies are startups, and might not be around in 3 or 4 years. Once you identify a set of core intelligence
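To make that trial-period comparison concrete, here is a minimal sketch (not from the post) of measuring overlap and timeliness between two feeds. It assumes each vendor's feed was exported to a CSV with "indicator" and "first_seen" (ISO 8601) columns; the file names and layout are hypothetical, and real feeds (STIX, JSON, etc.) would need their own parsing:

```python
# Minimal sketch: compare two TI feeds for overlap and timeliness during
# a trial. Assumes hypothetical CSV exports with "indicator" and
# "first_seen" columns.
import csv
from datetime import datetime

def load_feed(path):
    """Return {indicator: first_seen datetime} for one exported feed."""
    feed = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            feed[row["indicator"]] = datetime.fromisoformat(row["first_seen"])
    return feed

feed_a = load_feed("vendor_a_trial.csv")   # hypothetical export
feed_b = load_feed("vendor_b_trial.csv")   # hypothetical export

shared = set(feed_a) & set(feed_b)
union = set(feed_a) | set(feed_b)
if union:
    print(f"Overlap: {len(shared)} of {len(union)} indicators "
          f"({100 * len(shared) / len(union):.1f}%)")

# For indicators both vendors reported, which feed saw them first?
# A consistently earlier feed justifies its subscription cost.
a_earlier = sum(1 for i in shared if feed_a[i] < feed_b[i])
print(f"Vendor A was earlier on {a_earlier} of {len(shared)} shared indicators")
```

High overlap with a consistently later "first_seen" is the signal that a feed adds cost without adding coverage.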


Submit for the RSA Crowdsourced Track

Over the years the RSA Conference has racked up some (legitimate) criticism that its session selection process was too opaque, started too early for up-to-date content, and didn't always reflect the community at large. I am a bit biased because I have been involved with RSAC for a while now, and talk to the organizers year round, but I know they make a concerted effort to deal with these issues. (No, I'm not on any of the selection committees.) For example, they can't really release the names of the track leads, because there is a swarm (or is that murder?) of PR and marketing pros who are paid to get their representatives on stage, no matter what. I guarantee you that if those names got out, those individuals would be hammered directly. The early Call For Papers? This is a large event with a ton of tracks and a selection process. Hold the CFP too close to the event and it opens yet more cans of messes. Community representation? Funny you ask! This year RSAC has dedicated an entire track to crowdsourced submissions. The goal is to directly address all the criticism above:

  • Submissions are open until March 12, only a month before the conference.
  • Anyone can submit, but corporate presentations will most definitely be scrutinized.
  • The community will vote to pick the best sessions. Anyone can vote – not just RSAC attendees!
  • RSAC attendee votes get weighted more, which should help reduce gaming of the system.
  • The final selections will be made by a public panel, based on the top 25 vote-getters. The panel is composed of known entities who are used to dealing with PR and marketing techniques.

Yes, I am on the panel. I also feel honored that they approached me early to get ideas and feedback on this concept. They have put a lot of thought into this (especially Britta Glade, who probably hates me for calling her out). It won't be perfect, but it's version 1.0. If you always wanted to speak at RSA but couldn't get through the process, give it a shot. This is a great chance for new speakers, late-breaking research, and creative sessions.


Even if Anthem Had Encrypted, It Probably Wouldn’t Have Helped

Earlier today in the Friday Summary I vented frustrations at news articles blaming the victims of crimes, and often guessing at the facts. Having been on the inside of major incidents that made the international news (more physical than digital in my case), I know how little often leaks to the outside world. I picked on the Wired article because it seemed obsessed with the lack of encryption on Anthem data, without citing any knowledge or sources. Just as we shouldn't blindly trust our government, we shouldn't blindly trust reporters who won't even say "an anonymous source claims". But even a broken clock is right twice a day, and the Wall Street Journal does cite an insider who says the database wasn't encrypted (link to The Verge because the WSJ article is subscription-only).

I won't even try to address all the issues involved in encrypting a database. If you want to dig in, we wrote a (pretty good) paper on it a few years ago. Also, I'm very familiar with the healthcare industry, where encryption is the exception more than the rule. Many of their systems simply can't handle it because vendors don't support it. There are ways around that, but they aren't easy. So let's look at the two database encryption options most likely for a system like this:

  • Column (field) level encryption.
  • Transparent Database Encryption (TDE).

Field-level encryption is complex and hard, especially in large databases, unless your applications were designed for it from the start. In the work I do with SaaS providers I almost always recommend it, but implementation isn't necessarily easy even on new systems. Retrofitting it usually isn't possible, which is why people look at things like Format Preserving Encryption or tokenization – neither of which is a slam dunk to retrofit. TDE is much cleaner, and even if your database doesn't support it, there are third-party options that won't break your systems.

But would either have helped? Probably not in the slightest, based on a memo obtained by Steve Ragan at CSO Online:

The attacker had proficient understanding of the data platforms and successfully utilized valid database administrator logon information

They discovered a weird query siphoning off data, using valid credentials. Now, I can tell you how to defend against that. We have written multiple papers on it, and it takes a combination of controls and techniques, but it certainly isn't easy. It also breaks many common operational processes, and may not even be possible, depending on system requirements. In other words, I can always design a new system to make attacks like this extremely hard, but the cost to retrofit an existing system could be prohibitive.

Back to Anthem. With the most common database encryption implementations, the odds are that neither would have been much of a speed bump for an attack like this. Once you get the right admin credentials, it's game over. Now, if you combined encryption with multi-factor authentication and Database Activity Monitoring, that would likely have helped. But not necessarily against a persistent attacker with time to learn your systems and hijack legitimate credentials. Or perhaps encryption that limited access based on account and process, assuming your DBAs never need to run big direct queries. There are no guarantees in security, and no silver bullets. Maybe encrypting the database would have helped, but probably not the way most people do it. But it sure makes a nice headline.
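To make the field-level option concrete, here is a minimal sketch using the Fernet recipe from the Python cryptography package. This is purely illustrative (it says nothing about Anthem's actual systems), and it shows both why field encryption blinds someone reading raw tables and why valid credentials that reach the decryption path still defeat it:

```python
# Minimal sketch of column/field-level encryption using the "cryptography"
# package's Fernet recipe (AES-128-CBC plus HMAC). Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production this lives in a key manager,
f = Fernet(key)              # never alongside the database itself

ssn = b"078-05-1120"                 # the classic fake SSN
ciphertext = f.encrypt(ssn)          # this is what the column stores

print(ciphertext)  # a DBA dumping the table sees only this...

# ...but any process holding the key, or an attacker with valid credentials
# for that process, recovers the plaintext. This is why hijacked admin or
# application credentials defeat most database encryption deployments.
print(f.decrypt(ciphertext))
```

The design choice the post describes is exactly this: the control protects data at rest, not data reachable through a legitimately authenticated path.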
I am starting a new series on datacenter encryption and tokenization Monday, which will cover some of these issues. Not because of the breach – I am actually already 2 weeks late.


Summary: Analyze, Don’t Guess

Rich here. Another week, another massive data breach. This morning I woke up to a couple of interview requests over this. I am always wary of speaking on incidents based on nothing more than press reports, so I try to make clear that all I can do is provide some analysis. Maybe I shouldn't even do that, but I find I can often defuse hyperbole and inject context, even without speaking to the details of the incident. That's a fine line any of us on press lists walk. To be honest, more often than not I see people fall into the fail bucket by making assumptions or projecting their own bias.

Take this Anthem situation. I kept my comments along the lines of potential long-term issues for people now suffering exposed personal information (for example, a year of credit monitoring is worthless when someone loses your Social Security Number). I was able to talk about who suffers the consequences of these breaches, trends in long-term impacts on breached companies, and the weaknesses in our financial and identity systems that make this data valuable. I did all of that without blaming Anthem, guessing at attribution, or discussing potential means and motivations. Those are paths you can consider if you have inside information (verified, of course), but even then you need to be cautious.

It was disappointing to read some of the articles on this breach. One in particular stood out because it was from a major tech publication, and the reporter seemed more interested in blaming Anthem and looking smart than anything else. This is the same person who seriously blew it on another story recently due to the same hubris (but no apologies, of course). There is a difference between analyzing and guessing, and that difference is often hubris. Analysis means admitting what you don't know, and challenging and doubting your own assumptions. Constantly. I have a huge fracking ego, and I hate being wrong, but I care more about the truth and facts than about being right or wrong. To me, it's like science. Present the facts and the path to your conclusions, making any assumptions clear. Don't present assumptions as facts, and always assume you don't know everything – and that what you do know changes. Most of the time. And for crap's sake, enough with blaming the victim and claiming you know how the breach occurred when you don't have a single verified source (if you have one, put it in the article). Go read Dennis Fisher's piece for how to play it straight and still make a point. Unless you are Ranum. We all need to bow down to Ranum, who totally gets it.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted on Tokenization.
  • Paper on dynamic authorization by Gunnar Peterson (registration required).

Securosis Posts

We know, slow week. We blame random acts of sleep deprivation.

  • New Paper: Security and Privacy on the Encrypted Network.
  • Incite 2/4/2015: 30×32.
  • Applied Threat Intelligence: Use Case #3, Preventative Controls.

Favorite Outside Posts

  • Adrian: Spy Agencies Secretly Rely On Hackers. One of the best aspects of this profession is being able to expand your mind based on really cool research from security people. Spy organizations would be crazy not to do the same! Look at the names on the list – half of them are people I follow to learn from, because they do really interesting research.
  • Mike: Looking for the Teachable Moments. Never stop learning. It's as simple as that.
  • Rich: Every Frame a Painting. This is a YouTube channel of short segments of film analysis. I'm a big film geek, and I love dissecting a scene or work and learning more about how films are made. The Jackie Chan one is my favorite so far. If you like it you can donate to support it.
  • JJ: Use The 'Fire Model' When You Get Criticized At Work. Editor's note: I am so glad I don't have to deal with things like this. I'm probably unemployable at this point. -rich
  • Mortman: The Queen Of Code. History FTW.
  • Mortman (2): A Cybersecurity Wake Up Call for Emergency Managers. Rich should appreciate this one.

Research Reports and Presentations

  • Security and Privacy on the Encrypted Network.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.
  • Secure Agile Development.
  • Trends in Data Centric Security White Paper.
  • Leveraging Threat Intelligence in Incident Response/Management.
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • The Security Pro's Guide to Cloud File Storage and Collaboration.
  • The 2015 Endpoint and Mobile Security Buyer's Guide.

Top News and Posts

  • Cross Site Scripting vulnerability found in IE 11
  • Yet Another Flash Patch Fixes Zero-Day Flaw
  • The Oracle of Security Flaws via LiquidMatrix
  • Marriott Android App Left Credit Card Data Vulnerable
  • Security Basics for Docker
  • Who's Hijacking Internet Routes?
  • WiFi blocking… blocked. There could be legitimate enterprise problems with this.
  • A CIO Perspective on Security in the Cloud
  • U.S. Officials Say Chinese Cyberespionage 'Needs to Stop'


New Paper: Security and Privacy on the Encrypted Network

Our Security and Privacy on the Encrypted Network paper tackles setting security policies to ensure that data doesn't leak out over encrypted tunnels, and that employees adhere to corporate acceptable use policies, by decrypting traffic as needed. It also addresses key use cases and strategies for decrypting network traffic, including security monitoring and forensics, to ensure you can properly alert on security events and investigate incidents. And because an increasing fraction of network traffic is encrypted, we include guidance on handling the resulting human resources and compliance issues. Check out this excerpt to get a feel for why you will encrypt and decrypt more on networks in the near future:

Trends (including cloud computing and mobility) mean organizations have no choice but to encrypt more traffic on their networks. Encrypting the network prevents adversaries from sniffing traffic to steal credentials, and ensures data moving outside the organization is protected from man-in-the-middle attacks. So we expect a much greater percentage of both internal and external network traffic to be encrypted over the next 2-3 years.

We would like to thank Blue Coat for licensing the content in this paper. Without our licensees you'd be paying Big Research big money to get a fraction of the stuff we publish free. Check out the landing page for Security and Privacy on the Encrypted Network, or download the paper directly (PDF).
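As a toy illustration of the monitoring problem (this is not from the paper): encrypted payloads look statistically random, so tools that cannot decrypt are reduced to heuristics. A minimal sketch of one such heuristic, byte entropy, with hand-waved thresholds:

```python
# Toy heuristic, not from the paper: flag likely-encrypted payloads by
# Shannon entropy. Ciphertext approaches 8 bits/byte, while plaintext
# protocols sit far lower. The 7.5 threshold and 256-byte minimum are
# hand-waves, and compressed data triggers the same way.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    return len(payload) >= 256 and shannon_entropy(payload) > threshold

print(looks_encrypted(b"GET / HTTP/1.1\r\nHost: example.com\r\n" * 12))  # False
print(looks_encrypted(os.urandom(1024)))  # True; stands in for ciphertext
```

Heuristics like this only tell you traffic is opaque; alerting on content still requires the decryption strategies the paper covers.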


Incite 2/4/2015: 30×32

It was a pretty typical day. I was settled into my seat at Starbucks writing something or other. Then I saw the AmEx notification pop up on my phone. $240.45, Ben Sherman, on the card I use for Securosis expenses. Huh? Who's Ben Sherman? Pretty sure my bookie's name isn't Ben. So using my trusty Google fu I learned they are a highbrow men's clothier (nice stuff, BTW). But I didn't buy anything from that store. My well-worn "Crap. My card number got pwned again." process kicked in. Though I was far ahead of the game this time. I found the support number for Ben Sherman and left a message with the magic words, "blah blah blah fraudulent transaction blah blah," and amazingly, I got a call back within 10 minutes. They kindly canceled the order (which saved them money) and gave me some details on the transaction.

The merchandise was evidently ordered by a "Scott Rothman," and it was to be shipped to my address. That's why the transaction didn't trigger any fraud alerts – the name was close enough, and the billing and shipping addresses were legit. So was I getting punked? Then I asked what was ordered. She said a pair of jeans and a shirt. For $250? Damn, highbrow indeed. When I inquired about the size, that was the kicker. 30 waist and 32 length on the jeans. 30×32. Now I've dropped some weight, but I think the last time I was in size 30 pants was third grade or so. And the shirt was a Small. I think I outgrew small shirts in second grade. Clearly the clothes weren't for me. The IP address on the order traced to Cumming, GA – about 10 miles north of where I live – and they provided a bogus email address. I am still a bit perplexed by the transaction – it's not like the perpetrator would benefit from the fraud. Unless they were going to swing by my house to pick up the package when it was delivered by UPS. But they'll never get the chance, thanks to AmEx, whose notification allowed me to cancel the order before it shipped.

So I called up AmEx and asked for a replacement card. No problem – my new card will be in my hands by the time you read this. The kicker was an email I got yesterday morning from AmEx. Turns out they had already updated my card number in Apple Pay, even though I didn't have the new card yet. So I could use my new card on my fancy phone and get a notification when I used it. And maybe I will even buy some pants from Ben Sherman to celebrate my new card. On second thought, probably not – I'm not really a highbrow type…

–Mike

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • January 26 – 2015 Trends
  • January 15 – Toddler
  • December 18 – Predicting the Past
  • November 25 – Numbness
  • October 27 – It's All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can't Handle the Gartner
  • July 22 – Hacker Summer Camp
  • July 14 – China and Career Advancement
  • June 30 – G Who Shall Not Be Named

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Applied Threat Intelligence

  • Use Case #3, Preventative Controls
  • Use Case #2, Incident Response/Management
  • Use Case #1, Security Monitoring
  • Defining TI

Network Security Gateway Evolution

  • Introduction

Security and Privacy on the Encrypted Network

  • Selection Criteria and Deployment
  • Use Cases
  • The Future is Encrypted

Newly Published Papers

  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • Securing Enterprise Applications
  • Secure Agile Development
  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Security Pro's Guide to Cloud File Storage and Collaboration
  • The 2015 Endpoint and Mobile Security Buyer's Guide
  • Advanced Endpoint and Server Protection
  • The Future of Security

Incite 4 U

  • It's about applying the threat intel: This post on the ThreatConnect blog highlights an important aspect that may get lost in the rush to bring shiny threat intelligence data to market. As lots of folks, notably Rick Holland and yours truly, have been saying for a while: it's not about having the data, it's about using it. The post points out that data is data. Without understanding how it can be applied to your security program, it's just bits. That's why my current series focuses on using threat intel within security monitoring, incident response, and preventative controls. Rick has written a bunch of stuff making similar points, including this classic about how vendors always try to one-up each other. I'm not saying you need (yet another) 'platform' to aggregate threat intel, but you definitely need a strategy to make the best use of the data within your key use cases. – MR
  • Good enough: I enjoyed Gilad Parann-Nissany's post on 10 Things You Need To Know about HIPAA Compliance in the Cloud as generic guidance for PHI security in the cloud. But his 10th point really hits the mark: HIPAA is not feared at all. The vast majority of HIPAA fines have been for physical disclosure of PHI, not electronic. While a handful of firms go out of their way to ensure their cloud infrastructure


Applied Threat Intelligence: Use Case #3, Preventative Controls

So far, as we have looked at applying threat intelligence to your security processes, we have focused on the detection/security monitoring and investigation/incident response functions. Let's jump backwards in the attack chain to look at how threat intelligence can be used in preventative controls within your environment. By 'preventative' we mean any control that is in the flow, and can therefore prevent attacks. These include:

  • Network Security Devices: These are typically firewalls (including next-generation models) and intrusion prevention systems. But you can also include devices such as web application firewalls, which operate at different levels in the stack but are inline and can thus block attacks.
  • Content Security Devices/Services: Web and email filters can also function as preventative controls because they inspect traffic as it passes through and can enforce policies/block attacks.
  • Endpoint Security Technologies: Protecting an endpoint is a broad category, and can include traditional endpoint protection (anti-malware) and new-fangled advanced endpoint protection technologies such as isolation and advanced heuristics. We described the current state of endpoint security in our Advanced Endpoint Protection paper, so check that out for detail on the technologies.

TI + Preventative Controls

Once again we consider how to apply TI through a process map. So we dust off the very complicated Network Security Operations process map from NSO Quant, simplify it a bit, and add threat intelligence.

Rule Management

The process starts with managing the rules that underlie the preventative controls. This includes attack signatures and the policies & rules that control attack response. The process trigger will probably be a service request (open this port for that customer, etc.), a signature update, a policy update, or a threat intelligence alert (drop traffic from this set of botnet IPs). We will talk more about threat intel sources a bit later.

  • Policy Review: Given the infinite variety of potential monitoring and blocking policies available on preventative controls, keeping the rules current is critical. Keep the severe performance hit (and false positive implications) of deploying too many policies in mind as you decide which policies to deploy.
  • Define/Update/Document Rules: This next step involves defining the depth and breadth of the security policies, including the actions (block, alert, log, etc.) to take if an attack is detected – whether via rule violation, signature trigger, threat intelligence, or another method. Initial policy deployment should include a Q/A process to ensure no rules impair critical applications' ability to communicate, either internally or externally.
  • Write/Acquire New Rules: Locate the signature, acquire it, and validate the integrity of the signature file(s). These days most signatures are downloaded, so this ensures the download completed properly. Perform an initial evaluation of each signature to determine whether it applies within your organization, what type of attack it detects, and whether it is relevant in your environment. This initial prioritization phase determines the nature of each new/updated signature, its relevance and general priority for your organization, and any possible workarounds.

Change Management

In this phase rule additions, changes, updates, and deletions are handled.

  • Process Change Request: Based on the trigger within the Content Management process, a change to the preventative control(s) is requested. The change's priority is based on the nature of the rule update and the risk of the relevant attack. Then build out a deployment schedule based on priority, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if downtime or changes to application use models are anticipated.
  • Test and Approve: This step includes development of test criteria, performance of any required testing, analysis of results, and release approval of the signature/rule change once it meets your requirements. This is critical if you are looking to automate rules based on threat intelligence, as we will discuss later in the post. Changes may be implemented in log-only mode to observe their impact before committing to blocking mode in production (critical for threat intelligence-based rules). With an understanding of the impact of the change(s), the request is either approved or denied.
  • Deploy: Prepare the target devices for deployment, deliver the change, and return them to normal operation. Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure no disruption to production systems.
  • Audit/Validate: Part of the full process of making the change is not only having the Operations team confirm it during the Deploy step, but also having another entity (internal or external, but not part of Ops) audit it to provide separation of duties. This involves validating the change to ensure the policies were properly updated, and matching it against a specific request. This closes the loop and ensures there is a documentation trail for every change. Depending on how automated you want this process to be, this step may not apply.
  • Monitor Issues/Tune: The final step of the change management process involves a burn-in period, when each rule change is scrutinized for unintended consequences such as unacceptable performance impact, false positives, security exposures, or undesirable application impact. For threat intelligence-based dynamic rules, false positives are the issue of most concern. The testing process in the Test and Approve step is intended to minimize these issues, but there are variances between test environments and production networks, so we recommend a probationary period for each new or updated rule, just in case.

Automatic Deployment

The promise of applied threat intelligence is to have rules updated dynamically, based on intelligence gleaned from outside your organization. It adds a bit of credibility to "getting ahead of the threat". You can never really get 'ahead' of the threat, but you certainly can prepare before it hits you. But security professionals need to get accustomed to updating rules from data. We joke in conference talks about how security folks hate the idea of Skynet tweaking their defenses. There is still substantial resistance to updating access control rules on firewalls or IPS blocking actions without human intervention. But we expect this resistance to ebb
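To illustrate the log-only probation idea, here is a minimal sketch that turns a TI feed of botnet IPs into firewall rules. The feed URL is hypothetical, the output is plain iptables, and a real deployment would route these through the change management steps above rather than printing commands:

```python
# Minimal sketch: TI-driven blocking with a log-only probation period.
# New indicators start in LOG mode and are only promoted to DROP after
# surviving probation without false-positive reports. Feed URL is
# hypothetical; output is plain iptables commands.
import urllib.request

FEED_URL = "https://ti-vendor.example.com/botnet-ips.txt"  # hypothetical

def fetch_indicators(url):
    """Fetch a newline-delimited IP list, skipping comments and blanks."""
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode().splitlines()
    return {l.strip() for l in lines if l.strip() and not l.startswith("#")}

def emit_rules(ips, probation=True):
    action = 'LOG --log-prefix "TI-PROBATION: "' if probation else "DROP"
    return [f"iptables -A INPUT -s {ip} -j {action}" for ip in sorted(ips)]

for rule in emit_rules(fetch_indicators(FEED_URL), probation=True):
    print(rule)  # review/apply via change management; promote to DROP later
```

The probation flag is the whole point: automation generates the rules, but blocking mode is earned, not assumed.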


Summary: Heads up

Rich here. Last week I talked about learning to grind it out. Whether it's a new race distance, or plowing through a paper or code that isn't really flowing, sometimes you need to just put your head down, set a pace, and keep moving. And sometimes that's the absolute worst thing to do.

I have always been a natural sprinter, attracted to sports and other achievements I could rocket through with sheer speed and power. I was horrible at endurance endeavors (physical and mental) as a kid and into my early 20s. I mean, not "pretending to be humble horrible" but "never got the Presidential Physical Fitness thing because I couldn't run a mile worth a crap" horrible. And procrastinating? Oh my. I had, I shit you not, a note in my file at the University of Colorado not to "cut him any breaks" because I so thoroughly manipulated the system for so long. (8 years of continuous undergrad… you make a few enemies on the way.) It was handwritten on a Post-it, right on my official folder.

It was in my mid-20s that I gained the mental capacity for endurance. Mountain rescue was the biggest motivator, because only a small percentage of patients fell near roads. I learned to carry extremely heavy loads over long distances, and then take care of a patient at the end. You can't rely on endurance – we used to joke that our patients were stable or dead, since it isn't like we could just scoop them off the road (mostly).

Grinding is essential, but can be incredibly unproductive if you don't pop your head up every now and then. Like the time we were on a physically grueling rescue, at about 11,000 feet, at night, in freezing rain, over rough terrain. Those of us hauling the patient out were turning into zombies, but someone realized we were hitting the kind of zone where mistakes are made and people get hurt, and it was time to stop. Like I said before, "stable or dead", and this guy was relatively stable. So we stopped, a couple team members bunkered in with him for the night, and we managed to get a military helicopter for him in the morning. (It may have almost crashed, but we won't talk about that.) It hadn't occurred to me to stop; I was too deep in my inner grind, but it was the right decision.

Just like the problem I was having with some code last year. It wouldn't work, no matter what I did, and I kept trying variation after variation. I hit help forums, chat rooms, you name it. Then I realized it wasn't me; it was a bug (this time) in the SDK I was using. Only when I tried to solve the problem from an entirely new angle, instead of trying to fix the syntax, did I figure it out. The cloud, especially, is funny that way. Function diverges from documentation (if there is any) much more than you'd think. Just ask Adrian about AWS SNS and undocumented, mandatory account IDs.

In security we can be particularly prone to grinding it out. Force those logs into the SIEM, update all the vulnerable servers before the auditor comes back, clear all the IDS alerts. But I think we are at the early edge of a massive transition, where popping our heads up to look for alternatives might be the best approach. ArcSight doesn't have an AWS CloudTrail connector? Check out a hybrid ELK stack or cloud-native SIEM. Tired of crash patching for the next insert-pseudo-cool-name-here vulnerability? Talk to your developers about autoscaling and continuous deployment.
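For a sense of how small that first "pop your head up" experiment can be, here is a rough sketch (mine, not a recommendation) that pulls the last hour of CloudTrail events and indexes them into a local Elasticsearch node. It assumes configured boto3 credentials; the endpoint and index name are assumptions, and a production pipeline would use S3/SNS delivery plus a real log shipper:

```python
# Rough sketch, assuming configured AWS credentials and a local
# Elasticsearch node: pull the last hour of CloudTrail events and index
# them for searching. Not a production pipeline.
from datetime import datetime, timedelta
import json

import boto3
import requests

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
resp = cloudtrail.lookup_events(
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    MaxResults=50,
)

for event in resp["Events"]:
    doc = json.loads(event["CloudTrailEvent"])  # full JSON event record
    requests.post("http://localhost:9200/cloudtrail/_doc",
                  json=doc, timeout=10)
```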
Every year I try to block out a week, or at least a few half-days, to sit back, focus on research, and see which of my current assumptions and work patterns are wrong or no longer productive. Call it "active resting". I think I have come up with some cool stuff for this year, both in my work habits and security controls. Now I just need time to play with the code and configurations, to see if any of it actually works. But unlike my old patients, my code and writing seem to be both unstable and dead, so I won't get my hopes too high.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich gave a webcast on the SaaS security lifecycle for SkyHigh Networks.
  • Rich quoted on IoT security. Mostly about the hype.
  • Dave Lewis on DDoS Attacks Continue To Rise in Forbes.

Favorite Securosis Posts

  • Rich: There's a lot of hype around threat intel. Mike is doing a great job showing how to actually use the stuff, as in Applied Threat Intelligence: Use Case #2, Incident Response/Management.
  • Mike: New Paper: Monitoring the Hybrid Cloud. Adrian and I are ahead of the general market, but if you aren't thinking about how you will monitor cloud stuff, you will be behind the curve (and the 8-ball) before long.

Other Securosis Posts

  • Incite 1/28/2015: Shedding Your Skin.
  • Applied Threat Intelligence: Use Case #1, Security Monitoring.
  • Firestarter: 2015 Trends.
  • New Paper: Monitoring the Hybrid Cloud.
  • Applied Threat Intelligence: Defining TI.

Favorite Outside Posts

  • Mortman: A complete guide to Puppy Bowl XI. Editor's note: we need to pay more attention to how Mort spends his free time.
  • Rich: Glenn Fleishman on the risks and problems posed by Internet-connected devices. Yep, major DDoS attacks now rely on thousands of home routers. It's an interesting (and real) scenario.
  • Mike: Security Should Be the Top Driver for DevOps – Stormy makes a great point here: if security is going to be relevant moving forward, we had better grok and integrate these DevOps principles. Period.

Research Reports and Presentations

  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.
  • Secure Agile Development.
  • Trends in Data Centric Security White Paper.
  • Leveraging Threat Intelligence in Incident Response/Management.
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • The Security Pro's Guide to Cloud File Storage and Collaboration.


Incite 1/28/2015: Shedding Your Skin

You are constantly changing. We all are. You live, you learn, you adapt, you change. It seems that if you pay attention, every 7-9 years or so you realize you hardly recognize the person looking back at you from the mirror. Sometimes the changes are very positive. Other times a cycle is not as favorable. That's part of the experience. Yet many people don't think anything changes. They expect the same person year after year.

I am a case in point. I have owned my anger issues from growing up and my early adulthood. They resulted in a number of failed jobs and relationships. It wasn't until I had to face the reality that my kids would grow up in fear of me that I decided to change. It wasn't easy, but I have been working at it diligently for the past 8 years, and at this point I really don't get angry very often. But lots of folks still see my grumpy persona, even though I'm not grumpy. For example, I was briefing a new company a few weeks ago. We went through their pitch, and I provided some feedback. Some of it was hard for them to hear, because their story needed a lot of work. At some point during the discussion the CEO said, "You're not so mean." Uh, what? It turns out the PR handlers had prepared them for some kind of troll under the bridge waiting to chew their heads off.

At one point I probably was that troll. I would say inflammatory things and be disagreeable because I didn't understand my own anger. Belittling others made me feel better. I was not about helping the other person; I was about my own issues. I convinced myself that being a douche was a better way to get my message across. That approach was definitely more memorable, but not in a positive way. So as I changed, my approach to business changed as well. Most folks appreciate the kinder Incite I provide. Others miss crankypants, but that's probably because they are pretty cranky themselves and want someone to commiserate over their miserable existence.

What's funny is that when I meet new people, they have no idea about my old curmudgeon persona. So they are very surprised when someone tells a story about me being a prick back in the day. That kind of story is inconsistent with what they see. Some folks would get offended by hearing those stories, but I like them. They just underscore how years of work have yielded results.

Some folks have a hard time letting go of who they thought you were, even as you change. You shed your skin and took a different shape, but all they can see is the old persona. When you don't want to wear that persona anymore, those folks tend to move out of your life. They need to go, because they don't support your growth. They hold on to the old. But don't fret. New people come in. Ones who aren't bound by who you used to be – who can appreciate who you are now. And those are the kinds of folks you should be spending time with.

–Mike

Photo credit: "Snake Skin" originally uploaded by James Lee

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • January 26 – 2015 Trends
  • January 15 – Toddler
  • December 18 – Predicting the Past
  • November 25 – Numbness
  • October 27 – It's All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can't Handle the Gartner
  • July 22 – Hacker Summer Camp
  • July 14 – China and Career Advancement
  • June 30 – G Who Shall Not Be Named

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Applied Threat Intelligence

  • Use Case 2: Incident Response/Management
  • Use Case 1: Security Monitoring
  • Defining TI

Network Security Gateway Evolution

  • Introduction

Security and Privacy on the Encrypted Network

  • Selection Criteria and Deployment
  • Use Cases
  • The Future is Encrypted

Newly Published Papers

  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • Securing Enterprise Applications
  • Secure Agile Development
  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Security Pro's Guide to Cloud File Storage and Collaboration
  • The 2015 Endpoint and Mobile Security Buyer's Guide
  • Advanced Endpoint and Server Protection
  • The Future of Security

Incite 4 U

  • Click. Click. Boom! I did an interview last week where I said the greatest security risk of the Internet of Things is letting it distract you from all the other, more immediate security risks you face. But the only reason that is even remotely accurate is that I don't include industrial control systems, multifunction printers, or other more traditional 'things' in the IoT. But if you do count everything connected to the Internet, some real problems pop up. Take the fuel gauge vulnerability just released by H D Moore/Rapid7. Scan the Internet, find hundreds of vulnerable gas stations, all of which could cause real-world kinetic-style problems. The answer always comes back to security basics: know the risk, compartmentalize, update devices, etc. Some manufacturers are responsible, others not so much, and as a security pro it is worth factoring this reality into your risk profile. You know, like, "lightbulb risk:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.