
Increasing the Cost of Compromise

It seems to be all threat intelligence all the time in the tech media, so I might as well jump on the bandwagon. My pals Wendy Nather of 451 and Jamie Blasco of AlienVault recently did a webcast on the topic. Dan Raywood has a good overview of the content. Wendy does the analyst thing and categorizes the different types of threat intelligence. She points out that sharing is taking place, but more slowly than it should. Jamie then makes a compelling case for why everyone should share threat intel when possible. Shared intelligence increases the cost of compromise.

…by removing the secretive aspect (i.e. vendors keeping their threat intelligence close to their chests and monetising it – instead of making it freely available) we can force attackers to raise the bar and spend more and more money on their infrastructure, which decreases the return on investment for cyber criminals.

Attackers make crazy money leveraging their tactics. They can buy an inexpensive attack kit (with Bitcoins) and use it a zillion times. If you aren’t talking to your buddy, you don’t know what to look for. If you don’t have a list of C&C nodes or patterns of exfiltration, then when they hit you it won’t immediately raise an alarm. And you will lose.

By sharing information we can force attackers to change their attacks more frequently. They will need to turn over botnet nodes faster. Let’s make it cost them more to do business. Can we make enough difference for them to give up and stop attacking? NFW. They will still make a ton of coin, but over a long enough period this kind of information sharing can get rid of less sophisticated attackers who would make more money doing something legit – you know, like gaming search engine results.

Photo credit: “Cento’s Prices (Awesome sign)” originally uploaded by Dave Fayram
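To make the sharing argument concrete, here is a minimal Python sketch of what consuming a shared indicator list can look like: match your own outbound connection logs against C&C addresses someone else already paid to discover. The feed format, field names, and addresses below are hypothetical, not any particular vendor's or community's feed.

    # Minimal sketch: flag outbound connections that match a shared list of
    # known C&C nodes. The feed format and field names are hypothetical;
    # real threat intel feeds (STIX/TAXII, CSV blocklists, etc.) vary widely.
    import csv
    import io

    # A shared feed, inlined for the sketch (CSV with indicator,type columns).
    SHARED_FEED = "indicator,type\n203.0.113.10,c2\n198.51.100.77,c2\n"

    def load_indicators(feed_text):
        reader = csv.DictReader(io.StringIO(feed_text))
        return {row["indicator"] for row in reader if row["type"] == "c2"}

    def flag_suspicious(connections, indicators):
        # connections: iterable of (src_ip, dst_ip) pairs pulled from your own logs
        return [(src, dst) for src, dst in connections if dst in indicators]

    if __name__ == "__main__":
        iocs = load_indicators(SHARED_FEED)
        observed = [("10.0.0.5", "203.0.113.10"), ("10.0.0.5", "93.184.216.34")]
        for src, dst in flag_suspicious(observed, iocs):
            print(f"ALERT: {src} -> {dst} matches a shared C&C indicator")

The point is not the code; it is that the alert only exists because someone shared the indicator in the first place.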


Trends In Data Centric Security: Use Cases

After a short hiatus we are back with the next installment of our Data Centric Security series. This post will discuss why customers are interested in this approach, and the specific use cases they are looking to address. It should be no surprise that all these use cases are driven by security or compliance. What’s interesting is why other tools and technologies do not meet their needs. What prompts people to look for a different approach to data security? Those are the questions we will address with today’s post.

NoSQL / Big Data Security

The single biggest reason we are asked about data centric security models is “Big Data”: moving information into NoSQL analytics clusters. Big data systems are simply a new type of database that facilitates fast analysis and lookup capabilities on much larger data sets – at a dramatically lower cost – than previously possible. To get the most out of these databases, lots of data is collected from dozens of sources. The problem is that many sources fall under one or more regulatory controls and contain sensitive data, but big data projects are typically started outside regulatory or IT guidance. As the custodians become aware of their responsibility for the NoSQL data and services, they realize they are unable to adequately secure the cluster – or even know exactly what it contains. To aggravate the problem, reporting and data controls within NoSQL databases are often deficient or completely unavailable. But NoSQL databases have proven their value, and offer previously unavailable scale for analytics, meaning genuine value to the organization. Unfortunately they are often too immature for enterprises to fully trust.

Data centric security provides critical security for systems which process sensitive data but cannot themselves be fully trusted, so this approach is very attractive for either protecting data before moving it into a big data repository or transforming existing data into something non-sensitive which can be analyzed but does not need to be secured. The term for this process is “data de-identification”. Examples include substitution of an individual’s Social Security Number with a random number that could be an SSN, a person’s name with a name randomly chosen or assembled from a directory, or a date with a random proximate date. In this way the original sensitive data is removed entirely, but the value of the data set is retained for analysis. We will detail how later in this series.

Cloud and Data Governance

Most countries have laws on how citizen data must be secured, outlining custodial responsibilities for companies which store and manage it. These laws differ on which data must be secured, which controls are acceptable, and what is required in case of a breach of sensitive data. If your IT systems are all within a single data center, in a single location under your control, you only need to worry about your local laws. But cloud computing makes compliance much more complex, especially in public clouds. First, cloud service providers are legally third parties, with deliberately opaque controls and limited access for tenants (customers like you). Second, for reliability and performance many cloud data centers are located in multiple geographic locations, with different laws. This means multiple – possibly conflicting – regulations apply to sensitive data, and you share responsibility with your cloud service providers. The legal issues break down into three types: functional, jurisdictional, and contractual.

Functional issues include how legal discovery is performed, what happens in the event of a subpoena or legal hold, proof of data guardianship, and legal seizure in multi-tenant environments. Jurisdictional issues require you to understand applicable legislation, under what circumstances the law applies, and how legal processes differ. Contractual issues cover access to data, data lifecycle management, audit rights, contract termination, and a whole heap of other issues including security and vulnerability management.

Data governance and legal issues require substantial research and knowledge to implement policies, often at great expense. Many firms want to leverage low-cost, on-demand cloud computing resources, but hesitate at the huge burden of data governance in and across cloud providers. This is a case where data centric security can reduce compliance burdens and resolve many legal issues. This typically means fewer reports, fewer controls, and less complexity to manage.

PHI

Queries on how to address HIPAA and Protected Health Information (PHI) were almost non-existent a couple years ago, but we are now asked with increasing frequency. Health care data encompasses many different kinds of sensitive data, and the surrounding issues are complex. A patient’s name is sensitive data in some contexts. Medical history, medications, age, and just about every other piece of data is critical to some audiences, but too sensitive to be shared with others. Some patients’ data can be shared in certain limited cases, but not in others. And there are many audiences for PHI: state and federal governments, hospitals, insurance companies, employers, organizations conducting clinical trials, pharmaceutical companies, and many more. Each audience has its own relevant data subset and restrictions on access.

Data centric security is in use today, providing carefully selected subsets of the complete original data to different audiences, and surrogate data for elements which are required but not permitted. As data storage and management systems become cheaper, faster, and more powerful, providing a unique subset to each audience has become feasible. Each recipient can securely access its own copy, containing only its permitted data. Data centric security enables organizations to provide just those data elements which partners need, without exposing data they cannot access. And this can all be done in real time on demand, by applying appropriate controls to transform the original data into the secured subset. Many tools and techniques developed over the last several years for test data management are now employed to generate custom data sets for individual partners on an ongoing basis.

Payment Card Security

Tokenization for credit card security was the first data centric security approach to be widely accepted. Hundreds of thousands of organizations replace credit card numbers with data surrogates. Some
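To make the de-identification examples described earlier in this post concrete (SSN, name, and date substitution), here is a minimal Python sketch. The record layout, field names, and the tiny name directory are assumptions for illustration; a real masking product also handles consistency, referential integrity, and scale.

    # De-identification sketch based on the substitutions described above:
    # replace an SSN with a random number that still looks like an SSN, a name
    # with one drawn from a directory, and a date with a random proximate date.
    import random
    from datetime import date, timedelta

    NAME_DIRECTORY = ["Alex Morgan", "Jamie Lee", "Chris Park", "Dana Reyes"]  # illustrative

    def fake_ssn():
        # Random value shaped like an SSN; not tied to any real person.
        return f"{random.randint(100, 899):03d}-{random.randint(1, 99):02d}-{random.randint(1, 9999):04d}"

    def proximate_date(original, jitter_days=30):
        return original + timedelta(days=random.randint(-jitter_days, jitter_days))

    def deidentify(record):
        return {
            "name": random.choice(NAME_DIRECTORY),
            "ssn": fake_ssn(),
            "visit_date": proximate_date(record["visit_date"]).isoformat(),
            "diagnosis_code": record["diagnosis_code"],  # analytic value retained as-is
        }

    if __name__ == "__main__":
        original = {"name": "Pat Smith", "ssn": "123-45-6789",
                    "visit_date": date(2014, 6, 12), "diagnosis_code": "E11.9"}
        print(deidentify(original))

The analytic fields survive untouched, so the data set remains useful for aggregate analysis while the identifying elements are gone.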


Incite 7/2/2014 — Relativity

As you get older time seems to move faster. There may be something to these theories of Einstein. It’s hard to believe that yesterday was July 1. That means half of 2014 is in the rear view mirror. HALF. That’s unbelievable to me. Time is flying at the speed of light. I look at the list of things I wanted to do and it’s still largely unfinished. I did a bunch of things I didn’t expect to be doing. Though I guess that’s always the case.

Back when I was flying solo at Security Incite, I would revisit my trends for the year and see what I got right and what not so much. We don’t do formal trends, though we do post our ideas for the coming year in our RSA Guide. We don’t really go back and check on those, so maybe I’ll do that over winter break. But right now, there is other work to be done. You see we are all in the maelstrom. It has been a crazy 6 months. The business keeps increasing in scale. We don’t. So it’s been sleep that fell off my table. I’m holding up pretty well, if I do say so myself. Maybe there is something to this healthy mindful lifestyle I’m working toward.

Though I’m very cognizant of the fact these are first world problems. And on a relative basis, things probably couldn’t be going much better. Not while allowing us the flexibility we have running our own business. And no, I’m definitely not looking for sympathy that I’m working with great clients, doing cool projects. That my research agenda, which candidly was pretty opportunistic, turned out to be pretty close to what’s happening. That 5 years in our clients know what we do and how we do it, and continue to come back for me. These are good problems to have. It’s a good gig, and we all know it and are very thankful.

But there is always that little voice in the back of my head. That little reminder that what goes up, eventually comes down. I have been around too long to think I have figured out how to suspend the laws of physics. That Einstein guy again! Bah! To be clear, I’m not doing this in a fearful or paranoid way. It’s not about me being scared that something will go wrong. It’s about wanting to be ready when it does. So I let my unconscious mind churn through the scenarios. While meditating I will indulge my internal planner for a short time to make sure I know how to respond. And then I let it go. The good news is this doesn’t consume me – not in the least. I’m not naive, so I know you need to assess all the possibilities. But I don’t assess them for long. I mean who has time for that?

–Mike

Photo credit: “Speed of Light” originally uploaded by John Talbot

The fine folks at the RSA Conference posted the talk Jennifer Minella and I gave on mindfulness at the conference this year. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • June 30 – G Who Shall Not Be Named
  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon
  • May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
  • May 5 – There Is No SecDevOps
  • April 28 – The Verizon DBIR
  • April 14 – Three for Five
  • March 24 – The End of Full Disclosure
  • March 19 – An Irish Wake
  • March 11 – RSA Postmortem

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • Endpoint Security Management Buyer’s Guide (Update): Mobile Endpoint Security Management
  • Trends in Data Centric Security: Introduction
  • Open Source Development and Application Security Analysis: Development Trends, Application Security, Introduction
  • Understanding Role-based Access Control: Advanced Concepts, Introduction
  • NoSQL Security 2.0: Understanding NoSQL Platforms, Introduction

Newly Published Papers

  • Advanced Endpoint and Server Protection
  • Defending Against Network-based DDoS Attacks
  • Reducing Attack Surface with Application Control
  • Leveraging Threat Intelligence in Security Monitoring
  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing

Incite 4 U

  • Sell yourself: Epic post by Dave Elfering about the need to sell. Everyone sells. No matter what you do you are selling. In the CISO context you are selling your program and your leadership. As Dave says, “To truly lead and be effective people have to be sold on you; on what and who you are.” Truth. If your team (both upstream / senior management and downstream / security team) isn’t sold on you, you can’t deliver news they need to hear. And you’ll be delivering that news a lot – you are in security, right? That post just keeps getting better because it discusses the reality of leading. You need to know yourself. You need to be yourself. More wisdom: “Credentials and mad technical skills are great, but they’re not who you are. Titles are great, but they’re not who you are. Who you are is what you truly have to sell and the leader who instead relies on Machiavellian methods to self-serving ends is an empty suit.” If you can’t be authentic you can’t lead. Well said, Dave. – MR
  • Security pin-up: Australia plans a rollout of PIN (Personal Identification Number) codes for credit and debit card transactions later this year. The Australian payment processors association’s current report shows total card fraud rates have doubled between 2008 and 2013. While the dollar amount per


Firestarter: G Who Shall Not Be Named

As they fight to keep the Firestarter running through Google outages, vacations, and client travel, our dynamic trio return once again. This week they discuss some of the latest news from a particular conference held out in Washington DC last week, which Mike stopped by (well, the lobby bar) and Rich used to help run. The audio-only version is up too.


Updating the Endpoint Security Buyer’s Guide: Mobile Endpoint Security Management

In a rather uncommon occurrence, we are updating one of our papers within a year of publication. As shown by our recent deep dive into Advanced Endpoint and Server Protection, endpoint security is evolving pretty quickly. As mentioned in the latest version of our Endpoint Security Buyer’s Guide, mobile devices are just additional endpoints that need to be managed like any other device. But it has become clear that we need to dig a bit deeper into securing mobile endpoints, so we will. But the change requires a bit of context.

We have said for years that management is the first problem users solve when introducing a new technology. Security comes only after management issues are under control. That has certainly been true of mobile devices, as evidenced by the rapid growth, maturity, and consolidation of Mobile Device Management (MDM) technologies. But you cannot really separate management from protection in the mobile endpoint context, as demonstrated by the fact that security features appeared very early among MDM offerings. Mobile devices are inherently better protected from malware attacks due to more modern mobile operating system architectures, so hygiene – including patching, configuration, and determining which applications can run on devices – becomes their key security requirement. This means there is leverage to gain by integrating mobile devices into the device management stack (where applicable) to enforce consistent policy regardless of device, ownership (for BYOD), or location. This has driven significant consolidation of mobile management companies into broader IT management players.

In this update of the Endpoint Security Buyer’s Guide we will dig into mobile endpoint security management, defining more specifically what needs to be managed and protected. But most of all, we will focus on the leverage to be gained by managing these capabilities as part of your endpoint security management strategy.

Defining Endpoints

One of the key points we made early in the Endpoint Security Buyer’s Guide is that the definition of endpoint needs to be more inclusive. From a security standpoint, if the device can run applications, access corporate data stores, and store corporate data locally, it is an endpoint and needs to be managed and protected. Smartphones and tablets clearly fit this bill, along with traditional PCs. Organizationally, management of all these devices may not fall within a single operations group. That company-specific decision reflects business realities, particularly at large-scale enterprises with thousands of employees and huge IT shops which can afford specialist teams by device. In many smaller companies (the mid-market), we see these operational functions consolidated. But who does the work is less important than what is done to protect mobile endpoints – consistently and efficiently.

Managing Endpoint Device Security

Hygiene tends to be the main focus for managing mobile endpoint security, so here is a list of what that means in the mobile endpoint context:

  • Enrollment: New devices show up, so registering each device and assigning it proper entitlements begins the process. This is typically handled via a self-service capability so users can register their devices and accept the organization’s policies (especially for employee-owned devices) without waiting for help desk intervention. Of course you cannot assume everyone gaining access will register their devices (especially attackers), so you will want some kind of passive discovery capability to identify unmanaged devices as well.
  • Asset management: Next after enrollment comes the need to understand and track device configuration and security posture, which is really an asset management function. There may be other similar capabilities in use within the organization (such as a CMDB), in which case integration and interoperability with those systems is a requirement.
  • OS configuration: Configuration of mobile endpoints should be based on policies defined by groups and roles within the organization. These policies typically control many device aspects – including password strength, geolocation, activation lock, and device encryption. OS vendors offer robust and mature APIs to enable this capability, so most platforms offer similar capabilities. Technology selection largely comes down to the leverage of managing policies within a consistent user experience across all devices.
  • Patching: Software updates are critical to device security, so ensuring that mobile endpoints are patched in a timely fashion is another key aspect of mobile endpoint security. For mobile devices you will want to be sure you can update devices over the air, as they are often beyond reach of the corporate network, connecting to corporate networks only infrequently.
  • Connectivity: An organization may want to actively control which networks devices use, especially because many public WiFi hotspots are simply insecure. So you will want the ability to specify and enforce policies for which networks devices can use, whether connections require a VPN to backhaul traffic through a central gateway, and whether to use a mobile VPN service to minimize the risk of man-in-the-middle and side-jacking attacks and snooping.
  • Identity/group roles and policies: This capability involves integrating the mobile endpoint security management policy engine with Active Directory or another authoritative identity store. This leverages existing users and groups – managed elsewhere in the organization – to set MDM policies.

As you build your mobile endpoint security management strategy, keep in mind that different operating systems offer different hooks and management capabilities. Mature PC operating systems offer one level of management maturity; mobile operating systems are maturing rapidly but don’t offer as much. So to provide a consistent experience and protection across devices you might need to reduce protection to the lowest common denominator of your least capable platform. Alternatively you can choose to support only certain functions on certain devices. For example PCs need to access corporate data (and SaaS applications) over the corporate VPN, so they are easier to compromise and present more risk. Whereas more limited mobile devices, with better inherent protection, might be fine with less restrictive policies. This granularity can be established via policies within the endpoint security management platform. Over time MDM platforms will be able to compensate for limitations of underlying operating systems to provide stronger protection as their capabilities mature.

Managing Applications

The improved security architectures of mobile operating systems have required attackers to increasingly
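As a rough illustration of the “lowest common denominator” point above, the sketch below derives the effective policy for each platform by dropping controls that platform cannot enforce, and computes the set of controls enforceable everywhere. The control names and platform capability sets are assumptions for the example, not tied to any specific MDM product or OS API.

    # Sketch of deriving per-platform policy from a desired baseline: keep only
    # the controls each platform can actually enforce, and compute the set of
    # controls enforceable everywhere (the lowest common denominator).
    BASELINE_POLICY = {
        "min_passcode_length": 6,
        "require_device_encryption": True,
        "require_activation_lock": True,
        "allow_unknown_wifi": False,
    }

    PLATFORM_CAPABILITIES = {  # hypothetical capability sets
        "ios": {"min_passcode_length", "require_device_encryption",
                "require_activation_lock", "allow_unknown_wifi"},
        "android": {"min_passcode_length", "require_device_encryption", "allow_unknown_wifi"},
        "legacy_pc": {"min_passcode_length", "require_device_encryption"},
    }

    def effective_policy(platform):
        supported = PLATFORM_CAPABILITIES[platform]
        return {k: v for k, v in BASELINE_POLICY.items() if k in supported}

    def common_denominator():
        shared = set.intersection(*PLATFORM_CAPABILITIES.values())
        return {k: v for k, v in BASELINE_POLICY.items() if k in shared}

    if __name__ == "__main__":
        print("android:", effective_policy("android"))
        print("common :", common_denominator())

The alternative mentioned above, supporting only certain functions on certain devices, corresponds to the per-platform effective_policy path rather than the common denominator.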


Friday Summary: Legal wrangling edition

This week’s intro has nothing to do with security – just a warning in case that matters to you. I’m betting most people spent their spare time this week watching the World Cup. Or perhaps “sick time”, given the apparent national epidemic that suddenly cleared up by Friday. I am not really a ‘football’ fan, but there were some amazing matches and I remain baffled at how a player thought he could get away with biting another player during a match. And then flop and cry that he hurt his mouth! Speechless!

But being perverse, I spent most of my spare time this week following a couple court cases. Yes, legal battles. I’m weird that way. The most interesting was O’Bannon v. NCAA up in Oakland, California. I am following it because this case has strong potential to completely change college athletics. If you haven’t been paying attention, the essence is that players cannot make money from marketing their own images, but colleges can. For example, a player might be ‘virtualized’ in an EA video game, and the college paid $10M, but the player cannot receive any financial compensation. The NCAA has drawn a line in the sand, and stated that players must receive less than the actual, federal rate for the cost of college attendance. But what gets me is that the NCAA president believes that if a player is in a photo with a product, and receives money from the company, then s/he is being exploited. If s/he is in the same photo, and does not receive money, then s/he is not being exploited. Their uniforms can have corporate logos, and that company can pay the coach to make players advertise their products. The players can be forced to appear in front of banners with corporate logos, and even be forced to drink water from bottles with their corporate logos, but none of that would be exploitation! Not on the NCAA’s watch. Thank goodness the president of the NCAA is there to protect students from these corporate pirates! Here’s a $1.6 million salary for your virtuous hard work, Mark!

I joked with a friend recently that I honestly don’t know how we played college football in the 50s, 60s, and 70s without the millions and millions of dollars now being funneled into these programs. How could we have possibly played the game without all this money? I had not seen a game in years, and attended a local college game last fall; I was horrified that one team’s logo and image were completely subsumed by the local corporate sponsors – notably a local Indian casino. Appalled. The casino’s logo was displayed after each touchdown. The audience just clapped as the sponsoring casino paid for fireworks, and who doesn’t love fireworks? As a previous president stated about the NCAA, ‘amateurism’ applies to the participants, not the enterprise. At Texas the football program pays for the entire athletic department, including $5.3M for the head football coach, and still hands back $9M a year to the school. I’m told the University of Alabama grossed well over $100M in one year from its football program’s various revenue sources. Serious. Freaking. Money. From the various reports I am reading, it does not look good for the NCAA. I am not a betting man, but if pushed I would wager on the plaintiff’s side. And at some time in the future, after the appeals, suddenly the students who support this multi-billion dollar industry will get a big piece of the pie.

I was rooting for Aereo. Really rooting for Aereo, but they lost their case against the broadcasters. Shot down by the Supreme Court verdict earlier this week. And honestly it’s hard to fault the verdict – give it a read. This is a huge win for broadcasters and cable carriers, and a serious loss for viewers. When it comes down to it Aereo is re-broadcasting others’ content and making a profit off it. We are not keen at Securosis when content aggregation sites routinely bundle our posts and sell advertising around them either. Still, why the hell can’t the broadcasters make this work and provide the content in a way users want? The broadcasting rules and contracts really need to change to allow some innovation, or viewers will ultimately go somewhere else to get what they want. As a consumer I am miffed that something provided over the air, for free, can’t be sent to me if I want to watch it (if you have ever lived just out of sight of a broadcast tower where you got crappy reception, you know exactly what I am talking about). Or put it on your DVR. Or whatever private use you want to make of it – the customers you broadcast it to might actually want to watch the content at some convenient place and time.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Webcast on the Open Source Development and Application Security Survey.

Favorite Securosis Posts

  • Adrian Lane: Open Source Development Analysis: Development Trends.
  • Mike Rothman: Knucklehead-Employee.com. Yeah, it’s mine. But it’s still too damn funny. And I got to bust out the memegenerator. So it’s a win all around.

Other Securosis Posts

  • Incite 6/25/2014: June Daze.
  • Trends in Data Centric Security [New Series].
  • Open Source Development Analysis: Application Security.
  • Firestarter: Apple and Privacy.

Favorite Outside Posts

  • Adrian Lane: BoringSSL. This is not the introduction of BoringSSL, but the author’s no-BS “got tired of waiting for politics to get this crap fixed” approach, without calling out OpenSSL. Bravo.
  • Dave Lewis: The Akamai State of the Internet Report.
  • James Arlen: Deloitte’s Global Defense Outlook 2014.
  • Mike Rothman: Asymmetry of People’s Time in Security Incidents. Lenny Z does a good job of explaining why poor incident handling/management can make it much more expensive to clean up an attack than it is for the attacker. Be prepared, and change the economics. Unfortunately automated attacks now offer so much leverage that you probably cannot achieve parity. But don’t exacerbate the situation.

Research Reports and Presentations

  • Defending Against Network-based Distributed Denial of


Knucklehead-Employee.com

You have to love it when your employees take some initiative and aggressively take it to the competition who is cleaning your clock. They spend their time working the product, refining the messaging, and getting your mojo back in the market, right? Or you can just buy a domain like competitorFAIL.com and post some sophomoric insults at the competition. I’m pretty sure that favorably impacts the sales cycle, though it may more favorably impact the employees’ self-esteem.

You might think this is a joke, but it’s not. Some HP ArcSight folks figured that if they couldn’t compete in the market, they might as well just insult Splunk, and that would help. They bought splunkfail.com and posted some zingers like this one on the Tweeter: “Splunk is a security company #AprilFoolsDay.” (April 1, 2014 @splunkfail). Seriously. This really happened.

ROFL. Literally – I actually rolled on the floor laughing. The folks at Starbucks were not amused. Neither were the Splunk folks, and they (rightfully) complained to HP’s Ethics officer, who promptly dealt with the situation, resulting in those employees pulling down the site and giving the domain to Splunk. Though HP did claim no responsibility for the rogue employees. Maybe they will accept responsibility for providing an endless stream of LOLs for the rest of us.


Incite 6/25/2014: June Daze

I’m not sure why I ever think I’ll get anything done in June. I do try. I convince myself this year will be different. I look at the calendar and figure I’ll be able to squeeze in some writing. I’m always optimistic that I will be able to crank through it because there is stuff to get done. And then at the end of June I just shrug and say to myself, “Yup, another June gone and not much got done.”

That’s not really true. I did a lot of travel. I took some great vacations with the family. I had great meetings with clients. But from a deliverables standpoint, not much got done at all. I shouldn’t be hard on myself because I have been at home a grand total of 30 hours for the entire month thus far. Seriously, 30 hours. Yes, I understand these are first world problems.

I mentioned that the girls dance at Disney, then it was off to the west coast for a client meeting. Then I flew across the pond for a couple days in London for the Eskenzi PR CISO forum. For the first time (shocking!), I got to tour around London and it was great. What a cool city! Duh. As I mentioned in Solo Exploration I’ve made a point to explore cities I visit when possible, and equipped with my trusty mini-backpack I set out to see London. And I did. I saw shows. I checked out the sights with the rest of the tourists. I took selfies (since evidently that’s what all the kids do today). I met up with some friends of friends (non-work related) and former colleagues who I don’t get to see enough. It was great.

But right when I got home, it was a frantic couple hours of packing to get ready for the annual beach trip with my in-laws. Yup, told you this was a first world problem. I did work a bit at the beach, but that was mostly to make sure I didn’t drown when I resurfaced today. I also had some calls to do since I wasn’t able to do them earlier in the month, and given that I commit to family time by noon, there wasn’t a lot of time to write. There never is in June. Then last Sunday we dropped the kids off for their 6+ weeks of camp and I spent another couple days meeting friends and clients in DC around a certain other analyst firm’s annual security conference. So by the time we packed up the van and headed back to ATL yesterday, I had basically been gone the entire month.

Now I have a few days in ATL to dig out and then it’s another quick trip next week. Yes, this is the life I chose. Yes, I really enjoy the work. And yes, I’m in a daze and it won’t slow down until the middle of July. Then I’ll get to bang through the backlog and start work on summer projects. I could make myself crazy about what’s not getting done, or I can take a step back and remember things are great. I choose the latter, so I’ll get done what I can and smile about it. I will be sure to be a bit more realistic about what will get done next June. Until I’m not.

–Mike

Photo credit: “Daze” originally uploaded by Clifford Horn

The fine folks at the RSA Conference posted the talk Jennifer Minella and I gave on mindfulness at the conference this year. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon
  • May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
  • May 5 – There Is No SecDevOps
  • April 28 – The Verizon DBIR
  • April 14 – Three for Five
  • March 24 – The End of Full Disclosure
  • March 19 – An Irish Wake
  • March 11 – RSA Postmortem
  • Feb 21 – Happy Hour – RSA 2014

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • Trends in Data Centric Security: Introduction
  • Open Source Development and Application Security Analysis: Development Trends, Application Security, Introduction
  • Understanding Role-based Access Control: Advanced Concepts, Introduction
  • NoSQL Security 2.0: Understanding NoSQL Platforms, Introduction

Newly Published Papers

  • Advanced Endpoint and Server Protection
  • Defending Against Network-based DDoS Attacks
  • Reducing Attack Surface with Application Control
  • Leveraging Threat Intelligence in Security Monitoring
  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing

Incite 4 U

  • Problem fixed. Now clean up your mess: Yes, some 300k sites have yet to patch the OpenSSL ‘Heartbleed’ vulnerability, but a more troubling issue is that residual leaked data will cause ongoing problems, as Robert Hansen illustrated in The Ghost of Information Disclosure Past. Many vulnerable sites had credentials scraped, and while they asked their users to reset their passwords, they did not force resets. Attackers now have accumulated credentials which can provide fun and mayhem for anyone with 5 Bitcoins. The Heartbleed cleanup is messy, and in cases where (potentially) all user passwords could be compromised, it is best to “nuke from orbit” and require resets for all registered users. No one said it was easy, right? – AL
  • You too can be a security person: There is no doubting the skills shortage in security. We routinely talk to folks who have open positions for 6-12 months and they are significantly compromising on the skills & capabilities of candidates.
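On the Heartbleed item above: the difference between asking users to reset and forcing resets is usually one administrative action. Here is a tiny Python sketch under a hypothetical schema (a users table and a sessions table), which flags every account and invalidates existing sessions so old credentials cannot keep working.

    # Force password resets instead of asking nicely (hypothetical schema).
    import sqlite3

    def force_password_resets(conn):
        conn.execute("UPDATE users SET password_reset_required = 1")
        conn.execute("DELETE FROM sessions")  # drop sessions minted with old credentials
        conn.commit()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, password_reset_required INTEGER DEFAULT 0)")
        conn.execute("CREATE TABLE sessions (id INTEGER PRIMARY KEY, user_id INTEGER)")
        conn.execute("INSERT INTO users (id) VALUES (1), (2)")
        force_password_resets(conn)
        print(conn.execute("SELECT id, password_reset_required FROM users").fetchall())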


Trends in Data Centric Security [New Series]

It’s all about the data. The need of many different audiences to derive value from data is driving several disruptive trends in IT. The question that naturally follows is “How do you maintain control over data regardless of where it moves?” If you want to make data useful, by using it in as many places as you can, but you cannot guarantee those places are secure, what can you do?

Today we launch a new series on Data Centric Security. We are responding to customer inquiries about what to do when moving data to locations they do not completely trust. The majority of these inquiries are motivated by “big data” usage as firms move data into NoSQL clusters. The gist is that we don’t know how to secure these environments, we don’t really trust them, and we don’t want a repeat of data leakage or compliance violations. Here at Securosis we have blogged about NoSQL security for some time, but the specifics of customer interest came as a surprise. They were not asking “How do I secure Hadoop?” but instead “How do I protect data?” with specific interest in tokenization and masking. An increasing number of firms are asking about data security for cloud environments and HIPAA compliance – again, more focused on data rather than system controls.

This is what Data Centric Security (DCS) does: embed security controls into the data, rather than into applications or supporting infrastructure. The challenge is to implement security controls that do not render the data inert. Put another way, they want to derive value from data without leaving it exposed. Sure, we can encrypt everything, but you cannot analyze encrypted data. To decrypt within the environment means distributing keys and encryption capabilities, implementing identity management, and ensuring the compute platform itself is trustworthy. And that last is impossible when we cannot guarantee the security of the platform.

Data Centric Security provides security even when the systems processing data cannot be fully trusted. We can both propagate and use data to derive business value while still maintaining a degree of privacy and security. Sounds like a fantasy, but it’s real. But of course there are challenges, which I will detail later in this series. For now understand that you need to actively select the right security measure for the specific use case. This makes data centric security a form of data management, and requires you to apply security policies, transform the data, and orchestrate distribution.

This is not intended to be an exhaustive research effort, but an executive summary of data centric security approaches for a couple emerging use cases. This series will cover:

  • Use Cases: I will outline the top three use cases driving inquiries into data centric security, and the specific challenges they present.
  • Data Centric Technologies: We will examine a handful of technologies that support data centric security. We will explore tokenization, masking, and data element/format preserving encryption, as well as some other tricks.
  • Data Centric Security Integration: We will discuss how to incorporate DCS into data management operations and deploy these technologies. This is a combination of tools and process, but where you begin your journey affects what you need to do.

Our next post will cover DCS use cases.
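As a preview of the technologies post, here is a minimal Python sketch of vault-style tokenization, one of the techniques named above: swap a sensitive value for a random surrogate of the same shape, and keep the real value only in a separate vault. This is an illustration only, with no persistence, access control, or key management, which are exactly the parts a real tokenization system must get right.

    # Vault-style tokenization sketch: format-preserving surrogate plus a vault
    # that maps tokens back to the original values for authorized callers.
    import secrets

    class TokenVault:
        def __init__(self):
            self._vault = {}  # token -> original value

        def tokenize(self, pan):
            # Preserve length and the last four digits so downstream systems keep working.
            token = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4)) + pan[-4:]
            self._vault[token] = pan
            return token

        def detokenize(self, token):
            return self._vault[token]  # only systems with vault access can reverse it

    if __name__ == "__main__":
        vault = TokenVault()
        token = vault.tokenize("4111111111111111")
        print("value stored and analyzed:", token)
        print("recovered by authorized system:", vault.detokenize(token))

The analytics cluster or payment application only ever sees the token; the vault stays in a small, tightly controlled system.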


Open Source Development Analysis: Development Trends

For the final installment of our analysis of the 2014 Open Source Development and Application Security Survey, we will focus on open source development trends. Our topic is less security per se, and more how developers use open source, how it is managed, and how it is perceived in the enterprise.

Are open source components more trustworthy than commercial software?

An unambiguous question in the survey asked, “Do you believe software assembled with open source is as secure as commercial off-the-shelf (COTS)?” Under 9% said that software assembled with open source is less secure, while over 35% stated they believed open source is more secure than COTS. Even more interesting: among survey participants who responded before Heartbleed, 34.83% believed applications assembled using open source components were more secure than COTS. After Heartbleed: 36.06%. Yes, after a major vulnerability in an open source component used in millions of systems around the globe, confidence in open source security did not suffer. In fact it ticked up a point. Ironic? Amazing? All I can say is I am surprised.

What people believe is not necessarily fact. And we can’t really perform a quantitative head-to-head comparison between applications assembled with open source components and COTS security to verify this belief. But the survey respondents deal with open source and commercial software on a daily basis – they are qualified to offer a professional opinion. The net result is that for every person who felt COTS was more secure, four felt that open source was more secure. In any sort of popular vote that qualifies as a landslide.

Banning components

“Has your company ever banned the use of an open source component, library or project?” The majority of respondents, some 78%, said “No”. Still, I have singled this question out as a development practice issue – something I hear organizations talk about more and more. Software organizations ban components for a number of reasons. Licensing terms might be egregious. Or they might simply no longer trust a component’s reliability or security. For example virtually all released Struts components have severe security exploits, described by critical CVE warnings. Poorly written code has reliability and security issues. The two tend to go hand in hand. You can verify this by looking at bug tracking reports: you will see issues clump together around one or two problematic pieces of software. Banning a module is often politically messy because it can be difficult to find or build a suitable replacement. But it is an effective, focused way to improve security and reliability. Post-Snowden we have seen increased discussion around trust and whether or not to use certain libraries because of potential subversion by the NSA. This is more of a risk perception issue than more tangible issues such as licensing, but nonetheless a topic of discussion. Regardless of your motivation, banning modules is an option to consider for critical – or suspect – elements of your stack.

Open source policies

Open source policies were a major focus area for the survey, and the question “Does your company have an open source policy?” was the lead-in for several policy related questions. 47% of respondents said they have a policy.

When asked, “What are the top three challenges with your open source policy?” the top three responses were: it does not deal with security vulnerabilities (39%), there is little enforcement so workarounds are common (41%), and what is expected is not clear (35%). This raises the question: what is in an open source policy? The answer dovetails nicely with an earlier survey question: “When selecting components, what characteristics would be most helpful to you?” That is how you decide. Most companies have a licensing component to their policies, specifying which types of open source licenses are permitted. And most specify versioning and quality controls, such as no beta software. More often than not we see policies around security – such as components with critical vulnerabilities should be patched or avoided altogether. After those items, the contents of open source policies are wide open. They vary widely in how prescriptive they are – meaning how tightly they define ‘how’ and ‘what’.

“Who in your organization is primarily responsible for open source policy / governance?” While the bulk of responsibility fell on development managers (34%) and IT architects (24%), much of it landed outside development. Legal, risk, and executive teams are unlikely to craft policies which development can implement easily. So development needs to either take ownership of policies, or work with outside groups to define feasible goals and the easiest route to them.

We could spend many pages on policies, but the underlying issue is simple: policies are supposed to make your life easier. If they don’t, you need to work on the policies. Yes, I know those of you who deal with regulatory compliance in your daily jobs scoff at this, but it’s true. Policies are supposed to help avoid large problems or failures down the road which cost serious time and resources to fix. Here is the simple dividing line: policies written without regard for how they will be implemented, or without a clear path to make open source use easier and better, are likely to be bypassed. Just like development processes, policies take work to optimize.

Once again, you can find the final results of the survey here.
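To show what enforcement can look like (addressing the “little enforcement so workarounds are common” complaint above), here is a minimal Python sketch of a build-time check that fails when a declared dependency appears on a ban list. The dependency list format and the banned entries are hypothetical.

    # Fail the build if a declared component is on the banned list.
    import sys

    BANNED = {
        ("struts", "2.3.15"),  # hypothetical: a version with a critical CVE
        ("somelib", "*"),      # hypothetical: banned outright, e.g. for licensing terms
    }

    def violations(dependencies):
        return [(name, version) for name, version in dependencies
                if (name, version) in BANNED or (name, "*") in BANNED]

    if __name__ == "__main__":
        declared = [("struts", "2.3.15"), ("commons-lang", "3.3.2")]
        bad = violations(declared)
        if bad:
            print("Policy violation, banned components:", bad)
            sys.exit(1)
        print("Dependency check passed")

Wiring a check like this into continuous integration turns the policy from a document into something developers actually encounter, which is the difference between guidance and enforcement.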


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.