Securosis

Research

It’s Just a Matter of Time

So a couple of weeks ago in the Incite (4th snippet) I gave Jamie Arlen huge kudos for being a soothsayer. At Black Hat 2011 Jamie presented an attack scenario targeting high frequency trading networks, and Bloomberg recently reported that such an attack had actually hit a hedge fund. But the attack never happened. Yeah, it turns out the cyber expert at BAE Systems who identified the attack was allegedly presenting a scenario to the management team – not a real attack. The attack, she said, “was inaccurately presented as a client case study rather than as an illustrative example.” Those folks are spinning so fast I’m getting dizzy. While laughing my butt off. But back to the point of Jamie’s research. The attack is plausible and feasible, so it’s just a matter of time before it really happens, if it hasn’t already. Photo credit: “Pants on Fire” originally uploaded by Mike Licht


Listen to Rich Talk, Win a … Ducati?

I have to admit, this is a bit of a first. I am participating in a cloud security webinar July 21st with Elastica, a cloud application security gateway firm (that’s the name I’m playing with for this category). It will be fewer slides and more discussion, and not about their product. This is a product category I have started getting a lot of questions on, even if there isn’t a standard name yet, and I will probably pop off a research paper on it this fall. But that isn’t the important part. Sometimes clients pony up an iPad or something if you sign up for a webinar. Heck, we’ve given out our fair share of Apple toys (and once a Chumby) to motivate survey participation. This time Elastica is, for real, giving away a Ducati Monster 696. No, I am not eligible to win. I thought it was a joke when they showed me the mockup of the contest page, but it is very real. You still have to pay delivery, title, insurance, customs, transportation, and registration fees. Needless to say, I feel a little pressure to deliver. (Good content – I don’t think they’d let me drive the Ducati to your house.)


Summary: Boulder

Well, I did it. I survived over 6 months of weekly travel (the reason I haven’t been writing much). Even the one where the client was worried I would collapse from flu in the conference room, and the two trips that started with me vomiting at home the morning I had to head to the airport. Yup. Twice.

But for every challenge there is a reward, and I am enjoying mine right now. No, not the financial benefits (actually those don’t suck either), but I ‘won’ a month without travel back in my home town of Boulder. I am sure I have written about Boulder before. I moved here when I was 18 and stayed for 15+ years, until I met my wife and moved to Phoenix (to be closer to family because kids). Phoenix isn’t bad, but Boulder is home (I grew up in Jersey, but the skiing and rock climbing there are marginal).

My goal for this month is to NOT TRAVEL, spend time with the family, and work at a relaxed pace. So far, so good. Heavy travel is hard on kids, especially young kids, and they are really enjoying knowing that when I walk out the door for ‘work’ and hop on my bicycle, I will be back at the end of the day.

Boulder has changed since I left in 2006, but I suspect I have changed more. Three kids will do that to you. But once I get past the massive real estate prices, the proliferation of snooty restaurants, and the increase in sports cars (still outnumbered by Subarus), it’s hard to complain about my home town doing so well. One unexpected change is the massive proliferation of startups and the resulting tech communities. I lived and worked here during the dot com boom, and while Boulder did okay, what I see now is a whole new level. I can’t walk into a coffee shop or lunch spot without overhearing discussions on the merits of various Jenkins plugins or improving metrics for online marketing campaigns. The offices that stood vacant after the loss of Access Graphics are now full of… well… people 10-15 years younger than me.
For an outdoor athlete with a penchant for entrepreneurship, it’s hard to find someplace better to take a month-long ‘vacation’. As I hit local meetups (including speaking at the AWS meetup on the 22nd) I am loving engaging with a supportive tech community. That isn’t a comment on the security community, but a recognition that sometimes it is extremely valuable to engage with a group of innovation-embracing technical professionals who aren’t getting their (personal) asses kicked by criminal and government hackers by the minute. I have always thought security professionals need to spend time outside our community. One of the ways I staved off burnout in emergency services was to have friends who weren’t cops and paramedics – I learned to compartmentalize that part of my life. If you can, check out a local DevOps or AWS meetup. It’s fun, motivating, and they have better swag.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

- Mortman quoted in The 7 skills Ops pros need to succeed with DevOps.

Favorite Securosis Posts

- Adrian Lane: Incite 7/9/2014: One dollar…. One of Mike’s best all year.
- Rich: Increasing the Cost of Compromise. This is the strategy of Apple and Microsoft at the OS level, and it is paying off (despite common perception). Economics always wins. Well, except in politics.

Other Securosis Posts

- Trends in Data Centric Security: Tools.
- Open Source Development and Application Security Survey Analysis [New Paper].
- Leveraging Threat Intelligence in Incident Response/Management.
- Trends In Data Centric Security: Use Cases.
- Incite 7/2/2014 – Relativity.
- Updating the Endpoint Security Buyer’s Guide: Mobile Endpoint Security Management.
- Firestarter: G Who Shall Not Be Named.

Favorite Outside Posts

- Adrian Lane: Threat Modeling for Marketing Campaigns. Educational walkthrough of how Etsy examined fraud and what to do about it. Smart people over there…
- Rich: Ideas to Keep in Mind When Designing User Interfaces. I really enjoy user interface and experience design, mostly because I enjoy using well-designed systems. This isn’t security specific, but is absolutely worth a read… especially for product managers.

Research Reports and Presentations

- Analysis of the 2014 Open Source Development and Application Security Survey.
- Defending Against Network-based Distributed Denial of Service Attacks.
- Reducing Attack Surface with Application Control.
- Leveraging Threat Intelligence in Security Monitoring.
- The Future of Security: The Trends and Technologies Transforming Security.
- Security Analytics with Big Data.
- Security Management 2.5: Replacing Your SIEM Yet?
- Defending Data on iOS 7.
- Eliminate Surprises with Security Assurance and Testing.
- What CISOs Need to Know about Cloud Computing.

Top News and Posts

- Specially Crafted Packet DoS Attacks, Here We Go Again.
- Vulnerabilities (fixed) in AngularJS.
- DHS Releases Hundreds of Documents on Wrong Aurora Project. As my daughter would say, “Seriously?!?”
- Microsoft Settles With No-IP Over Malware Takedown.
- Hackers (from you know where) crack and track shipping information. A great example of a target that doesn’t realize its value.
- Researchers Disarm Microsoft’s EMET.
- Mysterious cyberattack compromises more than a thousand power plant systems. Noticing a trend here?


Trends in Data Centric Security: Tools

The three basic data centric security tools are tokenization, masking, and data element encryption. Now we will discuss what they are, how they work, and which security challenges they best serve.

Tokenization: You can think of tokenization like a subway or arcade token: it has no cash value but can be used to ride the train or play a game. In data centric security, a token is provided in lieu of sensitive data. The most common use case today is in credit card processing systems, as a substitute for credit card numbers. A token is basically just a random number – that’s it. The token can be made to look just like the original data type; in the case of credit cards the tokens are typically 16 digits long, usually preserve the last four original digits, and can even be generated so they pass the Luhn validation check. But a token is a random value, with no mathematical relationship to the original, and no value other than as a reference to the original in some other (more secure) database. Users may choose to maintain a “token database” which associates the original value with the token in case they need to look up the original at some point in the future, but this is optional. Tokenization has advanced far beyond simple value replacement, and is lately being applied to more advanced data types. These days tokens are not just for simple things like credit cards and Social Security numbers, but also for JSON & XML files and web pages. Some tokenization solutions replace data stored within databases, while others work on data streams – such as replacing unique cell IDs embedded in cellphone tower data streams. This enables both simple and complex data to be tokenized, at rest or in motion – and tokens can look like anything you want. Very versatile and very secure – you can’t steal what’s not there! Tokenization secures data by completely removing the original sensitive values from the secured data set.
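As a minimal sketch – purely illustrative, not any vendor’s implementation – here is how a random, format-preserving credit card token could be generated in Python, keeping the last four digits and passing the Luhn check:

```python
import random

def luhn_checksum(digits: str) -> int:
    # Standard Luhn mod-10 checksum over a digit string; 0 means valid.
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10

def tokenize_pan(pan: str) -> str:
    # Produce a random 16-digit token that keeps the last four digits of
    # the original PAN and still passes Luhn validation. The token has no
    # mathematical relationship to the original number.
    last_four = pan[-4:]
    while True:
        prefix = "".join(random.choice("0123456789") for _ in range(12))
        candidate = prefix + last_four
        if luhn_checksum(candidate) == 0 and candidate != pan:
            return candidate

token = tokenize_pan("4111111111111111")  # a well-known test PAN
```

A real tokenization product would also store the token/original pair in a hardened vault; here the mapping is simply discarded, which is exactly the “placeholder only” usage described below.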
Random values cannot be reverse engineered back to the original data. For example, given a database where the primary key is a Social Security number, tokenization can generate unique random tokens which fit in the receiving database. Some firms merely use the token as a placeholder and discard (or never receive) the original value – they don’t need it. They use tokens simply because downstream applications might break without an SSN or a compatible surrogate. Users who occasionally need to reference the original values use token vaults or equivalent technologies. Vaults are designed to allow only credentialed administrators access to the original sensitive values under controlled conditions, but a vault compromise would expose all the original values. Vaults are commonly used for PHI and financial data, as mentioned in the last post.

Masking: This is another very popular tool for protecting data elements while retaining the aggregate value of data sets. For example, we might substitute an individual’s Social Security number with a random number (as in tokenization), or a name randomly selected from a phone book, but retain gender. We might replace a date of birth with a random value within X days of the original to effectively preserve age. This way the original (sensitive) value is removed entirely without destroying the analytic value of the aggregate data set. Masking is the principal method of creating useful new values without exposing the original. It is ideally suited for creating data sets which can be used for meaningful analysis without exposing the original data. This is important when you don’t have sufficient resources to secure every system within your enterprise, or don’t fully trust the environment where the data is stored. Different masks can be applied to the same data fields, to produce different masked data for different use cases.
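A simple substitution-plus-date-shift mask of the kind described above might look like the following sketch (the surrogate names and field names are hypothetical, not any product’s mask library):

```python
import random
from datetime import date, timedelta

# Hypothetical surrogate values, standing in for a "phone book" of names.
SURROGATE_NAMES = ["Alex Morgan", "Sam Lee", "Jordan Reyes"]

def mask_record(record: dict, dob_window_days: int = 30) -> dict:
    # Replace the name with a randomly chosen surrogate, shift the date of
    # birth by up to +/- dob_window_days (approximately preserving age),
    # and keep non-identifying analytic fields such as gender untouched.
    masked = dict(record)
    masked["name"] = random.choice(SURROGATE_NAMES)
    shift = random.randint(-dob_window_days, dob_window_days)
    masked["dob"] = record["dob"] + timedelta(days=shift)
    return masked

original = {"name": "Jane Doe", "dob": date(1980, 6, 15), "gender": "F"}
masked = mask_record(original)
```

The masked record supports age- and gender-based analysis, while the direct identifiers are gone; choosing which fields to mask, and how aggressively, is exactly the “well-reasoned mask” design decision discussed below.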
This flexibility exposes much of the value of the original data with minimal risk. Masking is very commonly used with PHI, test data management, and NoSQL analytics databases. That said, there are potential downsides as well. Masking does not offer quite as strong security as tokenization or encryption (which we will discuss below). The masked data does in fact bear some relationship to the original – while individual fields are anonymized to some degree, preservation of specific attributes of a person’s health record (age, gender, zip code, race, DoB, etc.) may provide more than enough information to reverse engineer the masked data back to the original. Masking can be very secure, but that requires selection of good masking tools and application of a well-reasoned mask to achieve security goals while supporting the desired analytics.

Element/Data Field Encryption / Format Preserving Encryption (FPE): Encryption is the go-to security tool for the majority of IT and data security challenges we face today. Properly implemented, encryption produces obfuscated data that cannot be reversed into the original value without the encryption key. What’s more, encryption can be applied to any type of data, such as first and last names, or to entire data structures such as a file or database table. And encryption keys can be provided to select users, keeping data secret from those not entrusted with keys. But not all encryption solutions are suitable for a data centric security model. Most forms of encryption take human-readable data and transform it into binary format. This is a problem for applications which expect text strings, or databases which require properly formatted Social Security numbers. These binary values create unwanted side effects and often cause applications to crash. So most companies considering data centric security need an encryption cipher that preserves at least format, and often data type as well.
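To make the idea concrete, here is a toy format-preserving cipher: a digit-string Feistel network keyed with HMAC-SHA256. This is an illustrative sketch only – not NIST FF1/FF3 and not any commercial product – and should never be used to protect real data, but it shows the key property: ciphertext with the same length and character set as the plaintext, reversible with the key.

```python
import hashlib
import hmac

def _round(key: bytes, rnd: int, half: str, width: int) -> int:
    # Round function: HMAC-SHA256 over the round number and one half,
    # reduced to a number of at most `width` decimal digits.
    digest = hmac.new(key, bytes([rnd]) + half.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    # Feistel network over a decimal string: the output has the same
    # length and character set (0-9) as the input. `rounds` must be even.
    left, right = digits[:len(digits) // 2], digits[len(digits) // 2:]
    for rnd in range(rounds):
        f = _round(key, rnd, right, len(left))
        left, right = right, str((int(left) + f) % (10 ** len(left))).zfill(len(left))
    return left + right

def fpe_decrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    # Walk the Feistel rounds backwards to recover the original digits.
    left, right = digits[:len(digits) // 2], digits[len(digits) // 2:]
    for rnd in reversed(range(rounds)):
        f = _round(key, rnd, left, len(right))
        left, right = str((int(right) - f) % (10 ** len(right))).zfill(len(right)), left
    return left + right

ciphertext = fpe_encrypt(b"demo key", "123456789")  # still 9 decimal digits
```

Production FPE modes (such as NIST’s FF1) follow the same Feistel structure with a vetted AES-based round function and security proofs; the point here is only why a format-preserving output keeps SSN and card-number fields from breaking downstream applications.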
Typically these algorithms are applied to specific data fields (e.g. name, Social Security number, or credit card number), and can be used on data at rest or applied to data streams as information moves from one place to the next. These encryption variants are commercially available, and provide


Incite 7/9/2014: One dollar…

A few weeks ago I was complaining about travel and not being home – even though I was mostly on family vacations or doing work I enjoy. I acknowledged these are first world problems, but I didn’t appreciate what that really means. You lose touch with a lot of folks’ reality when you are in the maelstrom of your own crap. I’m too busy. The kids have too many activities. There are too many demands on my time.

That all stopped over the weekend. On the recommendation of a friend, I bought and watched Living on One Dollar. It’s a documentary about four US guys who went down to a small town in Guatemala and lived on one dollar a day. That was about the median income for the folks in that town. Seeing the living conditions. Seeing the struggle. It’s hard to live on that income. There is no margin for error. If you get sick you’re screwed, because you don’t have money for drugs. You might not be able to afford to send your kids to school. If you are a day laborer and you don’t get work that day, you might not be able to feed your kids. If the roof is leaking, you might not have any money to fix it.

But you know what I saw in that movie? Not despondency. Not fatalism, though I’m sure some folks probably feel that from time to time. I saw optimism. People in the town were taking out micro-loans to start their own businesses, and then using the profits to go to school to better themselves. I saw kindness. One of the only people in the town with a regular salaried job gave money to another family that couldn’t afford medicine to help heal a sick mother. This was money he probably couldn’t spare. But he gave it anyway. I saw kids who want to learn a new language. They understand they have to work in the fields and might not be able to go to school every year, but they want to learn. They want to better themselves. They have the indomitable human spirit. Where many people would see pain and living conditions no one should have to suffer through, these folks saw optimism.
Or at least the directors of the documentary showed that. They showed the impact of micro-finance. Basically it made me reconnect with gratitude. For where I was born. For the family I was born into. For the opportunities I have had. For the work I have put in to capitalize on those opportunities. Many of us won the birth lottery. We have opportunities that billions of other people in the world don’t have. So what are you going to do with it? I’m probably late to the bandwagon, but I’m going to start making micro-loans. I know lots of you have done that for years, and that’s great. I’ve been too wrapped up in my own crap. But it’s never too late to start, so that’s what I’m going to do. So watch the movie. And then decide what you can do to help. And then do it.

–Mike

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can check it out on YouTube. Take an hour to watch it. Your emails, alerts and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

- June 30 – G Who Shall Not Be Named
- June 17 – Apple and Privacy
- May 19 – Wanted Posters and SleepyCon
- May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
- May 5 – There Is No SecDevOps
- April 28 – The Verizon DBIR
- April 14 – Three for Five
- March 24 – The End of Full Disclosure
- March 19 – An Irish Wake
- March 11 – RSA Postmortem

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
- Leveraging Threat Intelligence in Incident Response/Management: Introduction
- Endpoint Security Management Buyer’s Guide (Update): Mobile Endpoint Security Management
- Trends in Data Centric Security: Introduction; Use Cases
- Open Source Development and Application Security Analysis: Development Trends; Application Security; Introduction
- Understanding Role-based Access Control: Advanced Concepts; Introduction
- NoSQL Security 2.0: Understanding NoSQL Platforms; Introduction

Newly Published Papers

- Advanced Endpoint and Server Protection
- Defending Against Network-based DDoS Attacks
- Reducing Attack Surface with Application Control
- Leveraging Threat Intelligence in Security Monitoring
- The Future of Security
- Security Management 2.5: Replacing Your SIEM Yet?
- Defending Data on iOS 7
- Eliminating Surprises with Security Assurance and Testing

Not so much

Incite 4 U

Oh about that cyber-policy… It looks like folks are getting interested in cyber-insurance. At least in the UK. And it’s mainstream news now, given an article on Business Insider about the market. After the predictable Target breach reference, they had some interesting numbers on the growth of the cyber-insurance market – to a projected over $2 billion in 2014. So what are you buying? Beats me. Is it “insurance cover from hackers stealing customer data and cyber terrorists shutting down websites to demand a ransom”? I didn’t realize you could value your data and get reimbursed if it’s stolen. And how is this stuff priced? I have no idea. A professor offers a good assessment: “When it comes to cyber there are lots of risks and they keep changing, and you have a general absence of actuarial material. The question for the underwriter is how on earth do I cover this?” And how on earth do you collect on it? It


Open Source Development and Application Security Survey Analysis [New Paper]

We love data – especially when it tells us what people are doing about security. Which is why we were thrilled at the opportunity to provide a – dare I say open? – analysis of the 2014 Open Source Development and Application Security survey. And today we launch the complete research paper with our analysis of the results. Here are a couple of highlights: Yes, after a widely-reported major vulnerability in an open source component used in millions of systems around the globe, confidence in open source security did not suffer. In fact, it ticked up. Ironic? Amazing? I was surprised and impressed. … and … 54% answered “Yes, we are concerned with open source vulnerabilities,” but roughly the same percentage of organizations do not have a policy governing open source vulnerabilities. We think this type of survey helps shed important light on how development teams perceive security issues and how they are addressing them. You can find the official survey results at http://www.sonatype.com/about/2014-open-source-software-development-survey. And our research paper is available for download, free as always: 2014 Open Source Development and Application Security Survey Analysis. Finally, we would like to thank Sonatype, both for giving us access to the survey results and for choosing to license this research to accompany them! Without their interest and support for our work, we would not be able to provide you with research such as this.


Leveraging Threat Intelligence in Incident Response/Management

It’s hard to be a defender today. Adversaries continue to innovate, attacking software which is not under your control. These attacks move downstream as low-cost attack kits put weaponized exploits in the hands of less sophisticated adversaries, making them far more effective. But frequently attackers don’t even need innovative attacks, because a little reconnaissance and a reasonably crafted phishing message can effectively target and compromise your employees. The good news is that we find very few still clinging to the hope that all attacks can be stopped by deploying the latest shiny object from a VC-funded startup. Where does that leave us? Pretty much where we have been for years. It is still about reacting faster – the sooner you know about an attack, the sooner you can start managing it. In our IR fundamentals series and subsequent React Faster and Better paper, we mapped out a process for responding to these incidents completely and efficiently, using tactics honed over decades in emergency response. But the world hasn’t stayed still over the past 3 years – not by a long shot. So let’s highlight a few things shifting the foundation under our (proverbial) feet.

Better adversaries and more advanced tactics: Attackers continue to refine their tactics, progressing ever faster from attack to exfiltration. As we described in our Continuous Security Monitoring paper, attackers can be in and out with your data in minutes. That means if monitoring and assessment are not really continuous you leave a window of exposure. This puts a premium on reacting faster.

Out of control data: If you haven’t read our paper on The Future of Security, do that now. We’ll wait. The paper explains how the combination of cloud computing and mobility fundamentally disrupts the way technology services are provisioned and delivered.
They will have a broad and permanent impact on security, most obviously in that you lose most control over your data, because it can reside pretty much anywhere. So how can you manage incidents when you aren’t sure where the data is, and you may not have seen the attacks before? That could be the topic of the next Mission Impossible movie. Kidding aside, the techniques security professionals can use have evolved as well, thanks to the magic of Moore’s Law. Networks are faster, but we can now capture that traffic when necessary. Computers and devices are more powerful, but now we can collect detailed telemetry on them to thoroughly understand what happens to them. Most importantly, with our increasing focus on forensics, most folks don’t need to argue so hard that security data collection and analysis are critical to effectively responding and managing incidents. More Data As mentioned above, our technology to monitor infrastructure and analyze what’s going on has evolved quickly. Full network packet capture: New technologies have emerged that can capture multi-gbps network traffic and index it near real time for analysis. This provides much higher fidelity data for understanding what attackers might have done. Rather than trying to interpret log events and configuration changes, you can replay the attack and see exactly what happened and what was lost. This provides the kind of evidence essential for quickly identifying the root cause of an attack, as well as the basis for a formal investigation. Endpoint activity monitoring: We introduced this concept in our Endpoint Security Buyer’s Guide and fleshed it out in Advanced Endpoint and Server Protection. This approach enables you to collect detailed telemetry from endpoint devices, so you see every action on the device, including what software was executed and which changes were made – to the device and all its files. 
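That telemetry becomes searchable history. As a toy illustration of matching stored events against later-arriving indicators (the records, hashes, and field names here are hypothetical, not any product’s schema):

```python
# Hypothetical stored endpoint telemetry: every process execution and
# file write was recorded with the hash of the file involved.
telemetry = [
    {"host": "laptop-12", "event": "process_start", "sha256": "aa11bb22"},
    {"host": "server-03", "event": "file_write", "sha256": "cc33dd44"},
    {"host": "laptop-07", "event": "process_start", "sha256": "ee55ff66"},
]

# Indicators of compromise received today from a threat intelligence feed.
iocs = {"aa11bb22", "9f9f9f9f"}

def retro_search(records, indicators):
    # Flag historical events whose file hash matches a newly published
    # IOC, even though nothing looked malicious when the event occurred.
    return [r for r in records if r["sha256"] in indicators]

hits = retro_search(telemetry, iocs)  # flags the old event on laptop-12
```

Real endpoint detection products index far richer attributes (paths, registry keys, network connections) and match structured indicator sets, but the retrospective principle is the same: keep the data, and you can ask yesterday’s events today’s questions.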
This granular activity history enables you to search for attack patterns (indicators of compromise) at any time. So even if you don’t know activity is malicious when it takes place, you can identify it later, so long as you keep the data. A ton of data: The good news is that, between network packets and endpoint telemetry, you have much more data to analyze. The bad news is that you need technology that can actually analyze it. So we hear a lot about “big data” for security monitoring these days. Regardless of what the industry hype machine calls it, you need technologies that enable you to index, search through, and find patterns within the data – even when you don’t know exactly what you’re looking for. Fortunately other industries – like retail – have been analyzing data for unseen and unknown patterns for years, and many of their analytical techniques are now being applied to security. As a defender it is tough to keep up with attackers, but many of these new technologies help fill the gaps. Technology is no longer the biggest issue for detecting, responding to, and managing threats and attacks. The biggest problem is now the lack of skilled security professionals to do the work.

In Search of… Responders

It seems like every conversation we have with CISOs or other senior security professionals these days turns at some point to finding staff to handle attacks. Open positions stay open for extended periods. These organizations really need to be creative to find promising staffers and invest in training them, even though those staffers often soon move on to a higher-paid consulting job or another firm. If you are in this position, you aren’t unique. Even the incident response specialist shops are resource constrained. There just aren’t enough people to meet demand. The security industry needs to address this on multiple fronts: Education: Continued investment in training people to understand core skills is required.
More importantly, these folks need opportunities and resources to learn on the job – which is really the only way to keep up with modern attackers anyway. Automation: The tools need to continue evolving, to make response more efficient and accessible to less sophisticated staff. We are not talking about dumbing down the process, but instead about making it easier and more intuitive so less skilled folks


Increasing the Cost of Compromise

It seems to be all threat intelligence all the time in the tech media, so I might as well jump on the bandwagon. My pals Wendy Nather of 451 and Jaime Blasco of AlienVault recently did a webcast on the topic. Dan Raywood has a good overview of the content. Wendy does the analyst thing and categorizes the different types of threat intelligence. She points out that sharing is taking place, but more slowly than it should. Jaime then makes a compelling case for why everyone should share threat intel when possible. Shared intelligence increases the cost of compromise.

…by removing the secretive aspect (i.e. vendors keeping their threat intelligence close to their chests and monetising it – instead of making it freely available) we can force attackers to raise the bar and spend more and more money on their infrastructure, which decreases the return on investment for cyber criminals.

Attackers make crazy money leveraging their tactics. They can buy an inexpensive attack kit (with Bitcoins) and use it a zillion times. If you aren’t talking to your buddy, you don’t know what to look for. If you don’t have a list of C&C nodes or patterns of exfiltration, then when they hit you it won’t immediately raise an alarm. And you will lose. By sharing information we can force attackers to change their attacks more frequently. They will need to turn over botnet nodes faster. Let’s cost them more to do business. Can we make enough difference for them to give up and stop attacking? NFW. They will still make a ton of coin, but over a long enough period this kind of information sharing can get rid of less sophisticated attackers who would make more money doing something legit – you know, like gaming search engine results. Photo credit: “Cento’s Prices (Awesome sign)” originally uploaded by Dave Fayram
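A trivial sketch of what that shared list buys you – the addresses and flow records below are hypothetical (documentation address ranges), with a published C&C list standing in for a real feed:

```python
# C&C node addresses shared by peers or an intelligence feed
# (hypothetical values from documentation address ranges).
shared_cc_nodes = {"198.51.100.23", "203.0.113.77"}

# Outbound flow records from your egress logs.
outbound_flows = [
    {"src": "10.1.2.3", "dst": "198.51.100.23", "bytes": 120000},
    {"src": "10.1.2.9", "dst": "93.184.216.34", "bytes": 4200},
]

def cc_alerts(flows, blocklist):
    # Without the shared list, neither flow looks suspicious; with it,
    # the connection to a known C&C node is flagged immediately.
    return [f for f in flows if f["dst"] in blocklist]

alerts = cc_alerts(outbound_flows, shared_cc_nodes)
```

The economics follow directly: once a C&C address is on a widely shared list it triggers alarms everywhere at once, so the attacker has to stand up new infrastructure – which is exactly the cost increase being argued for.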


Trends In Data Centric Security: Use Cases

After a short hiatus we are back with the next installment of our Data Centric Security series. This post will discuss why customers are interested in this approach, and specific use cases they are looking to address. It should be no surprise that all these use cases are driven by security or compliance. What’s interesting is why other tools and technologies do not meet their needs. What prompts people to look for a different approach to data security? Those are the questions we will address with today’s post.

NoSQL / Big Data Security

The single biggest reason we are asked about data centric security models is “Big Data”: moving information into NoSQL analytics clusters. Big data systems are simply a new type of database that facilitates fast analysis and lookup capabilities on much larger data sets – at a dramatically lower cost – than previously possible. To get the most out of these databases, lots of data is collected from dozens of sources. The problem is that many sources fall under one or more regulatory controls and contain sensitive data, but big data projects are typically started outside regulatory or IT guidance. As the custodians become aware of their responsibility for the NoSQL data and services, they realize they are unable to adequately secure the cluster – or even know exactly what it contains. To aggravate the problem, reporting and data controls within NoSQL databases are often deficient or completely unavailable. But NoSQL databases have proven their value, and offer previously unavailable scale for analytics, meaning genuine value to the organization. Unfortunately they are often too immature for enterprises to fully trust.
Data centric security provides critical security for systems which process sensitive data but cannot themselves be fully trusted, so this approach is very attractive for either protecting data before moving it into a big data repository, or transforming existing data into something non-sensitive which can be analyzed but does not need to be secured. The term for this process is “data de-identification”. Examples include substitution of an individual’s Social Security Number with a random number that could be an SSN, or a person’s name with a name randomly chosen or assembled from a directory, or a date with a random proximate date. In this way the original sensitive data is removed entirely, but the value of the data set is retained for analysis. We will detail how later in this series.

Cloud and Data Governance

Most countries have laws on how citizen data must be secured, outlining custodial responsibilities for companies which store and manage it. These laws differ on which data must be secured, which controls are acceptable, and what is required in case of a breach of sensitive data. If your IT systems are all within a single data center, in a single location under your control, you need only worry about your local laws. But cloud computing makes compliance much more complex, especially in public clouds. First, cloud service providers are legally third parties, with deliberately opaque controls and limited access for tenants (customers like you). Second, for reliability and performance many cloud data centers are located in multiple geographic locations, with different laws. This means multiple – possibly conflicting – regulations apply to sensitive data, and you share responsibility with your cloud service providers. The legal issues break down into three types: functional, jurisdictional, and contractual.
Functional issues include how legal discovery is performed, what happens in the event of a subpoena or legal hold, proof of data guardianship, and legal seizure in multi-tenant environments. Jurisdictional issues require you to understand applicable legislation, under what circumstances each law applies, and how legal processes differ. Contractual issues cover access to data, data lifecycle management, audit rights, contract termination, and a whole heap of other issues including security and vulnerability management. Data governance and legal issues require substantial research and knowledge to implement policies, often at great expense. Many firms want to leverage low-cost, on-demand cloud computing resources, but hesitate at the huge burden of data governance in and across cloud providers. This is a case where data centric security can reduce compliance burdens and resolve many legal issues. This typically means fewer reports, fewer controls, and less complexity to manage.

PHI

Queries on how to address HIPAA and Protected Health Information (PHI) were almost non-existent a couple of years ago, but we are now asked with increasing frequency. Health care data encompasses many different kinds of sensitive data, and the surrounding issues are complex. A patient’s name is sensitive data in some contexts. Medical history, medications, age, and just about every other piece of data is critical to some audiences, but too sensitive to be shared with others. Some patients’ data can be shared in certain limited cases, but not in others. And there are many audiences for PHI: state and federal governments, hospitals, insurance companies, employers, organizations conducting clinical trials, pharmaceutical companies, and many more. Each audience has its own relevant data subset and restrictions on access.
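The de-identification substitutions described earlier – a random value that still looks like an SSN, a name assembled from a directory, a proximate date – are straightforward to sketch. The field names, name lists, and date-shift window below are illustrative assumptions for this series, not any specific product’s implementation:

```python
import hashlib
import random
from datetime import date, timedelta

# Hypothetical substitution directories; a real deployment would use much
# larger name lists.
FIRST_NAMES = ["Alice", "Bruce", "Carol", "Dmitri", "Erin"]
LAST_NAMES = ["Ito", "Jones", "Kim", "Lopez", "Ng"]

def fake_ssn(rng):
    # A random nine-digit value formatted like an SSN; the original number
    # is discarded entirely.
    return f"{rng.randint(100, 899):03d}-{rng.randint(1, 99):02d}-{rng.randint(1, 9999):04d}"

def fake_name(rng):
    # Assemble a substitute name from the directory.
    return f"{rng.choice(FIRST_NAMES)} {rng.choice(LAST_NAMES)}"

def proximate_date(d, rng):
    # Shift the real date by up to +/- 30 days, preserving rough chronology
    # for analysis without revealing the actual date.
    return d + timedelta(days=rng.randint(-30, 30))

def deidentify(record, seed):
    # Deriving the RNG seed from the record keeps substitutions consistent
    # across repeated runs, so analytic joins still work, without storing a
    # mapping table back to the original values.
    rng = random.Random(hashlib.sha256((seed + record["ssn"]).encode()).digest())
    return {
        "ssn": fake_ssn(rng),
        "name": fake_name(rng),
        "admit_date": proximate_date(record["admit_date"], rng),
        "diagnosis_code": record["diagnosis_code"],  # non-identifying field kept as-is
    }

original = {"ssn": "078-05-1120", "name": "Jane Doe",
            "admit_date": date(2014, 5, 17), "diagnosis_code": "J45"}
cleaned = deidentify(original, seed="analytics-2014")
```

The de-identified record retains its analytic shape – a plausible SSN format, a real-looking name, a date in the right neighborhood – which is exactly what makes the data set usable downstream while removing the sensitive originals.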
Data centric security is in use today, providing carefully selected subsets of the complete original data to different audiences, and surrogate data for elements which are required but not permitted. As data storage and management systems become cheaper, faster, and more powerful, providing a unique subset to each audience has become feasible. Each recipient can securely access its own copy, containing only its permitted data. Data centric security enables organizations to provide just those data elements which partners need, without exposing data they cannot access. And this can all be done in real time, on demand, by applying appropriate controls to transform the original data into the secured subset. Many tools and techniques developed over the last several years for test data management are now employed to generate custom data sets for individual partners on an ongoing basis.

Payment Card Security

Tokenization for credit card security was the first data centric security approach to be widely accepted. Hundreds of thousands of organizations replace credit card numbers with data surrogates. Some

Share:
Read Post

Incite 7/2/2014 — Relativity

As you get older time seems to move faster. There may be something to these theories of Einstein. It’s hard to believe that yesterday was July 1. That means half of 2014 is in the rear view mirror. HALF. That’s unbelievable to me. Time is flying at the speed of light. I look at the list of things I wanted to do and it’s still largely unfinished. I did a bunch of things I didn’t expect to be doing. Though I guess that’s always the case.

Back when I was flying solo at Security Incite, I would revisit my trends for the year and see what I got right and what not so much. We don’t do formal trends anymore, though we do post our ideas for the coming year in our RSA Guide. We don’t really go back and check on those, so maybe I’ll do that over winter break. But right now, there is other work to be done. You see, we are all in the maelstrom. It has been a crazy 6 months. The business keeps increasing in scale. We don’t. So it’s been sleep that fell off my table. I’m holding up pretty well, if I do say so myself. Maybe there is something to this healthy, mindful lifestyle I’m working toward.

Though I’m very cognizant of the fact that these are first world problems. And on a relative basis, things probably couldn’t be going much better – not while allowing us the flexibility we have running our own business. And no, I’m definitely not looking for sympathy that I’m working with great clients, doing cool projects. That my research agenda, which candidly was pretty opportunistic, turned out to be pretty close to what’s happening. That 5 years in, our clients know what we do and how we do it, and continue to come back for more. These are good problems to have. It’s a good gig, and we all know it and are very thankful.

But there is always that little voice in the back of my head. That little reminder that what goes up eventually comes down. I have been around too long to think I have figured out how to suspend the laws of physics. That Einstein guy again! Bah!
To be clear, I’m not doing this in a fearful or paranoid way. It’s not about me being scared that something will go wrong. It’s about wanting to be ready when it does. So I let my unconscious mind churn through the scenarios. While meditating I will indulge my internal planner for a short time to make sure I know how to respond. And then I let it go. The good news is this doesn’t consume me – not in the least. I’m not naive, so I know you need to assess all the possibilities. But I don’t assess them for long. I mean, who has time for that?

–Mike

Photo credit: “Speed of Light” originally uploaded by John Talbot

The fine folks at the RSA Conference posted the talk Jennifer Minella and I gave on mindfulness at the conference this year. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • June 30 – G Who Shall Not Be Named
  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon
  • May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
  • May 5 – There Is No SecDevOps
  • April 28 – The Verizon DBIR
  • April 14 – Three for Five
  • March 24 – The End of Full Disclosure
  • March 19 – An Irish Wake
  • March 11 – RSA Postmortem

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
Endpoint Security Management Buyer’s Guide (Update)

  • Mobile Endpoint Security Management

Trends in Data Centric Security

  • Introduction

Open Source Development and Application Security Analysis

  • Development Trends
  • Application Security
  • Introduction

Understanding Role-based Access Control

  • Advanced Concepts
  • Introduction

NoSQL Security 2.0

  • Understanding NoSQL Platforms
  • Introduction

Newly Published Papers

  • Advanced Endpoint and Server Protection
  • Defending Against Network-based DDoS Attacks
  • Reducing Attack Surface with Application Control
  • Leveraging Threat Intelligence in Security Monitoring
  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing

Incite 4 U

Sell yourself: Epic post by Dave Elfering about the need to sell. Everyone sells. No matter what you do, you are selling. In the CISO context you are selling your program and your leadership. As Dave says, “To truly lead and be effective people have to be sold on you; on what and who you are.” Truth. If your team (both upstream / senior management and downstream / security team) isn’t sold on you, you can’t deliver news they need to hear. And you’ll be delivering that news a lot – you are in security, right? That post just keeps getting better because it discusses the reality of leading. You need to know yourself. You need to be yourself. More wisdom: “Credentials and mad technical skills are great, but they’re not who you are. Titles are great, but they’re not who you are. Who you are is what you truly have to sell and the leader who instead relies on Machiavellian methods to self-serving ends is an empty suit.” If you can’t be authentic you can’t lead. Well said, Dave. – MR

Security pin-up: Australia plans a rollout of PIN (Personal Identification Number) codes for credit and debit card transactions later this year.
The Australian payment processors association’s current report shows total card fraud rates have doubled between 2008 and 2013. While the dollar amount per

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.