Securosis

Research

Security Analytics with Big Data: Use Cases

Why do we use big data for security analytics? Aside from big data hype in the press, what motivates customers to look for new solutions? And on the other side of the coin, why are vendors altering their products to use – or at least integrate with – big data? In our discussions, customers cite performance and scalability, particularly for security event analysis. In fact this research project was originally outlined as a broad examination of the potential of big data for security analytics. But the customers we speak with don't care about generalities – they need to solve existing problems, specifically around installed SIEM and log management systems. So we refocused this research on a narrower need: scaling beyond what they have today and getting more from existing investments, with big data as a means to that end. Today's post focuses on customer use cases and delves into why SIEM, log management, and other event-centric monitoring systems struggle under evolving requirements.

Data velocity and clustered data management are new terms in IT, but they define two core characteristics of big data. This is no coincidence – as IT practitioners learn more about the promise of big data, they apply its capabilities to the problems of existing SIEM solutions. The inherent strengths of big data overlap beautifully with SIEM deficiencies in the areas of scalability, analysis speed, and rapid data insertion. And given the potential for greater analysis capabilities, big data is viewed as a way to both keep pace with exploding volumes of event data and do more with it. Specific use cases drive interest in big data. Big data analytics are expanding, and they complement SIEM, but the reason this is such a major trend is that big data addresses important issues in existing platforms. To serve prospective buyers we need to understand the issues that drive them to investigate new products and solutions. The basic issues are the ones that always seem to plague SIEM – scaling, efficiency, and threat detection – but those are generic placeholders for more specific demands.

Use Cases

More (Types of) Data – The problem we heard most often was "We need to analyze more types of data to get better analysis". The point of including more data types, beyond traditional netflow and syslog event streams, is to derive actionable information from the sea of data. Threat intelligence is not a simple signature, and detection is more complex than reviewing a single event. Communications data such as Twitter streams, blog comments, voice, and other rich data sources are unstructured and require different parsing algorithms to interpret. Netflow and syslog data is highly structured, with each element defined by its location within a record. Blog comments, phishing emails, botnet C&C, or malicious files? Not so much. (A minimal sketch of this parsing difference follows below.) The problems with accommodating more types of data are scalability and usability. First, adding data types means handling more data, and existing systems often can't handle any more. Adding capacity to already taxed systems often requires costly add-ons. Rolling out additional data collectors and servers to process their output takes months, and the cost in IT time can be prohibitive as well. That all assumes the SIEM architecture can scale up to greater volumes of data coming in faster. Second, many of these systems cannot handle alternative data types – either they normalize the data in a way that strips much of its value, or the system lacks suitable tools for analyzing alternate (raw) data types.
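To make the structured-versus-unstructured contrast concrete, here is a minimal, hypothetical Python sketch – not taken from any SIEM or log management product – comparing a positional/regex parse of a syslog record with the looser, keep-the-raw-data handling a tweet or blog comment requires. The regular expressions and field names are illustrative assumptions only.

```python
import re
from datetime import datetime, timezone

# Structured: a BSD-style syslog line has fields in known positions, so one
# regular expression yields a clean, typed record (schema-on-write).
SYSLOG_RE = re.compile(
    r'^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s(?P<host>\S+)\s(?P<app>[^:\s]+):\s(?P<msg>.*)$'
)

def parse_syslog(line: str) -> dict:
    """Parse a syslog line into fixed, named fields."""
    m = SYSLOG_RE.match(line)
    if not m:
        raise ValueError("not a recognizable syslog record")
    return m.groupdict()

# Unstructured: a tweet or blog comment has no fixed positions. The best we can
# do up front is keep the raw text and pull out loose indicators; real analysis
# happens later, against the raw data (schema-on-read).
URL_RE = re.compile(r'https?://\S+')

def ingest_unstructured(text: str, source: str) -> dict:
    """Keep the raw text plus whatever indicators fall out of it."""
    return {
        "source": source,
        "raw": text,                              # nothing is normalized away
        "urls": URL_RE.findall(text),             # crude indicator extraction
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

print(parse_syslog("Apr 26 10:15:02 web01 sshd[4242]: Failed password for root"))
print(ingest_unstructured("check out http://evil.example/payload.exe", "twitter"))
```

The structured record parses into clean fields up front; the unstructured text can only be stored raw and interpreted later – exactly the kind of work traditional SIEM normalization either strips away or cannot perform.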
Most systems have evolved to include configuration management and identity information, but they don't handle Twitter feeds or diverse threat intelligence. Given evolving attack profiles, the flexibility to capture and dig into any data type is now a key requirement.

Anti-Drill-Down – We have seen steady advances in aggregation, correlation, dashboards, and data enrichment to help security folks identify threats faster. But these iterative advancements have not kept pace with the volume of security data that needs to be parsed, nor with the diversity of attack signatures. Overall situational awareness has not improved, and the signal-to-noise ratio has gotten worse instead of better. The entire process – the entire mindset – has been called into question. Today the typical process is as follows: a) An event or combination of events that looks interesting is captured. b) The SIEM correlates and enriches the data to provide better context, analyzes it against its rules, and generates an alert if it detects an anomaly. c) To verify that a suspicious event is indeed a threat, a human generally must "drill down" into a combination of machine-readable and human-readable data to make sense of it. The security practitioner must cross-reference multiple data sources. Enrichment is handy, but too much manual analysis is still required to weed through false positives. In many cases the analyst extracts data to run other scripts or tools to produce the final analysis – we have even seen exports to MS Excel to find outliers and detect fraud. We need better analytics tools with more options than simple SQL queries and pattern matching. The types of analysis SIEMs can perform are limited, and most SIEM solutions lack programmatic extensions to enable more complex analysis. "The net result is we always get a blob of stuff we have to sift through, then verify, investigate, validate, and often adjust the policy to filter out more detritus." The anti-drill-down use case calls for more automated checking, using more powerful analytics and data mining tools than simple scripts and SQL queries. (A small sketch of that idea appears at the end of this post.)

Architectural Limitations – Some customers attribute their performance issues – especially lagging threat analysis – to SIEM architecture and process. It takes time to gather data, move it to a central location, normalize, correlate, and then enrich. This generally makes near-real-time analysis a fantasy. Queries run on centralized event servers, and often take minutes to complete, while compliance reports generally take hours. Some users report that the volume of data stresses their systems, and queries on relational servers take too long to complete. Centralized computation limits the speed and timeliness of analysis and reporting.
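To make the anti-drill-down idea concrete, here is a minimal, hypothetical Python sketch that automates the kind of outlier check analysts otherwise run by exporting events to Excel. The event structure, user names, and z-score threshold are assumptions for illustration – a sketch of the technique, not a description of any SIEM's analytics.

```python
from collections import Counter
from statistics import mean, pstdev

# Hypothetical normalized events: twenty users with one failed login each,
# plus one account generating a burst. In practice these would stream out of
# the log pipeline, not a literal list.
events = (
    [{"user": f"user{i}", "action": "login_failed"} for i in range(20)]
    + [{"user": "mallory", "action": "login_failed"}] * 50
)

def failed_login_outliers(events, z_threshold=3.0):
    """Flag users whose failed-login count is a statistical outlier."""
    counts = Counter(e["user"] for e in events if e["action"] == "login_failed")
    values = list(counts.values())
    mu, sigma = mean(values), pstdev(values) or 1.0   # guard against zero variance
    return {user: c for user, c in counts.items() if (c - mu) / sigma > z_threshold}

print(failed_login_outliers(events))   # -> {'mallory': 50}
```

In a real deployment this sort of logic would run continuously inside the analytics platform, against far richer features than raw counts, so the analyst starts from a short list of verified anomalies rather than a blob of alerts.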


Socially engineering (trading) bots

It probably went unnoticed by most of the security community, but yet another Twitter hack this week exposed more flaws in high frequency trading systems. When someone took control of the Associated Press Twitter account and injected a fake news announcement that bombs had exploded in the White House, many people (unsurprisingly) believed the tweet without attempting to verify it. That 140-character message sent the stock market down in a "flash crash" – 140 points in a matter of minutes. From CNN Money: One scary – and false – tweet, and the Dow quickly plunged 140 points, or roughly 1%. Many are pointing fingers at high speed trading by computers for the swift decline. The Dow quickly bounced back. The sharp sell-off highlights just how disruptive computer-driven high-frequency trading can be. The S&P 500 lost $121 billion of its value within minutes. High-speed computer trading accounts for roughly 50% of all trading. That's down slightly from a few years ago, but traders on the ground say it feels more dominant. And mini flash crashes have become an all too familiar daily occurrence. Those of you who set limit orders on stocks at below-market prices have been the unintended beneficiaries of some briefly well-priced stocks. A simple compromise of an outdated identity management system was leveraged for social engineering, which in turn triggered a domino effect across automated trading systems, which moved the whole stock market twice – the drop and the rebound. The perpetrators have not been identified, so it is not clear whether it was just for the lulz, but they certainly had an impact. The BATS exchange spokesperson who called this a non-issue is way off the mark – it is clear that both Twitter's identity management and trading bot logic need serious reworking. (A tiny sketch of the bot logic problem follows below.)
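Purely to illustrate the bot logic problem – not how any real trading system is built – here is a hypothetical Python sketch of a news-driven trading rule that acts on a single unverified feed, next to a variant that waits for corroboration from independent sources. The keywords, account names, and thresholds are invented for the example.

```python
NEGATIVE_WORDS = {"explosion", "bomb", "attack", "crash"}

def looks_negative(headline: str) -> bool:
    return any(word in headline.lower() for word in NEGATIVE_WORDS)

# Naive bot: one compromised account is enough to trigger a sell-off.
def naive_signal(tweet: dict) -> str:
    return "SELL" if looks_negative(tweet["text"]) else "HOLD"

# Slightly less naive: require the same story from several independent sources
# before acting, which a single hijacked account cannot supply on its own.
def corroborated_signal(headlines_by_source: dict, min_sources: int = 3) -> str:
    confirming = sum(1 for h in headlines_by_source.values() if looks_negative(h))
    return "SELL" if confirming >= min_sources else "HOLD"

print(naive_signal({"source": "@AP", "text": "Breaking: explosions at the White House"}))
# -> SELL, on the strength of one hijacked account
print(corroborated_signal({"@AP": "Breaking: explosions at the White House",
                           "wire-one.example": "Markets open flat",
                           "wire-two.example": "Earnings season continues"}))
# -> HOLD, no independent confirmation
```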


Friday Summary, April 26, 2013: Birthday Edition

On March 13th I received a birthday card. It was from my Dad. It was a nice card, it was clear he had put some thought into the card selection, and I was genuinely swayed by his thoughtful memento. On the Ides of March I received a birthday card from my grandmother. Another nice card and it was thoughtful that she remembered my birthday. Two weeks later a birthday gift arrived from my mother. Not for me, mind you, but for my wife. It was a beautiful gift, obviously expensive, and again a superbly wonderful gesture. We don’t get to keep in close contact, so I was both surprised and appreciative. April 1st a gift card arrived, this time for me, again from my mom. There is not much to this story unless you know a couple additional facts. First, all three of the aforementioned blood relatives live under the same roof. Second, my birthday is in April; this week, in fact. My wife’s is another month away. And they have not sent my wife a birthday gift in, well, at least 20 years. As it is with human nature, gifts and cards arriving on seemingly random dates makes you wonder what’s up. You question motivation. Are they OK? And for the first time I started to worry about my parents’ health and well-being. Were they forgetting the date? Did they know what date it was? Jokingly my wife has said ‘Happy Birthday’ to me each day since March 13th. To make a long story short, a phone call cleared up the situation and all is well. I think that my parents just happened to find gifts they liked and sent them, dates be damned. Which is what you do when you think the person will really like the gift and you can’t wait to give it to them. Given my profession – it’s certainly not a job – where segregation between work and … well, that’s the point. My life and my work are not separate. The two are fully merged. There is no such thing as a work day, and there is no such thing as a day off. I work weekends, I don’t really do vacations, but on the plus side I do try to make the best of every day. When I want to do something I do it, and adjust work/life accordingly. All of which makes me realize that the gifts and cards from my relatives were nice, but I was ambivalent. But the idea that a specific date did not matter struck me as profound. Why limit your ability to celebrate? In that spirit I decided, what the heck, my birthday would not be a single day. I decided I would declare the entire week birthday week, and decide to do one fun birthday related event every day. Birthday cake each and every day. Over-the-top dinner each night. One outing every day. One thing I have wanted to accomplish every day this week. And because work/life does not go away, each day I have averaged 4-5 hours of work, as evidenced by my writing this post, and why a couple of you got wine-infused replies to various email and phone calls last night (you know who you are). The experiment is thus far a success, and each day offered extra time away from the computer to have some fun. This is working so well that I will do it every year going forward. Happy Birthweek! On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Adrian’s Dark Reading post on Database Blocking. Favorite Securosis Posts Adrian Lane: How to Use the 2013 Verizon Data Breach Investigations Report. Rich has put a lot of thought into his analysis and offers a unique perspective. David Mortman: Big Data Security Jazz. Mike Rothman: CipherCloud Loses Argument with Internet. Rich: Teaching Updated Cloud Security Class at Black Hat USA. 
Jamie and I are working on adding material to make the class truly worthy of Black Hat. Other Securosis Posts Incite 4/24/2013: F Perfect. Question everything, including the data. The CISO's Guide to Advanced Attackers: Verify the Alert. Security Analytics with Big Data [New Series]. The CISO's Guide to Advanced Attackers: Mining for Indicators. Token Vaults and Token Storage Tradeoffs. No news is just plain good: Friday Summary, April 18, 2013. Favorite Outside Posts David Mortman: Cryptography is a systems problem (or) 'Should we deploy TLS'. Adrian Lane: Why You Should Overload WebSite Errors. Are you paying attention, developers? This is not security through obscurity – it's about not handing data to adversaries so they can hack your site. James Arlen: How I Got Here: Chris Hoff. Mike Rothman: Sriracha hot sauce purveyor turns up the heat. Rich: Just How Did Apple "Journalism" Get This Bad? While Ian writes this specifically about Apple, it also applies to a lot of security writing. Project Quant Posts Email-based Threat Intelligence: To Catch a Phish. Network-based Threat Intelligence: Searching for the Smoking Gun. Understanding and Selecting a Key Management Solution. Building an Early Warning System. Implementing and Managing Patch and Configuration Management. Defending Against Denial of Service (DoS) Attacks. Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments. Tokenization vs. Encryption: Options for Compliance. Top News and Posts PC owners have to watch 24 sources for fixes. CISPA cybersecurity bill. Privacy advocates warn about coming tsunami of surveillance cameras – London already knows the result: cameras don't deliver. Silicon Valley companies quietly try to kill Internet privacy bill. Twitter has 2-factor authentication. Brad Arkin promoted to CSO of Adobe. Brad is as good as they get – this is great news for all of us. Blog Comment of the Week This week's best comment goes to @VZDBIR, in response to How to Use the 2013 Verizon Data Breach Investigations Report. I am breaking with tradition this week to favorite a tweet: @VZDBIR: Sometimes it's scary how @securosis gets all up in my brain. Those guys are smart. #Dbir https://t.co/kV995yrxUX I would bet that Twitter account, like the Associated Press, was hacked.


Big Data Security Jazz

I tend to avoid "security jazz" blog posts – esoteric arguments contrasting what we should be doing in security against what we do today. These rants don't really help IT professionals get their jobs done, so I skip them. But this is going to be such a post, because I need to talk about big data security approaches. Many of you will want to stop reading at this point. But for you data architects, CISOs, and security product development teams learning how to plan for big data security (particularly those of you who have been asking me lately) and wanting to understand the arcane research that influences my recommendations, read on. I got started on this topic by considering what big data security will look like in coming years. I was reacting to the apparently random recommendations in the general security press. I eventually decided that this is simply unknown. I can't fairly slam the press for their apparently moronic recommendations, because I cannot be sure they will not be correct in the future. Stock-picking monkeys have made fools of professional traders, and it is likely to happen again with big data security predictions. As big data continues its metamorphosis – in data storage, data and node management, system orchestration, and query methods – the ways we secure these clusters will change. A series of industry research papers (PDF), blog posts, and academic research projects on big data convinces me that we are still very early in big data's evolution. In each case we see some evolutionary changes (such as the Berkeley AMPLab's Spark product), as well as some total rethinks of how to do analysis with big data (such as Google's Pregel). I am raising this topic here because I think it merits an open discussion. I am frequently asked how to approach big data security, and given that big data currently looks like Hadoop and Cassandra, there are specific actionable steps that make sense for these types of clusters. But for someone architecting security products, this model might well be obsolete by the time the product goes live. Based upon research findings from last year, things like masking, encryption, tokenization, identity management, and API security all make sense in Hadoop. When I speak with vendors who are looking to design big-data-specific security products, I need to caveat all recommendations with "as far as we know today". I certainly cannot say that in 5 years anyone will still be using Hadoop. My guess is that Hadoop will still be a big player, but who knows? It could be Dremel, a SQL-like system, in which case we will be able to apply many techniques we have already evolved for relational stores. If fashion dictates a Pregel-like ant swarm of worker threads, not so much. Here is where I come to the predictions and recommendations. I would like to recommend that you embed as much security into the application layer as you can. That's the best place to control access and control who can see what. The application is the gateway to the data, where you can abstract away many underlying data management layer complexities to focus on user rights and business logic enforcement. Application-layer controls also scale security with the application. These are reasons I think (Updated) Intel Mashery, Axway Vordel, and CA Layer7 are important.
But we cannot yet tell where big data is going – we don't know what applications, orchestration, queries, data storage, or even architectures will look like going forward – so it is impossible to know whether any security model we design will be absurd in a few years. The safe approach, given the uncertainty of big data environments, would be to protect at the data layer. That means using encryption, masking, and tokenization technologies that don't expose sensitive data to big data environments (a minimal sketch of the idea appears at the end of this post). Making that work currently requires big data security clusters fronting big data analytics clusters – not terribly efficient, and you need another cluster (perhaps twice as many, depending on scale). Then I realize that IT folks, trying to get their jobs done, will ignore all this overly abstract mumbo-jumbo and fall back on what they are comfortable with: the encapsulation/walled garden model of security. Put a firewall "in front" of the cluster, sealing it off (virtually) from the rest of IT. Hard firewall shell on the outside, chewy lack of security on the inside. At this point we appreciate the Jacquith/Hoff Security Hamster Sine Wave of Pain model as a useful tool. You can show how each of these choices is right … and wrong. We will play catch-up because we have no choice in the matter.
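As a minimal illustration of protecting at the data layer – masking or tokenizing sensitive fields before they ever land in an analytics cluster – here is a hypothetical Python sketch. The field names, the HMAC-based surrogate construction, and the idea of doing this in the ingest pipeline are assumptions for illustration, not a description of any product mentioned above.

```python
import hmac, hashlib

SECRET_KEY = b"rotate-me-outside-the-cluster"   # kept in a KMS/HSM, never inside the cluster

def tokenize(value: str) -> str:
    """Replace a sensitive value with a keyed, deterministic surrogate."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_record(record: dict, sensitive_fields=("ssn", "card_number", "email")) -> dict:
    """Tokenize sensitive fields during ingest, before cluster insertion."""
    return {k: tokenize(v) if k in sensitive_fields else v for k, v in record.items()}

raw = {"user": "alice", "email": "alice@example.com",
       "card_number": "4111111111111111", "amount": 42.50}
print(mask_record(raw))
# Counts, joins, and fraud scoring still work on the stable surrogates,
# but the cluster never stores the real values.
```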


Security Analytics with Big Data [New Series]

Big Data is being touted as a 'transformative' technology for security event analysis, promising to detect threats in the ever-increasing volume of event data generated from in-house, mobile, and cloud-based services. But a combination of PR hype, vendor positioning, and customer questions has pushed it to the top of my research agenda. Many customers are asking "Wait, don't I already have SIEM for event analysis?" Yes, you do. And SIEM was designed and built to solve the same problems – but 7-8 years ago – and it is failing to keep up with current requirements. It's not just that we're trying to scale up to a much larger set of data; we also need to react to events an order of magnitude faster than before. Still more troubling, we are collecting multiple types of data, each requiring new and different analysis techniques to detect advanced attacks. Oh, and while all that slows down SIEM and log management systems, you are under the gun to identify attacks faster than before. This trifecta of issues limits the usefulness of SIEM and Log Management – and makes customers cranky. Many SIEM platforms can't scale to the quantity of data they need to manage. Some are incapable of even storing basic data as fast as it comes in – forget about storing and analyzing non-standard data types. 'Real-time' analysis is commonly cited as a SIEM feature, but after collection, storage, normalization, correlation, and enrichment, you are lucky to access new events within an hour – much less within a minute.

The good news is that big data, correctly deployed, can solve these issues. In this paper we will examine how big data addresses scalability and performance, improves analysis, accommodates multiple data types, and can be leveraged with existing environments. Our goal is to help users differentiate reality from wishful thinking, and to provide enough information to make informed purchasing decisions. To do this we need to demystify big data and contrast how it differs from traditional data management systems. We will offer a clear and unique definition of big data and explain how it helps overcome current technical limitations. We will offer a pragmatic way for customers to leverage big data, enabling them to select a solution strategically. We will highlight the limitations of SIEM and Log Management, key areas of customer dissatisfaction, and areas where big data excels in comparison. We will also discuss some changes required for big data analysis and data management, as well as the change in mindset necessary to take full advantage.

This is not all theory and speculation – big data is currently being employed to detect security threats, address new requirements for IT security, and even help gauge the effectiveness of other security investments. Big data natively addresses ever-increasing event volume and the rate at which we need to examine new events. There is no question that it holds promise for security intelligence, both in the numerous ways it can parse information and through its native capabilities to sift proverbial needles from monstrous haystacks. Cloud and mobile architectures force us to reexamine how we manage security data, and to scale across broader sets of systems and events – neither of which meshes with the structured data repositories on which most organizations rely. But most IT and security practitioners do not yet fully understand big data or how to employ it, so they are unable to weed through all the hype, FUD, and hyperbole.
Taking full advantage, however, requires both a deeper understanding of the technology and a subtle shift in mindset, to enable informed decisions on incorporating big data into existing IT systems – perhaps by shifting to newer big data platforms. This research paper will highlight several areas:

  • Use Cases: We will discuss the issues customers cite with performance and scalability, particularly for security event analysis. We will discuss in detail how SIEM, Log Management, and event-centric systems struggle under new requirements for data velocity and data management, and why existing technologies aren't cutting it. We will also discuss the inflexibility of pre-big-data analysis, alerting, and reporting – and how they demand a new approach to security and forensics as we struggle to keep pace with the evolution of IT.
  • New Events and Approaches: This post will explain why we need to consider additional data types that go beyond events. Existing technologies struggle to meet emerging needs because threat data does not conform to traditional syslog and netflow event types. There is a clear trend toward broader data analysis to detect advanced attacks and better understand risks.
  • What is Big Data and how does it work? This post will offer a basic definition of big data, along with a discussion of the native capabilities that make big data different from traditional analysis tools. We will discuss how features like HDFS, MapReduce, Hive, and Pig work together to address issues of scale, velocity, performance, and multiple data types (a minimal sketch of the MapReduce idea appears at the end of this post).
  • The promise of big data: We will explain why big data is viewed as a disruptive technology for security analytics. We will show how big data solutions mitigate problems and change security and event analysis. We will discuss how big data platforms handle collecting and parsing event data, and cover different queries and reports that support new threat analyses.
  • How big data changes security platforms: This post will discuss how to supplement existing systems – through standalone instances, partial integration of big data with existing systems, systems that natively leverage big data infrastructure, or fully integrated systems that run atop NoSQL structures. We will also discuss operational changes to SIEM usage, including the growing importance of data scientists to security.
  • Integration roadmap and planning: In this section we will address the common concerns, limitations, and realities of merging big data into your IT systems. Specifically, we will discuss integration and deployment issues, platform selection (diversity of platforms and data), policy and report development, data privacy and sharing, and big data platform security basics.

Our next post will cover use cases, the key areas where SIEM needs to improve,
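Since the MapReduce model mentioned above may be unfamiliar to SIEM-centric readers, here is a minimal, hypothetical Python sketch of the map and reduce steps – counting failed logins per source address across two data partitions. A real cluster would run this through Hadoop, Hive, or Pig across many nodes; this only illustrates the programming model, and the event data is invented.

```python
from collections import defaultdict
from itertools import chain

# Two "partitions" of events, standing in for blocks spread across HDFS nodes.
partition_a = [("10.0.0.5", "login_failed"), ("10.0.0.9", "login_ok"), ("10.0.0.5", "login_failed")]
partition_b = [("10.0.0.5", "login_failed"), ("192.0.2.7", "login_failed")]

def map_phase(partition):
    """Runs independently on each node: emit (key, 1) for events of interest."""
    return [(src, 1) for src, action in partition if action == "login_failed"]

def reduce_phase(mapped):
    """Runs after the shuffle: sum the counts for each key."""
    totals = defaultdict(int)
    for src, count in mapped:
        totals[src] += count
    return dict(totals)

mapped = chain(map_phase(partition_a), map_phase(partition_b))   # maps run in parallel
print(reduce_phase(mapped))   # -> {'10.0.0.5': 3, '192.0.2.7': 1}
```

The key point for the scaling discussion above is that the map step moves the computation to wherever the data already sits, instead of hauling every event back to a central relational server before any analysis can begin.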


Token Vaults and Token Storage Tradeoffs

Use of tokenization continues to expand as customers look to simplify PCI-DSS compliance. With this increased adoption comes a lot of vendor positioning and puffery, as vendors attempt to differentiate their products in an increasingly competitive market. Unfortunately this competitive positioning often causes confusion among buyers, which is why I have spent the last couple of mornings answering questions on FPE vs. tokenization, and on the difference between a token vault and a database. Lately most questions center on differentiating tokenization data vaults, with the expected confusion caused by vendor hyperbole. In this post I will define a token vault and shed some light on the pros and cons, to help you determine, as a consumer, whether vaults are something to consider when selecting a tokenization solution.

A token vault is where you store issued tokens and the credit card numbers they represent. The vault typically contains other information, but for this discussion just think of it as a long list of CC#/token pairs. A newer type of solution, called 'stateless' or 'vault-less' tokenization, is also available. These systems use derived tokens, which can be recalculated from some secret value, so they do not need to be stored in a database (a minimal sketch of the derived-token idea appears below). Recent press hype claims that token vaults are bad and you should stay away from them. The primary argument is "you don't want a relational database as a token vault" – or more specifically, "an Oracle database makes a slow and expensive token vault, and customers don't want that". Not so fast! The issue is not clear-cut. It's not that token vaults are good or bad – of course there are tradeoffs. Token vaults are fine for many types of customers, but not suitable for others. There are three issues at the heart of this debate – cost, scale, and performance – so let's take a closer look at each of them.

Cost: If you are going to use an Oracle, IBM DB2, or Microsoft SQL Server database for your token vault, you will need a license for the database. And token vaults must be redundant, so you will need at least a couple of licenses. If you want to ensure that your tokenization system can handle large bursts of transactions – such as holiday shopping periods – you will need hefty servers. Databases are priced based on server capacity, so these licenses can get very expensive. That said, many customers running in-house tokenization systems already have database site licenses, so for many this is not an issue.

Scale: If your token servers are dispersed across remote data centers that cannot guarantee highly reliable communications, synchronization of token vaults is a serious issue. You need to ensure that credit cards are not misused, that you have transactional consistency across all locations, and that no token is issued twice. With 'vault-less' tokenization, synchronization is a non-issue. If consistency across a scaled tokenization deployment is critical, derived tokens are incredibly attractive. But some non-derived token systems with token vaults get around this issue by pre-allocating token sequences; this ensures tokens are unique, and synchronization latency is not a concern. This is a critical advantage for very large credit card processors and merchants, but not a universal requirement.
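To make the derived-token idea concrete, here is a hypothetical Python sketch of one simple construction – an HMAC of the card number under a shared secret – so any token server holding the key recomputes the same token without consulting a vault. This illustrates the "recompute anywhere, no synchronization" property only; real vault-less products typically use reversible, format-preserving constructions with proper key management so the original number can be recovered when authorized. Names and formats below are assumptions for the example.

```python
import hmac, hashlib

TOKENIZATION_KEY = b"stored-in-an-hsm-not-in-source-code"   # the shared secret

def derived_token(pan: str) -> str:
    """Deterministically derive a token from a card number (PAN).

    Any token server holding the key produces the same token for the same PAN,
    so geographically dispersed sites never need to synchronize a vault.
    """
    digest = hmac.new(TOKENIZATION_KEY, pan.encode(), hashlib.sha256).hexdigest()
    # keep the last four digits for receipts, as many token formats do
    return f"tok_{digest[:24]}_{pan[-4:]}"

# Two independent sites compute the same token with no vault lookup:
print(derived_token("4111111111111111"))
print(derived_token("4111111111111111"))   # identical output
```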
Performance: Some token server designs require a check inside the token vault prior to completing every transaction, to avoid duplicate credit cards or tokens. This is especially true when a single token is used to represent multiple transactions or merchants (multi-use tokens). Unfortunately early tokenization solutions generally had poor database architectures: they did not provide efficient mechanisms for indexing token/CC# pairs for quick lookup (a small sketch of the indexing point appears at the end of this post). This is not a flaw in the databases themselves – it was a mistake made by token vault designers as they laid out their data. As the number of tokens climbs into the tens or hundreds of millions, lookup operations can become unacceptably slow. Many customers have poor impressions of token vaults because their early implementations got this wrong. So very wrong. Today lookup speed is often not a problem – even for large databases – but customers need to verify that any given solution meets their requirements during peak loads.

For some customers a 'vault-less' tokenization solution is superior across all three axes. For other customers – those with a deep understanding of relational databases – security, performance, and scalability are just part of daily operations management. No vendor can credibly claim that databases or token vaults are universally the wrong choice, just as nobody can claim that any non-relational solution is always the right choice. The decision comes down to the customer's environment and IT operations. I am willing to bet that the vendors of these solutions will have some additional comments, so as always the comments section is open to anyone who wants to contribute.
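Here is a minimal, hypothetical sqlite3 sketch of the indexing point above: the vault table itself is not the problem, but lookups by token crawl once the table is large unless the token column is indexed. The column names and the unique-index choices are assumptions for the example, not a schema from any product.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE token_vault (
        token       TEXT NOT NULL,
        pan         TEXT NOT NULL,    -- in practice stored encrypted, never in plaintext
        merchant_id TEXT,
        created_at  TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# Without these indexes a detokenization request scans the whole table; with them,
# lookup time stays flat as the vault grows to hundreds of millions of rows.
conn.execute("CREATE UNIQUE INDEX idx_vault_token ON token_vault (token)")
conn.execute("CREATE UNIQUE INDEX idx_vault_pan   ON token_vault (pan)")   # multi-use tokens: one token per PAN

conn.execute("INSERT INTO token_vault (token, pan, merchant_id) VALUES (?, ?, ?)",
             ("tok_9f3a2b", "ENC(4111...1111)", "merchant-42"))

row = conn.execute("SELECT pan FROM token_vault WHERE token = ?", ("tok_9f3a2b",)).fetchone()
print(row)   # -> ('ENC(4111...1111)',)
```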


Intel Buys Mashery, or Why You Need to Pay Attention to API Security

Intel acquired API management firm Mashery today. ReadWrite Enterprise posted a very nice write-up on how Mashery fits into the greater Intel strategy: Intel is in the midst of a shift away from just selling chips to selling software and services. This change, while little-noticed, has been long in the making. Intel bought McAfee for $7.7 billion in 2010, putting it into the security-software business. In 2005, Intel bought a smaller company, Sarvega, which specialized in XML gateways. (XML, or extensible markup language, is a broad descriptor of a file format commonly used in APIs; an XML gateway transports files to make APIs possible.) Ideally, Intel might sell the chips inside the servers running the software programs that communicate via these APIs, too. (It has a substantial business selling such chips.) But what's more important is the notion that Intel has a product offering that speaks to innovative startups, not just struggling PC manufacturers. With the shift in the market from SOAP to REST over the last several years, and the explosion of APIs for just about everything – especially cloud and web services – tools like Mashery help both with the transformation and with gluing all the bits together. Because you can decide which bits of the API to expose and how, Mashery is a much more services-oriented way to manage which features – and what data – are exposed to different groups of users (a tiny sketch of the idea follows below). It is an application-centric view of security, with API management as the key piece. Stated another way, Intel is moving away from the firewall and SSL security model we are all familiar with. Many in the security space don't see Intel as a player, despite its acquisition of McAfee. But Intel has been quietly developing products for tokenization, identity services, and security gateways for some time. Couple that with API security, and you start to get a clear picture of where Intel is headed – which is distinctly different from what McAfee offers for endpoints and back offices today.
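To illustrate the kind of selective exposure an API gateway provides – independent of how Mashery actually implements it – here is a hypothetical Python sketch of a policy that grants different consumer groups different endpoints and strips fields they should not see. Group names, endpoints, and fields are invented for the example.

```python
# Which endpoints and which response fields each consumer group may see.
API_POLICY = {
    "partners": {"endpoints": {"/orders", "/catalog"},
                 "fields":    {"order_id", "status", "sku"}},
    "public":   {"endpoints": {"/catalog"},
                 "fields":    {"sku", "price"}},
}

def gateway(group: str, endpoint: str, payload: dict) -> dict:
    """Enforce the policy at the API layer, before anything reaches the caller."""
    policy = API_POLICY.get(group)
    if policy is None or endpoint not in policy["endpoints"]:
        return {"error": 403, "detail": "endpoint not exposed to this consumer group"}
    return {k: v for k, v in payload.items() if k in policy["fields"]}

order = {"order_id": 17, "status": "shipped", "sku": "X-100", "price": 19.99, "card_number": "tok_9f3a2b"}
print(gateway("partners", "/orders", order))   # trimmed view of the order
print(gateway("public", "/orders", order))     # blocked outright
```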


Friday Summary: April 12, 2013

Ever start a simple project – or perhaps ask for something simple to be done on your behalf – and get far more than you bargained for? Sometimes the seemingly simple things reach up and bite you. I was thinking about this two weeks ago, in the middle of some weekend gardening, expecting to tackle a small irrigation leak that popped up during the winter. I went out to the yard with the handful of tools I would need and started scouting around the pool of standing water to locate the source of the leak, and I found it – more or less. It was buried under some mud, so before I could fix the leak I needed to remove the mud around the irrigation line. Before I could remove the mud I needed to remove the giant rat's nest on top of the mud – stuffed full of Cholla. Literally. It appears a rat ate the irrigation line and then used it as a private port-o-let. But in order to remove the rat's nest I needed to remove the 45 lbs of prickly pear cactus that formed the roof of the rat's nest. Before I could remove that cactus, I needed to remove the 75 lb Agave that arched over the prickly pear. Before I could get to the agave I needed to remove a dead vine. Before I could cut out the vine I needed to remove some tree branches. Each step required a new trip to the garage to collect another tool. And so it went for the next three hours, until I finally found the line and fixed the leak. When I finally finished that sequence I was rewarded with 30 minutes of tweezing prickly pear micro-thorns from my fingers. What should have taken minutes took the entire morning, and left painful reminders. Which brings me to IT: those who provision data centers and migrate backbone business applications know exactly what this feels like – as I was reminded when I told a couple of friends about my experience, and they laughed at me. That described their life. They deal with layers of operational, security, regulatory, and budgetary hurdles – mixed liberally with rat droppings – all the time. Someone asks for a small server to host a small web portal, and before you know it someone is asking how PCI compliance will be addressed. Say what you will about cost savings being a driver for cloud services – simplicity (or at least avoidance of complexity) is a major driver too. Sometimes it's just better to have a third party do it on your behalf – and that comes (anonymously of course) from some IT professionals. On to the Summary: Favorite Securosis Posts Gal: Security FUD hits investors. HP bought ArcSight, right? Adrian Lane: Gaming the Narcissist. Fun read, and a topic to consider when weighing potential employers, but I'll offer an alternative view: 1980 to 2008 was itself a wild period for company performance – see Warren Buffett's speech from November 1999 for what I mean. I'd say narcissist CEOs succeeded or simply ran off the tracks faster in that window. David Mortman: Should the Red (Team) be dead? Mike Rothman: Should the Red (Team) Be dead? Yup, it's mine, but this one created a bit of discussion and even a comment by HD Moore… Other Securosis Posts Incite 4/10/2013: 103. Friday Summary, Gattaca Edition: April 5, 2012. Favorite Outside Posts Rich: Analyzing Malicious PDFs or: How I Learned to Stop Worrying and Love Adobe Reader (Part 1). Adrian Lane: Oracle Details Big Data Strategy. The FUD, it burns, it burns! My not favorite post this week – I recommend you, with your best Borat impersonation, yell not! after every quote and claim. It's fun and more accurately reflects what's happening in the big data market.
Gal: Alleged Carberp Botnet Ringleader Busted. They're doing it wrong: Rule #1: You're supposed to steal from countries where you do not reside, and with whom your home country has no extradition treaty. Rule #2: Don't steal tons of money from Russian and Ukrainian banks regardless of where you live, but especially if you're violating rule #1 and you live in Russia or Ukraine… Dave Lewis: Secrets of FBI Smartphone Surveillance Tool Revealed in Court Fight. Gunnar: Bitcoin – down ~50% in a day, first DDoS currency crash. David Mortman: Tor Hidden-Service Passive De-Cloaking. Mike Rothman: Who Wrote the Flashback OS X Worm? All of you aspiring security researchers can once again thank Brian Krebs for showing you how it's done. And be thankful Krebs has figured out how to make a living from doing this great research and sharing it with us. Project Quant Posts Email-based Threat Intelligence: To Catch a Phish. Network-based Threat Intelligence: Searching for the Smoking Gun. Understanding and Selecting a Key Management Solution. Building an Early Warning System. Implementing and Managing Patch and Configuration Management. Defending Against Denial of Service (DoS) Attacks. Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments. Tokenization vs. Encryption: Options for Compliance. Top News and Posts Security Lessons from the Big DDoS Attacks. A couple of weeks old, but I just saw it. Bitcoin crashes – loses 1/2 its value. Vudu resets user passwords after hard drives lost in office burglary. DEA Accused Of Leaking Misleading Info Falsely Implying That It Can't Read Apple iMessages. Windows XP still maintains 39% overall market share. Speechless. Windows XP Security Updates ending in one year. Update your calendars! North Korean military blamed for "wiper" cyber attacks against South Korea. Lessons from the Spamhaus DDoS incident. Microsoft Reportedly Adding Two-Factor Authentication to User Accounts. Google will fight secretive national security letters in court. FBI's Smartphone Surveillance Tool Explained In Court Battle. IsoHunt Demands Jury Trial. Critical Fixes for Windows, Flash & Shockwave via Krebs. Blog Comment of the Week This week's best comment goes to HD Moore, in response to Should the Red (Team) be dead? It isn't clear why Gene believes that CTF contests have any correlation to professional red teams. A similar comparison would be hackathons to software engineering. In both cases you approach the problem differently and the participants learn a


Friday Summary: March 29, 2013

Our last nine months of research into identity and access management have yielded quite a few surprises – for me at least. Many of these new perspectives I have shared piecemeal in various blogs, and others not. But it occurred to me today, as we start getting feedback from the dozen or so IAM practitioners we have asked to critique our Cloud IAM research, that some key themes have been lost in the overall complexity of the content. I want to highlight a few points that really hit home with me, and which I think are critical for security professionals in general to understand. BYOD. MDM. MAM. That’s all BS. Mobile security is fundamentally an identity problem. Once you appreciate that a smartphone is essentially a multi-tenant smart card, you start to get a very different idea what mobile security will ultimately look like. How very little IAM and security people – and their respective cultures – overlap. At the Cloud Identity summit last year, the security side was me, Gunnar, and I think one other person. The other side was 400 other IAM folks who had never been to RSA before. This year at the RSA Conference was the first time I saw so many dedicated identity folks. Sure RSA, CA, Oracle, and IBM have had offerings for years, but IAM is not front and center. These camps are going to merge … I smell a Venn diagram coming. Identity is as glamorous as a sidewalk. Security has hackers, stolen bank accounts, ATM skimmers, crypto, scary foreign nationals, Lulz, APT, cyberwar, and stuff that makes it into movies. Identity has … give me a minute … thumbprint scanners? Anyone? Next time security complains about not having a “seat at the management table”, just be thankful you have C-level representation. I’m not aware of a C-level figure or Identity VP in any (consumer) firm. Looking back at directory services models to distribute identity and provide central management … what crap. Any good software architect, even in the mid-90s, should have seen this as a myopic model for services. It’s not that LDAP isn’t a beautifully simplistic design – it’s the inflexible monolithic deployment model. And yet we glued on appendages to get SSO working, until cloud and mobile finally crushed it. We should be thankful for this. Federation with mobile is disruptive. IT folks complain about the blurring of lines between personal and corporate data on smartphones. Now consider provisioning for customers as well as employees. In the same pool. Across web, mobile and in-house systems. Yeah, it’s like that. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Database Security Restart. Adrian’s DR post. Follow The Dumb Security Money. Mike’s DR post. Who has responsibility for cloud security? Mike appears in a NetworkWorld roundtable, and doesn’t say anything (too) stupid. Imagine that! Adrian’s DR paper: Security Implications Of Big Data. Favorite Securosis Posts Adrian Lane: Developers and Buying Decisions. Yeah, it’s my post, spurred by Matt Asay’s piece on how cost structures are changing tech sales. I should have split it into two posts, to fully discuss how Oracle is acting like IBM in the early 90s, and then the influence of developers on product sales. Mike Rothman: Developers and Buying Decisions. Adrian predicts that developers may be more involved in app security buying decisions. What could possibly go wrong with that? Rich: Developers and Buying Decisions. Fail to understand the dynamics and economics around you, and you… er… fail. David Mortman: Defending Cloud Data: IaaS Encryption. 
Gal Shpantzer: Who's Responsible for Cloud Security? Other Securosis Posts DDoS Attack Overblown. Estimating Breach Impact. Superior Security Economics. Incite 3/27/2013: Office Space. Server Side JavaScript Injection on MongoDB. How Cloud Computing (Sometimes) Changes Disclosure. Identifying vs. Understanding Your Adversaries. Apple Disables Account Resets in Response to Flaw. Friday Summary: March 22, 2013, Rogue IT Edition. Favorite Outside Posts Rich: What, no Angry Birds? Brian Katz nails it – security gets the blame for poor management decisions. I remember the time I was deploying some healthcare software in a clinic and they asked me to block one employee from playing EverQuest. I politely declined. Gal Shpantzer: Congress Bulls Into China's Shop. David Mortman: Top 3 Proxy Issues That No One Ever Told You. Mike Rothman: You Won't Believe How Adorable This Kitty Is! Click for More! Security is about to jump the shark. When social engineering becomes Wall Street Journal fodder we are on the precipice of Armageddon. It doesn't hurt that some of our buddies are mentioned in the article, either… Adrian Lane: Checklist To Prepare Yourself In Advance of a DDoS Attack. A really sweet checklist for DDoS preparedness. Dave Lewis: ICS Vulnerabilities Surface as Monitoring Systems Integrate with Digital Backends. Don't know if it's real, but it is funny! Project Quant Posts Email-based Threat Intelligence: To Catch a Phish. Network-based Threat Intelligence: Searching for the Smoking Gun. Understanding and Selecting a Key Management Solution. Building an Early Warning System. Implementing and Managing Patch and Configuration Management. Defending Against Denial of Service (DoS) Attacks. Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments. Tokenization vs. Encryption: Options for Compliance. Top News and Posts Spamhaus DDoS Attacks. Evernote: So useful, even malware loves it – Evernote as botnet C&C. Google glasses. Just friggin' funny! Your WiFi-enabled camera might be spying on you. "Browser Crashers" Hit Japanese Users. Victim of $440K wire fraud can't blame bank for loss, judge rules. This is going to be a hot topic for the next several years. FBI Pursuing Real-Time Gmail Spying Powers as "Top Priority" for 2013. Amazing Plaintext Password Blunder. Chaos Communication Camps. Or should that be Kamps? "Lucky Thirteen" Attack. MI5 undercover spies: People are falsely claiming to be us. This has occurred a few times before. GCHQ attempts to downplay amazing plaintext password blunder. Slow Android Phone Patching Prompts Vulnerability Report. Lawyer hopeful of success with secure boot complaint. Cyberbunker's Sven Kamphuis says he is victim of conspiracy over Spamhaus attack. One in six Amazon S3 storage buckets are ripe for data-plundering. That Internet War Apocalypse Is a


Developers and Buying Decisions

Matt Asay wrote a very thought-provoking piece on Oracle's Big Miss: The End Of The Enterprise Era. While the post does not deal with security directly, it does highlight a couple of important trends that affect both what customers are buying and who is making the decisions. Oracle's miss suggests that the legacy vendors may struggle to adapt to the world of open-source software and Software as a Service (SaaS) and, in particular, the subscription revenue models that drive both. No. Oracle's miss is not a failure to embrace open source, and it's not a failure to embrace SaaS; it's that they have not embraced and flat-out owned PaaS. Oracle limiting itself to just software would be a failure. A Platform as a Service model would give them the capability to own the entire data center while still offering lower costs to customers. And they have the capability to address the compliance and governance issues that slow enterprise adoption of cloud services. That's the opposite of the 'cloud in a box' model being sold. Service fees and burdensome cost structures are driving customers to look for cheaper alternatives. This is not news – Postgres and MySQL, before the dawn of Big Data, were already making significant market gains for test/dev/non-critical applications. It takes years for these manifestations to fully hit home, but I agree with Mr. Asay that this is what is happening. But it's Big Data – and perhaps because Mr. Asay works for a Big Data firm he felt he could not come out and say it – that shows us commodity computing and virtually free analytics tools provide a very attractive alternative, one which does not require millions in up-front investment. Don't think the irony of this is lost on Google. I believe this so strongly that I divested myself of all Oracle stock – a position I'd held for almost 20 years – because they are missing too many opportunities. But while I find all of that interesting, as it mirrors the cloud and big data adoption trends I've been seeing, it's a sideline to what I think is most interesting in the article. Redmonk analyst Stephen O'Grady argues: With the rise of open source…developers could for the first time assemble an infrastructure from the same pieces that industry titans like Google used to build their businesses – only at no cost, without seeking permission from anyone. For the first time, developers could route around traditional procurement with ease. With usage thus effectively decoupled from commercial licensing, patterns of technology adoption began to shift…. Open source is increasingly the default mode of software development….In new market categories, open source is the rule, proprietary software the exception. I'm seeing buying decisions coming from development with increasing regularity. In part it's because developers are selecting agile and open source web technologies for application development. In part it's that they have stopped relying upon relational concepts to support applications – to tie back to the Oracle issue. But more importantly, it's the way products and services fit within the framework of how developers want them to work – both in the sense that they have to meld with the application architecture, and because developers don't put up with sales-cycle B.S. for enterprise products. They select what's easy to get access to: freemium models or cloud services that you can sample for a few weeks just by supplying a credit card. No sales droid hassles, no contracts to send to legal, no waiting for purchasing cycles.
This is not an open-source vs. commercial argument; it's an ease of use/integration/availability argument. It's what developers want right now vs. lots of stuff they don't want, with lots of cost and hassle: when you're trying to ship code, which do you choose? As it pertains to security, development teams play an increasing role in product selection. Development has become the catalyst when deciding between source code analysis tools and DAST. They choose RESTful APIs over SOAP, which completely alters the application security model. And on more than a few occasions I've seen WAF relegated to being a 'compliance box' simply because it could not be effectively and efficiently integrated into the development-operations (dev-ops) process. Traditionally there has been very little overlap between security, identity, and development cultures. But those boundaries thaw when a simple API set can link cloud and on-prem systems, manage clients and employees, and accommodate mobile and desktop. Look at how many key management systems are fully based upon identity, and how identity and security meld on mobile platforms. Open source may increasingly be the default model for adoption, but not because it lacks licensing issues; it's because of ease of availability (fewer hassles) and architectural synergy, more than straight cost.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.