A Kick-Ass Cloud Database Security Automation Example

Yesterday I was in Vegas to participate in a panel at IBM’s Information on Demand Conference. To my amusement and frustration, I was already in Vegas that weekend, drove 4.5 hours home to Phoenix on Sunday, then flew back Monday evening (4 hours door to door). The panel was on database security in the cloud, and at one point I came up with an example to show how this sh*t is seriously different from how we do security today. The example below would be nearly impossible in a non-cloud environment. It’s fictional, but there are no technical obstacles to implementing it right now. There is, however, one limitation I will mention at the end.

Imagine a world where you have a robust internal cloud to support business units in a large enterprise. This is in contrast to current environments where, if a business unit wants an application or database resource, they submit a request, things are approved (maybe), and then physical or virtual assets are acquired, configured, and assigned. You are one of those forward-thinking orgs that stood up a private cloud with a self-service portal where approved managers can dynamically provision a pre-established set of resources. No, this probably isn’t how most of you use the cloud today, but it will be.

Now imagine that some of these resource stacks include databases. You are, obviously, concerned with the security and compliance of these databases. This is the sort of thing that used to constantly bite you in the ass, as teams ranging from developers to sub-departments installed their own stuff, loaded sensitive data, and then failed to secure it. But you now sleep soundly at night because…

  • When the user requests the application stack, all operating systems and software are automatically patched to current levels using mandatory installation scripts.
  • The installation scripts also configure the resources to a secure-by-default state, doing things like inserting user credentials, locking down ports, setting appropriate file permissions, and configuring application defaults. You can even automate service account management and cross-link the accounts between application components (heck, we do this in the CCSK Plus training class).
  • All application components instantiate themselves in different, locked-down network security groups. Only required internal ports are open. This can be much more granular and restrictive than current application stacks, which rely on physical hardware for protection.
  • When the database spins up, it registers itself with your Database Activity Monitoring (DAM) and assessment tools via their APIs. The DAM tool performs an initial database vulnerability assessment and registers the database for future scans. (Other stack components do similar things, but we’re focusing on the database for this example.) Thanks to those cloud APIs, it knows where to look for the database and who created it, and the necessary firewall ports are opened.
  • After the initial DAM scan is complete and passed, the DAM tool makes an API call to the cloud’s network controller to open up any additional ports needed for internal access. Depending on the script, this may be restricted to subnets, individual IPs, and so on. (A minimal sketch of this registration-and-port-opening flow follows at the end of this post.)
  • Similar processes are followed for the application and web server components and their various security tools (vulnerability assessment, asset registration, configuration management, etc.). Assuming everything is hunky dory, any last ports required to access the application can be opened up. The user won’t pick this – it will be handled automatically via API and policy scripts.
  • The DAM tool will have installed its monitoring agent at initial launch. The agent connects back to the DAM server, and activity is now monitored (including administrative SQL queries).
  • On a specified schedule, the database is scanned for ongoing configuration compliance and vulnerabilities. It is also scanned for sensitive data, using the content discovery feature of your DAM tool and policies tied to the type of application stack deployed and the business unit assigned. If it isn’t supposed to have credit card numbers, but they start appearing, security gets an alert (also sketched below).

Think about this for a moment – today people try to spin stuff up all over the place and it’s nearly impossible to find, never mind configure securely. In the example above we completely automate the configuration and security of the application stack (including the database) on a dynamic basis, using APIs and policy scripts. The database spins up with secure settings in a secure network; it is centrally registered, actively monitored, and scanned for both problems and sensitive (read ‘regulated’) data on an ongoing basis.

Today’s limitation is that very few security tools support the automation I described above out of the box. But things like initialization scripts and dynamic network management via APIs are fundamental to all cloud platforms. Cool, eh? And heck, I’m probably missing a bunch of things.
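To make that registration step concrete, here is a minimal sketch of the kind of policy script involved, in Python. Everything in it is hypothetical – the DAM and network controller endpoints, the JSON payloads, the field names, and the port numbers all stand in for whatever your DAM tool and cloud platform actually expose.

```python
"""Minimal sketch of the post-provision hook described above.
All endpoints, payloads, and field names are hypothetical --
real DAM and cloud network APIs will differ."""
import requests

DAM_API = "https://dam.internal.example/api/v1"    # hypothetical DAM server
NET_API = "https://cloud.internal.example/api/v1"  # hypothetical network controller

def register_database(instance_id: str, address: str, owner: str) -> str:
    """Register a freshly provisioned database with the DAM tool."""
    resp = requests.post(f"{DAM_API}/databases", json={
        "instance_id": instance_id,
        "address": address,
        "owner": owner,     # the cloud API told us who created the instance
        "scan": "initial",  # kick off the first vulnerability assessment
    }, timeout=30)
    resp.raise_for_status()
    return resp.json()["assessment_id"]  # field name is hypothetical

def open_ports_if_clean(assessment_id: str, security_group: str) -> None:
    """Only after the initial scan passes does the policy script ask the
    cloud network controller to open the internal-access ports."""
    result = requests.get(f"{DAM_API}/assessments/{assessment_id}", timeout=30).json()
    if result["status"] != "passed":
        raise RuntimeError(f"Assessment failed: {result.get('findings')}")
    requests.post(f"{NET_API}/security-groups/{security_group}/rules", json={
        "port": 5432,             # e.g. PostgreSQL
        "source": "10.1.0.0/24",  # restrict access to the app subnet
    }, timeout=30).raise_for_status()

if __name__ == "__main__":
    assessment = register_database("db-042", "10.1.2.15:5432", "bu-finance")
    open_ports_if_clean(assessment, "sg-db-042")
```

The design point is the ordering: the database only becomes reachable after it registers and passes its initial assessment, so “somebody forgot to secure it” stops being a possible state.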

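The content discovery step can be sketched the same way. This assumes the scanning plumbing (sampling rows on a schedule, routing alerts) already exists; the regex-plus-Luhn-checksum shown here is just the standard trick for spotting card numbers while filtering out random digit strings.

```python
"""Sketch of scheduled content discovery: sample table data and alert
if regulated data (here, credit card numbers) shows up where policy
says it shouldn't. The Luhn check cuts false positives."""
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(sample_rows):
    """Yield values that look like real card numbers."""
    for row in sample_rows:
        for value in row:
            for match in CARD_PATTERN.finditer(str(value)):
                candidate = re.sub(r"[ -]", "", match.group())
                if luhn_valid(candidate):
                    yield candidate

# Toy sample standing in for rows pulled from the monitored database
rows = [("alice", "order shipped"), ("bob", "4111 1111 1111 1111")]
hits = list(find_card_numbers(rows))
if hits:
    print(f"ALERT: {len(hits)} possible card number(s) in a stack not approved for PCI data")
```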

Applied Network Security Analysis: Collection and Analysis = A Fighting Chance

In the introduction to our Applied Network Security Analysis series, we talked about monitoring everything, and the limitations of a log-centric data collection approach, in our battle to improve security operational processes. Now let’s dig in a little deeper and understand what kind of data collection foundation makes sense, given the types of analysis we need to deal with our adversaries.

Let’s define the critical data types for our analysis. First are the foundational elements, which were covered ad nauseam in our Monitoring Up the Stack paper. These include event logs from the network, security, databases, and applications. We have already pointed out that log data is not enough, but you still need it. The logs provide a historical view of what happened, as well as the basis for the rule base needed for actionable alerts. Next we’ll want to add the data commonly used by SIEM devices – network flows, configuration data, and some identity information. These additional data types provide increased context to detect patterns of potential badness. But this is not enough – we need to look beyond these data types for more detail.

Full Packet Capture

As we wrote in the React Faster and Better paper:

One emerging advanced monitoring capability – the most interesting to us – is full packet capture. These devices basically capture all traffic on a given network segment. Why? The only way you can really piece together exactly what happened is to use the actual traffic. In a forensic investigation this is absolutely crucial, providing detail you cannot get from log records.

Going back to a concept we call the Data Breach Triangle, you need three components for a real breach: an attack vector, something to steal, and a way to exfiltrate it. It’s impossible to stop all potential attacks, and you can’t simply delete all your data, so we advocate heavy perimeter egress filtering and monitoring, to (hopefully) prevent valuable data from escaping your network. So why is having the packet stream so important? It is a critical facet of heavy perimeter monitoring. The full stream can provide a smoking gun for an actual breach, showing whether data actually left the organization, and which data. If you look at ingress traffic, the network capture enables you to pinpoint the specific attack vector(s) as well. We will discuss both these use cases, and more, in additional detail later in this series, but for now it’s enough to say that full network packet capture data is the cornerstone of Applied Network Security Analysis.

Intelligence and Context

Two additional data sources bear mentioning: reputation and malware. Both these data types provide extra context to understand what is happening on your networks, and are invaluable for refining alerts.

  • Reputation: Wouldn’t it be great if you knew some devices and/or destinations were up to no good? If you could infer some intent from just an IP address or other identifying characteristics? Well you can, at least a bit. By leveraging some of the services that aggregate data on command and control networks, and on other known bad actors, you can refine your alerts and optimize your packet capture based on behavior, not just on luck. Reputation made a huge difference in both email and web security, and we expect a similar impact on more general network security. This data helps focus monitoring and investigation on areas likely to cause problems. (A small sketch of this kind of enrichment appears after this post.)
  • Malware samples: A log file won’t tell you that a packet carried a payload with known malware. But samples of known malware are invaluable when scrutinizing traffic as it enters the network, before it has a chance to do any damage. Of course nothing is foolproof, but we are trying to get smarter and optimize our efforts. Recognizing something that looks bad as it enters the network would provide a substantial jump in blocking malware – especially compared to other folks, whose game is all about cleaning up the messes after they fail to block it.

We will dive into how to leverage these data types by walking through the actual use cases where this data pays dividends later in the series. But for now our point is that more data is better than less, and without building a foundation of data collection, analysis is likely futile.

Digesting Massive Amounts of Data

The challenge of collecting and analyzing a multi-gigabit network stream is significant, and each vendor is likely to have its own special sauce to collect, index, and analyze the data stream in real time. We won’t get into specific technologies or approaches – after all, beauty is in the eye of the beholder – but there are a couple things to look for:

  • Collection integrity: A network packet capture system that drops packets isn’t very useful, so the first and foremost requirement is the ability to collect network traffic at your speeds. Given that you are looking to use this data for investigation, it is also important to maintain traffic integrity to prove packets weren’t dropped.
  • Purpose-built data store: Unfortunately MySQL won’t get it done as a data store. The rate of insertions required to deal with 10Gbps traffic demands something built specifically for that purpose. Again, there will be lots of puffery about this data store or that one. Your objective is simply to ensure the platform or product you choose will scale to your needs.
  • High-speed indexing: Once you get the data into the store, you need to make sense of it. This is where indexing and deriving metadata become critical. Remember, this has to happen at wire speed, and is likely to involve identifying applications (like an application-aware firewall or IDS/IPS) and enriching the data with geolocation and/or identity information.
  • Scalable storage: Capturing high-speed network traffic demands a lot of storage. And we mean a lot (see the back-of-the-envelope math below). So you need to calibrate onboard storage against archiving approaches, optimizing the amount of storage on the capture devices based on the number of days of traffic to keep. Keep in mind that the metadata
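How much is “a lot”? A quick back-of-the-envelope calculation makes the point – the utilization, retention, and compression numbers below are illustrative assumptions, not measurements.

```python
"""Rough storage math for full packet capture retention.
All inputs (utilization, retention window, compression ratio)
are illustrative assumptions -- plug in your own numbers."""

def capture_storage_tb(link_gbps: float, utilization: float,
                       days: int, compression: float = 1.0) -> float:
    """Terabytes needed to retain full packet capture for `days` days."""
    bytes_per_day = link_gbps * 1e9 / 8 * utilization * 86_400  # link rate -> bytes/day
    return bytes_per_day * days / compression / 1e12

# Example: a 10Gbps link at 40% average utilization,
# 30 days of retention, 2:1 compression.
print(f"{capture_storage_tb(10, 0.4, 30, 2.0):.0f} TB")  # -> 648 TB
```

Even with 2:1 compression, one busy 10Gbps segment runs to hundreds of terabytes for a month of retention, which is why calibrating onboard storage against archiving matters so much.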

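The reputation enrichment described above is just as simple to sketch. This assumes you already have flow records and a feed of known-bad netblocks; the hardcoded networks and flows below are stand-ins for a real reputation service and your flow collector.

```python
"""Tag flow records with reputation context so alerting and packet
capture can be prioritized. The 'feed' is a hardcoded stand-in for
a real reputation service."""
import ipaddress

# Hypothetical known-bad command-and-control netblocks
BAD_NETS = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/25")]

def reputation(ip: str) -> str:
    """Return a coarse reputation verdict for an IP address."""
    addr = ipaddress.ip_address(ip)
    return "known-bad" if any(addr in net for net in BAD_NETS) else "unknown"

# Example flow records (stand-ins for NetFlow/IPFIX output)
flows = [
    {"src": "10.1.2.15", "dst": "203.0.113.7", "bytes": 84_000_000},
    {"src": "10.1.2.16", "dst": "93.184.216.34", "bytes": 1_200},
]
for flow in flows:
    if reputation(flow["dst"]) == "known-bad":
        print(f"Prioritize capture: {flow['src']} -> {flow['dst']} "
              f"({flow['bytes']} bytes to a known-bad destination)")
```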

Incite 10/26/2011: The Curious Case of Flat Stanley

Flat Stanley has it pretty good. If you have elementary school age kids, you probably know all about him. Flat Stanley is a cute story about a kid who gets flattened, and then spends most of the book trying to regain his natural form. Many teachers have kids do a Flat Stanley project, where they color a picture and send it to a friend or relative. The recipient then takes pictures of Flat Stanley doing something from their daily routine and writes a letter to send back with the photo. The kids learn a bit about someone else, and they have to read the letter. Win/win.

Last week, XX2 gave me her Flat Stanley to take on a trip. I started at SecTor CA up in Toronto, so Flat Stanley got to take a picture by the CN Tower. While I’m on this topic, I need to shout out for the folks behind SecTor CA. It’s a great conference, with great speakers and a great community. If you are in or around Toronto, you need to get to SecTor CA. They even invited Stanley to get up on stage and talk about his curious life (picture below). The audience was enthralled. Evidently Stanley doesn’t make too many high-profile keynote speeches, so XX2’s teacher showed the class the picture. It was a big hit.

Turns out the wonderful Arlen clan also has lots of experience with Flat Stanley, so we traded stories of what they did with him. They even told tales of Flat Stanley going to London and attending the Royal Wedding. That dude gets around.

Then I took Flat Stanley on my annual golf trip with the boys. Why not? That keynote speech business is hard work, and Stanley needed a bit of R&R. I’m pretty sure I should have had Stanley hit a few drives for me, since he couldn’t have done worse. Let’s just say I should stick to writing and pontificating. I did get some good photos of Stanley in the golf cart, and putting in a birdie. Stanley is a child, so I put him to bed before the evening festivities. And that’s all I’ll say about that.

But all told, Flat Stanley has a pretty good gig. He travels around the world and experiences interesting stuff. Which, when I come to think about it, is kind of what I do. And I’m not flat either. That would be a win for me.

-Mike

Photo credits: Mike Rothman on his rockin’ iPhone 4S

Incite 4 U

Getting Binary on Risk Assessment: If there is one thing I can say with a high level of confidence, it’s that math guys will defend math. Alex Hutton doesn’t disappoint, as he critiques Ben Sapiro’s Binary Risk Assessment thought balloon (presented at SecTor CA). Alex is balanced, but objects to calling Ben’s approach risk assessment – instead he calls it a way to assess vulnerability severity. Vernacular and semantics – the tools of lawyers and, seemingly, math guys. What I like about Ben’s approach is that it’s simple and quick. Most real risk assessment methods are neither. And given the need to prioritize actions in real time, it’s better to be quick than right to 5 decimal places. So I like Ben’s approach – read it and use it. That doesn’t mean you shouldn’t still push toward true risk quantification (if you have that kind of threshold for pain), but understand that there is a time and place for each approach. – MR

NoSQL on NoCloud: I am not surprised that Oracle launched a NoSQL database at OpenWorld. NoSQL threatens the relational DB status quo with cheaper, more agile capabilities and greater data capacity. What does surprise me is their release of NoSQL on a big-ass big data appliance. So new, yet so old school. This is especially interesting in light of the news that Oracle is acquiring RightNow while talkin’ smack about how Salesforce.com is the roach motel of the cloud. I think some of this puffery is because Oracle was late to adopt the cloud, much as Microsoft was with the Internet, but they are certainly making a concerted cloudy push now. Regardless, the big appliance deployment could really work. It’s anti-cloud, but wears like a comfortable old jacket. And it’s so self-contained that it’s generic storage, like a SAN, and you’ll likely be able to outsource security and maintenance and just worry about pushing data. I think this will be very popular with small enterprises that just need to get work done without worrying too much about new technologies. – AL

Security small guy syndrome: I think I have ranted about this one before, but one of my pet peeves is people in security talking about how “we have to educate the users/developers/business/whatever.” Because, more often than not, when they say ‘educate’ they really mean ‘indoctrinate’. To me it always sounds like small guy syndrome – you know, the kid who has all the answers if the stupid world would just listen! Chris Eng pokes at a recent presentation that sounds like it falls into this category. It isn’t that security shouldn’t talk to development or try to work with them, but we will never succeed if we don’t understand their priorities in the context of our own bias. Even then their priorities will never completely align with ours, because we have different jobs. So my advice is to work with developers, but don’t expect to change them – instead assume you will be adding whatever else you need to improve the end product (secure code, right?). – RM

Cyber-insurance: Win or Futility? We are starting to see better analyses of whether cyber-insurance makes sense. I have been pretty negative, because it wasn’t clear to me that the underwriting was based on any real loss data – which means the environment has been rife with Ouija board pricing. There is a good primer on NetworkWorld explaining how to maybe use cyber-insurance effectively, and I have seen a


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.