Securosis Research

Random Thoughts on Securing Applications in the Cloud

How do you secure data in the cloud? The answer is "it depends". What type of cloud are you talking about – IaaS, PaaS, or SaaS? Public or private? What services or applications are you running? What data do you want to protect?

Following up on the things I learned at RSA, one statement I heard finally makes sense. A couple weeks ago Chris Hoff surprised me when, talking about data security in the cloud, he tweeted: "Really people need to be thinking more about app-level encryption." Statements like that normally make the information-centric security proponent in me smile with glee, but this time I did not get his point. There are lots of different models of the cloud, and lots of ways to protect data, so why the emphatic statement?

He answered the question during the Cloudiquantanomidatumcon presentation. Chris asked, "How do you secure data in two virtual machines running in the cloud?" The standard answer: PKI and SSL. Data at rest and data in motion are covered. With that model in your head, it does not look too complex. But during the presentation, especially in an IaaS context, you begin to realize this becomes a problem as you scale to many virtual machines with many users and dispersed infrastructure bits and pieces. As you multiply virtual machines and add users, you not only create a key management problem, but also lose the context of which users should be able to access the data.

Encryption at the app layer keeps data secure both at rest and in motion, should reduce the key management burden, and helps address data usage security. App-layer encryption has about the same level of complexity with two VMs, but its complexity scales up much more gradually as you expand the application across multiple servers, databases, storage devices, and whatnot. So Chris convinced me that application encryption is the way to scale, and this aligns with the research paper Rich and I produced on Database Encryption, but for slightly different reasons.
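To make the architectural point concrete, here is a minimal sketch of the app-layer encryption idea: the application encrypts data before it ever touches storage or the network, so every VM, disk, and wire in between only ever sees ciphertext, and the keys live in one place. The XOR keystream below is a toy stand-in for a real cipher (you would use AES-GCM via a vetted library in practice) – the point is where the encryption happens, not the crypto itself.

```python
# Toy sketch: encryption happens inside the application tier, so the
# storage and network layers only ever handle opaque bytes. NOT real
# crypto -- the HMAC-keystream XOR is a stand-in for AES-GCM.
import hashlib
import hmac
import os


class AppLayerCrypto:
    def __init__(self, key: bytes):
        self.key = key  # held by the application tier only

    def _keystream(self, nonce: bytes, length: int) -> bytes:
        # Derive a per-message keystream from the key and a fresh nonce.
        stream = b""
        counter = 0
        while len(stream) < length:
            stream += hmac.new(self.key, nonce + counter.to_bytes(8, "big"),
                               hashlib.sha256).digest()
            counter += 1
        return stream[:length]

    def encrypt(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        ks = self._keystream(nonce, len(plaintext))
        return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

    def decrypt(self, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:16], blob[16:]
        ks = self._keystream(nonce, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, ks))


app = AppLayerCrypto(key=os.urandom(32))
record = app.encrypt(b"customer SSN: 123-45-6789")
# Every other VM, database, and storage device sees only ciphertext.
assert b"123-45" not in record
# Only the application, holding the key, can recover the data.
assert app.decrypt(record) == b"customer SSN: 123-45-6789"
```

Notice that adding more servers or storage devices changes nothing here: there is still one key, held by the application, and everything downstream handles opaque blobs. That is the scaling property Hoff was getting at.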
I can't possibly cover all the nuances of this discussion in a short post, and this is big picture stuff. Honestly, it's a model that theoretically makes a lot of sense – but then again so does DRM, and production deployments of that technology are as rare as hen's teeth. Hopefully this will make sense before you find yourself virtually knee deep in servers.


React Faster and Better: Index

With yesterday's post, we have reached the end of the React Faster and Better series on advanced incident response. This series focuses a bit more on tools and tactics than Incident Response Fundamentals did. For some of you, this will be the first time you are seeing some of these posts. No, we aren't cheating you – we have moved our blog series to our Heavy Feed (http://securosis.com/blog/full) to keep the main feed focused on news and commentary. Over the next week or so, we'll be turning the series into some white paper goodness, so stay tuned for that.

Introduction
Incident Response Gaps
New Data for New Attacks
Alerts & Triggers
Initial Incident Data
Organizing for Response
Kicking off a Response
Contain and Respond
Respond, Investigate, and Recover
Piecing It Together

Check it out.


FireStarter: Risk Metrics Are Crap

I recently got into a debate with someone about cyber-insurance. I know some companies are buying insurance to protect against a breach, or to contain risk, or for some other reason. In reality, these folks are flushing money down the toilet. Why? Because the insurance companies are charging too much. We've already had some brave soul admit that the insurers have no idea how to price these policies because they have no data, so they are making up the numbers. And I assure you, they are not going to put themselves at risk, so they are erring on the side of charging too much. Which means buyers of these policies are flushing money down the loo.

Of course, cyber-insurance is just one example of trying to quantify risk. And taking the chance that the ALE heads and my FAIR-weather friends will jump on my ass, let me bait the trolls and see what happens. I still hold that risk metrics are crap. Plenty of folks make up analyses in attempts to quantify something we really can't. Risk means something different to everyone – even within your organization. I know FAIR attempts to standardize vernacular and get everyone on the same page (which is critical), but I am still missing the value of actually building the models and plugging made-up numbers in.

I'm pretty sure modeling risk has failed miserably over time. Yet lots of folks continue to do so, with catastrophic results. They think generating a number makes them right. It doesn't. If you don't believe me, I have a tranche of sub-prime mortgages to sell you. There may be examples of risk quantification wins in security, but it's hard to find them. Jack is right: the cost of non-compliance is zero* (*unless something goes wrong). I just snicker at the futility of trying to estimate the chance of something going wrong. And if a bean counter has ever torn apart your fancy spreadsheet estimating such risk, you know exactly what I'm talking about.
That said, I do think it's very important to assess risk, as opposed to trying to quantify it. No, I'm not talking out of both sides of my mouth. We need to be able to categorize every decision into a number of risk buckets, which we can use to compare the relative risk of any decision against the other choices we could make. For example, we should be able to evaluate the risk of firing our trusted admin (probably pretty risky, unless your de-provisioning processes kick ass) versus not upgrading your perimeter with a fancy application-aware box (not as risky, because you already block Facebook and do network layer DLP). But you don't need to be able to say the risk of firing the admin is 92 and the risk of not upgrading the perimeter is 25. Those numbers are crap, and smell as bad as the vendors who try to tie their security products to a specific ROI.

BTW, I'm not taking a dump on all quantification. I have always been a big fan of security (as opposed to risk) metrics. From an operational standpoint, we need to measure our activity and work to improve it. I have been an outspoken proponent of benchmarking, which requires sharing data (h/t to New School), and I expect to be kicking off a research project to dig into security benchmarking within the next few weeks. And we can always default to Shrdlu's next-generation security metrics, which are awesome. But I think spending a lot of time trying to quantify risk continues to be a waste. I know you all make decisions every day because Symantec thinks today's CyberCrime Index is 64 and that's down 6%. Huh? WTF? I mean, that's just making sh*t up.

So fire away, risk quantifiers. Why am I wrong? What am I missing? How have you achieved success quantifying risk? Or am I just picking on the short bus this morning?

Photo credits: "Smoking pile of sh*t – cropped" originally uploaded by David T Jones


React Faster and Better: Piecing It Together

We have been through all the pieces of our advanced incident response method, React Faster and Better, so it is time to wrap up this series. The best way to do that is to run through a sample incident, with commentary to provide the context you need to apply the method to something tangible. It's a bit like watching a movie while listening to the director's commentary. But those guys are actually talented. For brevity we will use an extremely simple, high-level example of how the three response tiers evaluate, escalate, and manage incidents:

The alert

It's Wednesday morning and the network analyst has already handled a dozen or so network/IDS/SIEM alerts. Most indicate probing from standard network script-kiddie tools and are quickly blocked and closed (often automatically). He handles those himself – just another day in the office. Then the network monitoring tool pings an alert for an outbound request on a high port to an IP range located in a country known for intellectual property theft. The analyst needs to validate the origin of the packet, so he looks and sees the source IP is in Engineering. Ruh-roh. The tier 1 analyst passes the information along to a tier 2 responder. Important intellectual property may be involved and he suspects malicious activity, so he also phones the on-call handler to confirm the potential seriousness of the incident. Tier 2 takes over, and the tier 1 analyst goes back to his normal duties.

This is the first indication that something may be funky. Probing is nothing new, and tier 1 needs to handle that kind of activity itself. But the outbound request may very well indicate an exfiltration attempt. And tracing it back to a device that does have access to sensitive data means it's definitely something to investigate more closely. This kind of situation is why we believe egress monitoring and filtering are so important. Monitoring is generally the only way you can tell whether data is actually leaking.
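The egress triage step the tier 1 analyst performs can be sketched roughly as follows. This is a hypothetical illustration, not any particular product's logic: the netblocks, subnet, and flow record format are all invented for the example. The idea is simply that outbound flows get checked against a watchlist of suspect destinations, and a hit from a subnet holding sensitive data earns an immediate escalation.

```python
# Hypothetical egress triage: flag outbound flows to watchlisted
# destination ranges, and escalate when the source sits in a subnet
# that holds sensitive data. All addresses here are illustrative.
import ipaddress

SUSPECT_RANGES = [ipaddress.ip_network("203.0.113.0/24")]   # example watchlisted destination range
SENSITIVE_SOURCES = [ipaddress.ip_network("10.20.0.0/16")]  # e.g., the Engineering subnet


def triage(flow):
    """Return an escalation reason for an outbound flow, or None."""
    src = ipaddress.ip_address(flow["src"])
    dst = ipaddress.ip_address(flow["dst"])
    if any(dst in net for net in SUSPECT_RANGES):
        if any(src in net for net in SENSITIVE_SOURCES):
            return "escalate: sensitive source talking to watchlisted destination"
        return "investigate: watchlisted destination"
    return None


# The scenario above: an outbound request on a high port, traced back
# to an Engineering source -> straight to tier 2.
alert = triage({"src": "10.20.4.7", "dst": "203.0.113.50", "dport": 31337})
assert alert.startswith("escalate")
```

Real egress monitoring obviously works on live flow data with far richer context, but the triage logic – destination reputation first, then source sensitivity – follows the same shape.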
At this point the tier 1 analyst should know he is in deep water. He has confirmed the issue and pinpointed the device in question. Now it's time to hand it off to tier 2. Note that the tier 1 analyst follows up with a phone call to ensure the hand-off happens and that there is no confusion.

How bad is bad?

The tier 2 analyst opens an investigation and begins a full analysis of network communications from the system in question. The system is no longer actively leaking data, but she blocks any traffic to that destination on the perimeter firewall by submitting a high priority request to the firewall management team. After that change is made, she verifies that traffic is in fact being blocked. She sets an alert for any other network traffic from that system, and calls or visits the user – who predictably denies knowing anything about it. She also learns that the system normally doesn't have access to sensitive intellectual property, which may indicate privilege escalation – another bad sign. Endpoint protection platform (EPP) logs for the system don't indicate any known malware. She notifies her tier 3 manager of the incident and begins a deeper investigation of previous network traffic using the network forensics data. She also starts looking into system logs to begin isolating the root cause. Once the responder notices outbound requests to a similar destination from other systems on the same subnet, she informs incident response leadership that they may be experiencing a serious compromise. Then she finds that the system in question connected to a sensitive file server it normally doesn't access, and transferred/copied entire directories. It's going to be a long night.

As we have discussed, tier 2 tends to focus on network forensics because it's usually the quickest way to pinpoint attack proliferation and severity. The first step is to contain the issue, which entails blocking traffic to the external IP – this should temporarily eliminate any data leakage.
Remember, you might not actually know the extent of the compromise, but that shouldn't stop you from taking decisive action to contain the damage as quickly as possible. At this point tier 3 is notified – not necessarily to take action, but so they are aware there might be a more serious issue. It's this kind of proactive communication that streamlines escalation between response tiers. Next, the tier 2 analyst needs to determine how far the issue has spread within the environment. So she searches through the logs and finds a similar source, which is not good. That means more than one device is compromised, and it could represent a major breach. Worse yet, she sees that at least one of the involved systems purposely connected to a sensitive file store and removed a big chunk of content. So it's time to escalate and fully engage tier 3. Not that it hasn't been fun thus far, but now the fun really begins.

Bring in the big guns

Tier 3 steps in and begins in-depth analysis of the involved endpoints and associated network activity. They identify the involvement of custom malware that initially infected a user's system via drive-by download after clicking a phishing link. No wonder the user didn't know anything – they didn't have a chance against this kind of attack. An endpoint forensics analyst then discovers what appears to be the remains of an encrypted RAR file on one of the affected systems. The network analysis shows no evidence the file was transferred out. It seems they dodged a bullet and detected the command and control traffic before the data exfiltration took place. The decision is made to allow what appears to be encrypted command and control traffic over a non-standard port, while blocking all outbound file transfers (except those known to be part of normal business processes). Yes, they run the risk of blocking something legit, but senior management is now involved and has decided this is a worthwhile risk, given the breach in progress.
To limit potential data loss through the C&C channels left open, they


Friday Summary: February 25, 2011

In the relatively short period of time I have been on this planet, three time periods really stand out to me as watershed moments in computing technology.

The first was the dawn of the personal computing era, which conveniently overlapped with the golden age of video arcades. For me it started the day my elementary school teacher introduced us to a Commodore PET, ran through the first Mac, and tapered off in the late 80s when home computers stopped being an anomaly. I don't think the excitement I felt was merely the result of being an enthusiastic young male. ASCII porn didn't really cut it, even for a 14 year old geek.

Next was the dot-com era, around the time I should have graduated college if I hadn't dragged out my undergrad a solid 8 years. In my memories it started when I signed up with my first dial-up ISP and played with Gopher and newsgroups – through the emergence of Mosaic, Netscape, and my first web sites (ugly) – and faded with the dot-com crash and crappy TV studio websites (which still, mostly, suck). Personally I went from paramedic, to PC tech, to sysadmin, to network admin, to developer in those short years. (Fast learner, I guess.)

The third era? Right now. It started with the dual emergence of the iPhone and Amazon Web Services, and it's years away from ending. For me the bellwether moments were my first Intel-based MacBook Pro running Parallels (I converted the official Gartner image into a VM to run it there), followed by the iPhone, with a little Dropbox mixed in. The overlapping models of mobility and cloud computing are creating one of the most exciting times to be in technology I can remember. With lower barriers to entry in terms of costs and hardware, and near-ubiquitous accessibility (even accounting for AT&T wireless), I'm more psyched today than even when I built my first little company to make doinky web apps and do a little security consulting.
I seriously wish I was out there doing startups, but it's not quite the time for a career change. When I can spin up 5 different servers, on 5 different operating systems, in 5 minutes for under $5 – from my iPad? That kicks so much more ass than making a crappy embossed background for my old 'professional' looking site. As for security? Oh my god, is this a freaking awesome time to do what we do. The threats matter, the assets are important, and the opportunities are nearly endless. I realize a lot of people are depressed about the whole industry game and compliance cycle, but that's a small penalty to pay for the excitement and meaning of our work. You don't get a seat at the table unless the stakes are high. Life is good. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Video of Rich on MSNBC. He apologizes for the eyebrow thing.
Mort cited talking about cloud security at BSides.
Rich quoted at SearchSecurity on cloud.
RSA Podcast on Agile development, Security Fail.
Protegrity calls Securosis one of their favorite blogs.
Data is Safe – Until It's Not. Apparently Adrian telling the retail sector they suck at security has legs. And fortunately for us WhiteHat Security published data to back up his claim.
Clearing The Air On DAM. Adrian's Dark Reading post.

Favorite Securosis Posts

Rich: FireStarter: The New Cold War. There seems to be lots of naivete out there. Guess what – they hack us, we hire people to hack them. The world goes on.
Mike Rothman & Adrian Lane: What You Really Need to Know about Oracle Database Firewall. Rich calls out marketing buffoonery. FTW.

Other Securosis Posts

React Faster and Better: Respond, Investigate, and Recover.
Could This Be WikiLeaks for the Criminal Computer Underground?
What I Learned at RSAC.
Incite 2/23/2011: Giving up.
RSA: the Only Difference Between a Rut and a Grave Is the Depth.
RSA: We Now Go Live to Our Reporters on the Scene.
How to Encrypt Block Storage in the Cloud with SecureCloud.
RSA 2011: A Few Pointers.
The Securosis Guide to RSA 2011: The Full Monty.

Favorite Outside Posts

Rich: Gunnar follows the Heartland cash. I haven't seen anyone else track the financials of a company involved in a major breach so closely. Before we start talking "dollars per record lost", we need more of this kind of work.
Mike Rothman: The obsession with next. Given that next is all we saw at RSA, this was a timely post on the 37signals blog.
Adrian Lane: Russian Cops Crash Pill Pusher Party. Oddly no arrests have been reported, but a great story.

Research Reports and Presentations

The Securosis 2010 Data Security Survey.
Monitoring up the Stack: Adding Value to SIEM.
Network Security Operations Quant Metrics Model.
Network Security Operations Quant Report.
Understanding and Selecting a DLP Solution.
White Paper: Understanding and Selecting an Enterprise Firewall.
Understanding and Selecting a Tokenization Solution.
Security + Agile = FAIL Presentation.

Top News and Posts

Zeus malware integrating SMS for hacking out-of-band authentication.
More on the HBGary Hack.
Lion Watch. With new FileVault. When to implement that is an open question.
SSDs resistant to erasure.
Updated SAFEcode Development Practices.
Oracle Releases Database Firewall.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Shrdlu, in response to What I Learned at RSAC.

Nice piece, Adrian – and it was good to meet you too. The general sentiment I heard from vendors I talked to was that the overall mood was better at RSA this year and there were more end-users (as opposed to vendors and partners selling to one another). I can't form an opinion, as this was my first RSA, but I've been to a lot of other conferences and I really didn't see much difference between this one and other "commercial" ones. That being said, I did see some interesting stuff going on, and I think it's our job to seek it out and


Could This Be WikiLeaks for the Criminal Computer Underground?

When Brian Krebs sent me a link to his latest article on illegal pharmacy networks, my only response was: Holy friggin' awesomesauce!!! Brian got his hands on 9GB of financial records for what is likely the world's biggest online spammer/illegal pharmacy network:

In total, these promoters would help Glavmed sell in excess of 1.5 million orders from more than 800,000 consumers who purchased knockoff prescription drugs between May 2007 and June 2010. All told, Glavmed generated revenues of at least $150 million.

Brian told me this is merely the first of a lengthy series he is putting together as he digs through the data and performs additional research. This is true investigative reporting, folks. Here's why I think this could be a watershed moment in computer crime: while this may only be the books for one big criminal pharmacy, it shows all the linkages to other corners of the global criminal networks. Spammers, black hat hackers, SEO, money launderers… it's probably all in there, especially once Brian correlates it with his other sources. He did answer one little question I've always had: do they actually send people the little blue pills? Yep. And Brian has the shipping records to prove it.


React Faster and Better: Respond, Investigate, and Recover

After you have validated and filtered the initial alert, then escalated to contain and respond to the incident, you may need to escalate further for specialized response, investigation, and (hopefully) recovery. This progression to the next layer of escalation varies more among the organizations we have talked with than the earlier tiers – due to large differences in available resources, skill sets, and organizational priorities – but as with the rest of this series, the essential roles are fairly consistent.

Tier 3: Respond, Investigate, and Recover

Tier 3 is where incident response management, specialized resources, and the heavy hitters reside. In some cases escalation may be little more than a notification that something is going on. In others it might be a request for a specialist, such as a malware analyst for endpoint forensics. This is also the level where most in-depth investigation is likely to occur – including root cause analysis and management of recovery operations. Finally, this level might include all-hands-on-deck response for a massive incident with material loss potential. Despite the variation in when Tier 3 begins, the following structure aligns at a high level with the common processes we see:

Escalate response: Some incidents, while not requiring the involvement of higher management, may need specialized resources that aren't normally involved in a Tier 2 response. For example, if an employee is suspected of leaking data you may need a forensic examiner to look at their laptop. Other incidents require the direct involvement of incident response management and top-tier response professionals. We have listed this as a single step, but it is really a self-contained response cycle of constantly evaluating needs and pulling in the right people – all the way up to executive management if necessary.

Investigate: You always investigate to some degree during an incident, but depending on its nature there may be far more investigation after initial containment and remediation. As with most steps in Tier 3, the lines aren't necessarily black and white. For certain kinds of incidents – particularly advanced attacks – the investigation and response (and even containment) are carried out in lockstep. For example, if you detect customized malware, you will need to perform concurrent malware analysis, system forensic analysis, and network forensic analysis.

Determine root cause: Before you can close an incident you need to know why it happened and how to prevent it from happening again. Was it a business process failure? Human error? Technical flaw? You don't always need this level of detail to remediate and get operations back up and running on a temporary basis, but you do need it to fully recover – and more importantly to ensure it doesn't happen again. At least not using the same attack vector.

Recover: Remediation gets you back up and running in the short term, but in recovery you finish closing the holes and restore normal operations. The bulk of recovery operations are typically handled by non-security IT operations teams, but at least partially under the direction of the security team. Permanent fixes are applied, permanent holes closed, and any restored data examined to ensure you aren't re-introducing the very problems that allowed the incident in the first place.

(Optional) Prosecute or discipline: Depending on the nature of the incident you may need to involve law enforcement and carry a case through to prosecution, or at least discipline or fire an employee. Since nothing involving lawyers except billing ever moves quickly, this can extend many years beyond the official end of an incident.

Tier 3 is where the buck stops. There are no other internal resources to help if an incident exceeds capabilities. In that case outside contractors/specialists need to be brought in, who are then (effectively) added to your Tier 3 resources.

The Team

We described Tier 1 as dispatchers and Tier 2 as firefighters. Sticking with that analogy, Tier 3 is composed of chiefs, arson investigators, and rescue specialists. These are the folks with the strongest skills and most training in your response organization.

Primary responsibilities: Ultimate incident management. Tier 3 handles incidents that require senior incident management and/or specialized skills. These senior individuals manage incidents, use their extensive skills for complex analysis and investigation, and coordinate multiple business units and teams. They also coordinate, train, and manage lower-level resources.

Incidents they manage: Anything that Tier 2 can't handle. These are typically large or complex incidents, or more-constrained incidents that might involve material losses or extensive investigation. A good rule of thumb is that if you need to inform senior or executive management, or involve law enforcement and/or human resources, it's likely a Tier 3 incident. This tier also includes specialists such as forensics investigators, malware analysts, and those who focus on a specific domain as opposed to general incident response.

When they escalate: If the incident exceeds the combined response capabilities of the organization. In other words, if you need outside help, or if something is so bad (e.g., a major public breach) that executive management becomes directly involved.

The Tools

These responders and managers have a combination of broad and deep skills. They manage large incidents with multiple factors and perform the deep investigations needed to support full recovery and root cause analysis. They tend to use a wide variety of specialized tools, including those they write themselves. It's impossible to list all the options, but here are the main categories:

Network (full packet capture) forensics: You've probably noticed this category appearing at all the levels. While the focus in the other response tiers is more on alerting and visualization, at this level you are more likely to dig deep into the packets to fully understand what's going on, for both immediate response and later investigation. If you don't capture it you can't analyze it, and full packet capture is essential for the advanced incident response which is our focus here. Once data is gone you can't get it back – thus our incessant focus on capturing as much as you can, when you can.

Endpoint


What I Learned at RSAC

I was surprised at the negative tweets and blog posts after the RSA show this year, many by the security professionals at the core of this industry. I have been to RSA most years since 1997. This year, discontent and snarkiness seemed to be running high. "There is nothing new." "There is no innovation." "The vendors are all lying." "These products don't work as advertised." "I have seen this presentation before." "That attack won't work in 'the real world'." I saw nobody excited about the concept of winning a car – what's up with that!?! You know it's bad when attendees complain about booth babes – booth babes! – and then go to the Barracuda party. You know who you are.

This year, like most years, I learned a lot. I got a great introduction to mobile OS security from Zach Lanier (Quine) over dinner. I learned a lot about Amazon EC2 and related security issues. I learned that a vendor may have lied to me about their key manager. Jeremiah Grossman's presentation got me thinking about how I can improve my Agile SDL presentation. I learned that CIOs and CISOs are still struggling with the same challenges I did 10 years ago, and falling victim to the same role, organizational, and communication pitfalls. Chris Hoff answered a question on why app-level encryption will probably scale better when protecting data in VMs. Talking to attendees, I learned there are a couple technologies that are still giant mysteries to average IT professionals. I learned that far fewer developers have worked within an Agile process than I expected. And by watching security and non-security people, I am still learning what makes a good analyst.

Beyond what I learned, there is the whole personal side of it: meeting friends and getting some of the inside stories about security breaches and vendors. I got to meet, face to face, a couple of the people I have criticized here, and was relieved that they appreciated my comments and did not take them personally.
I got to meet people I admire and respect, including Michael Howard of Microsoft and Ivan Ristic of Qualys. I got to talk Rugged software with a very diverse group of people. But perhaps the biggest single event, and the one I have had the most fun at every year for the last four, is the Security Bloggers Awards – where else in the world am I going to attend a professional gathering and see 50 friends in the same room at the same time? I recognize that only about 35% of this is due to sessions and RSA-sanctioned events, but all the other training sessions, parties, and people would not be in San Francisco at one time if it were not for the conference. The sheer gravity of the RSA Conference pulls all these people and events together. If you're not getting something out of the conference, if you are burned out and not learning, look in the mirror. You won't be hit on the head with a career-altering revelation every year, but there are too many smart people in attendance for you not to come away with lots of new ideas and reshaped perceptions. I am overjoyed that I can still get excited about this profession after 15 years, because there is always something new to learn.


What You *Really* Need to Know about Oracle Database Firewall

Nothing amuses me more than some nice vendor-on-vendor smackdown action. Well, plenty of things amuse me more, especially Big Bang Theory and cats on YouTube, but the vendor thing is still moderately high on my list. So I quite enjoyed this Dark Reading article on the release of the Oracle Database Firewall. But perhaps a little outside perspective will help. Here are the important bits:

  • As mentioned in the article, this is the first Secerno product release since their acquisition.
  • Despite what Oracle calls it, this is a Database Activity Monitoring product at its core. Just one with more of a security focus than audit/compliance, and based on network monitoring (it lacks local activity monitoring, which is why it’s weaker for compliance). Many other DAM products can block, and Secerno can monitor.
  • I always thought it was an interesting product. Most DAM products include network monitoring as an option. The real difference with Secerno is that they focused far more on the security side of the market, even though historically that segment is much smaller than the audit/monitoring/compliance side. So Oracle has more focus on blocking, and less on capturing and storing all activity.
  • It is not a substitute for Database Activity Monitoring products, nor is it “better” as Oracle claims. It is a form of DAM, but – as mentioned by competitors in the article – you still need multiple local monitoring techniques to handle direct access. Network monitoring alone isn’t enough. I’m sure Oracle Services will be more than happy to connect Secerno and Oracle Audit Vault to do this for you.
  • Secerno basically whitelists queries (automatically) and can block unexpected activity. This appears to be pretty effective for database attacks, although I haven’t talked to any pen testers who have gone up against it. (They do also blacklist, but the whitelist is the main secret sauce.)
  • Secerno had the F5 partnership before the Oracle acquisition. It allowed you to set WAF rules based on something detected in the database (e.g., block a signature or host IP). I’m not sure if they have expanded this post-acquisition. Imperva is the only other vendor I know of to integrate DAM/WAF.
  • Oracle generally believes that if you don’t use their products you are either a certified idiot or criminally negligent. Neither is true, and while this is a good product I still recommend you look at all the major competitors to see what fits you best. Ignore the marketing claims.
  • Odds are your DBA will buy this when you aren’t looking, as part of some bundle deal. If you think you need DAM for security, compliance, or both… start an assessment process or talk to them before you get a call one day to start handling incidents.

In other words: a good product with advantages and disadvantages, just like anything else. More security than compliance, but like many DAM tools it offers some of both. Ignore the hype, figure out your needs, and evaluate to figure out which tool fits best. You aren’t a bad person if you don’t buy Oracle, no matter what your sales rep tells your CIO. And seriously – watch out for the deal bundling. If you haven’t learned anything from us about database security by now, hopefully you at least realize that DBAs and security don’t always talk as much as they should (the same goes for Guardium/IBM). If you need to be involved in any database security, start talking to the DBAs now, before it’s too late. BTW, not to toot our own horns, but we sorta nailed it in our original take on the acquisition. Next we will see their WAF messaging. And we have some details of how Secerno works.
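Conceptually, that kind of automatic query whitelisting boils down to fingerprinting SQL statements (stripping out literal values), learning the set of fingerprints seen during a baseline period, and then alerting on or blocking anything new. Secerno’s real engine is far more sophisticated – it models the grammar of the application’s SQL – so treat the sketch below as nothing more than a minimal illustration of the idea; all names and the crude normalization rules are my own assumptions, not how the product works:

```python
import re

def fingerprint(sql: str) -> str:
    """Reduce a SQL statement to a template by stripping literal values,
    so 'WHERE id = 42' and 'WHERE id = 99' produce the same fingerprint."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals -> ?
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals -> ?
    s = re.sub(r"\s+", " ", s)       # collapse whitespace
    return s

class QueryWhitelist:
    """Toy two-phase whitelist: learn fingerprints, then enforce them."""

    def __init__(self):
        self.known = set()
        self.learning = True

    def observe(self, sql: str) -> bool:
        """Return True if the statement should be allowed."""
        fp = fingerprint(sql)
        if self.learning:            # baseline period: learn everything
            self.known.add(fp)
            return True
        return fp in self.known      # enforcement: block the unexpected

# Baseline: the application's normal query shapes are learned...
fw = QueryWhitelist()
fw.observe("SELECT name FROM users WHERE id = 42")
fw.learning = False

# ...then a changed parameter still matches, but an injected tautology
# produces a new fingerprint and gets blocked.
assert fw.observe("SELECT name FROM users WHERE id = 99")
assert not fw.observe("SELECT name FROM users WHERE id = 99 OR 1=1")
```

The appeal of the approach is that the whitelist is built automatically from observed traffic rather than hand-written signatures – which is also why the baseline period matters so much, and why real products layer in session and user context on top of the raw statement shape.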


Incite 2/23/2011: Giving up

I’ve been in the security business a long time. I have ridden the up cycles through the peaks, and back down the slope to the inevitable troughs. One of my observations getting back from RSAC 2011 is the level of sheer frustration on the part of many security professionals today. Frustration with management, frustration with users, frustration with vendors. Basically lots of folks are burnt out and mad at the world. Maybe it’s just the folks who show up at RSA, but I doubt it. This seems to be true across the industry. A rather blunt tweet from 0ph3lia sums up the way lots of you feel: “Every day I’m filled with RAGE at this f***ing industry & the fact that I work in it. Maybe I’m just not cut out for the security industry.”

This is a manifestation of many things. Tight budgets for a few years. The ongoing skills gap. Idiotic users and management. Lying vendors. All contribute to real job dissatisfaction on a broad scale. So do you just give up? Get a job at Starbucks or in a more general IT role? Leave the big company and go to a smaller one, or vice versa? Is the grass going to be greener somewhere else? Only you can answer that question. But many folks got into this business over the past 5 years because security offered assured employment. And they were right. There are tons of opportunities, but at a significant cost.

I joke that security is Bizarro World, where a good day is when nothing happens. You are never thanked for stopping the attack, but instead vilified when some wingnut leaves their laptop in a coffee shop or clicks on some obvious phish. You don’t control much of anything, have limited empowerment, and are still expected to protect everything that needs to be protected. For many folks, going to work is like lying on a bed of nails for 10-12 hours a day.

So basically to be successful in security you need an attitude adjustment. Shack had a good riff on this yesterday. You can’t own the behaviors of the schmucks who work for your company. Not and stay sane. Sure, you may be blamed when something bad happens, but you have to separate blame from responsibility. If you do your best, you should sleep well. If you can’t sleep or are grumpy because security gets no love and you get blamed for user stupidity; or because you have to get a new job every 2-3 years; or for any of the million other reasons you may hate doing security; then it’s okay to give up. Your folks and/or your kids will still love you. Promise.

I gave up being a marketing guy because I hated it. That’s right, I said it. I gave up. After my last marketing gig ended, I was done. Finito. No amount of money was worth coming home and snapping at my family because of a dickhead sales guy, failed lead generation campaign, or ethically suspect behavior from a competitor. My life is too short to do something I hate. So is yours. So do some soul searching. If security is no good for you, get out. Do something else. Change is good. Stagnation and anger are not. -Mike

Photo credits: “happiness is a warm gun” originally uploaded by badjonni

Domo Arigato

My gratitude knows no bounds regarding winning the “Most Entertaining Security Blog” award at the Social Security Blogger Awards last week. Really. Truly. Honestly. I’ve got to thank the Boss because she’ll kick my ass if I don’t mention her first every time. Then I need to thank Rich and Adrian (and our extended contributor family) who put up with my nonsense every day. But most of all, I need to thank you. Every time you come up to me at a show and tell me you read my stuff (and actually like it), it means everything to me. I’m always telling you that I know how lucky I am. And it’s times like these, and getting awards like this, that make it real for me. So thanks again and I’ll only promise that I’ll keep writing as long as you keep reading. -Mike

Incite 4 U

Marketecture does not solve security problems: That was my tweet regarding Cisco’s new marketecture SecureX.
The good news is that Cisco has nailed the issues – namely the proliferation of mobile devices and the requisite re-architecting of networks to address the onslaught of bandwidth-hogging video traffic. This will fundamentally alter how we provide ingress and egress, and that will require change in our network security architectures. But what we don’t need is more PowerPoints of products in the pipeline, due at some point in the future. And that’s not even addressing the likelihood of data tagging actually working at scale. If Cisco had delivered on any of their other grand marketecture schemes (all of which looked great on paper), I’d have a little more patience, but they haven’t. Maybe Gillis and Co. have taken some kind of execution pill and will get something done. But until then I wouldn’t be budgeting for much. Is there a SKU for a marketecture? Cisco will probably have it first. – MR

You can’t secure a dead horse: Well, technically you can secure an actual deceased horse, but you know what I mean. Microsoft is getting ready to release Service Pack 1 for Windows 7, but nearly all organizations I talk with still rely on Windows XP to some degree. You know, the last operating system Microsoft produced before the Trustworthy Computing Initiative. The one that’s effectively impossible to secure. No matter what we do, we can’t possibly expect to secure something that was never built for our current threat environment. We’re hitting the point where the risks clearly outweigh the non-security related justifications. FWIW, my new favorite saying is: “If you are more worried about the security risks of cloud computing and iOS devices than using XP


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.