In the last two posts we covered the main preparation you need to get quick wins with your DLP deployment. First you need to put a basic enforcement process in place, then you need to integrate with your directory servers and major infrastructure. With these two bits out of the way, it’s time to roll up our sleeves, get to work, and start putting that shiny new appliance or server to use.
The differences between a long-term DLP deployment and our “Quick Wins” approach are goals and scope. With a traditional deployment we focus on comprehensive monitoring and protection of very specific data types. We know what we want to protect (at a granular level) and how we want to protect it, and we can focus on comprehensive policies with low false positives and a robust workflow. Every policy violation is reviewed to determine if it’s an incident that requires a response.
In the Quick Wins approach we are concerned less about incident management, and more about gaining a rapid understanding of how information is used within our organization. There are two flavors to this approach – one where we focus on a narrow data type, typically as an early step in a full enforcement process or to support a compliance need, and the other where we cast a wide net to help us understand general data usage to prioritize our efforts. Long-term deployments and Quick Wins are not mutually exclusive – each targets a different goal and both can run concurrently or sequentially, depending on your resources.
Remember: even though we aren’t talking about a full enforcement process, it is absolutely essential that your incident management workflow be ready to go when you encounter violations that demand immediate action!
Choose Your Flavor
The first step is to decide which of two general approaches to take:
- Single Type: In some organizations the primary driver behind the DLP deployment is protection of a single data type, often due to compliance requirements. This approach focuses only on that data type.
- Information Usage: This approach casts a wide net to help characterize how the organization uses information, and identify patterns of both legitimate use and abuse. This information is often very useful for prioritizing and informing additional data security efforts.
Choose Your Deployment Type
Depending on your DLP tool, it will be capable of monitoring and protecting information on the network, on endpoints, or in storage repositories – or some combination of these. This gives us three pure deployment options and four possible combinations.
- Network Focused: Deploying DLP on the network in monitoring mode provides the broadest coverage with the least effort. Network monitoring is typically the fastest to get up and running due to lighter integration requirements. You can often plug in a server or appliance over a few hours or less, and instantly start evaluating results.
- Endpoint Focused: Starting with endpoints should give you a good idea of which employees are storing data locally or transferring it to portable storage. Some endpoint tools can also monitor network activity on the endpoint, but these capabilities vary widely. In terms of Quick Wins, endpoint deployments are generally focused on analyzing stored content on the endpoints.
- Storage Focused: Content discovery is the analysis of data at rest in storage repositories. Since it often requires considerable integration (at minimum, a username and password to access a file share), these deployments, like endpoint deployments, involve more effort. That said, scanning major repositories is very useful, and in some organizations understanding stored data is as important as (or more important than) monitoring information moving across the network.
Network deployments typically provide the most immediate information with the lowest effort, but depending on what tools you have available and your organization’s priorities, it may make sense to start with endpoints or storage. Combinations are obviously possible, but we suggest you roll out multiple deployment types sequentially rather than in parallel to manage project scope.
Define Your Policies
The last step before hitting the “on” switch is to configure your policies to match your deployment flavor.
In a single type deployment, either choose an existing category in your tool that matches the data type, or quickly build your own policy. In our experience, most DLP tools include pre-built categories for the data types that commonly drive a DLP project. Don’t worry about tuning the policy – right now we just want to toss it out there and get as many results as possible. Yes, this is the exact opposite of our recommendation for a traditional, focused DLP deployment.
In an information usage deployment, turn on all the policies or enable promiscuous monitoring mode. Most DLP tools only record activity when there are policy violations, which is why you must enable the policies. A few tools can monitor general activity without relying on a policy trigger (either full content or metadata only). In both cases our goal is to collect as much information as possible to identify usage patterns and potential issues.
Now it’s time to turn on your tool and start collecting results.
Don’t be shocked – in both deployment types you will see far more information than in a focused deployment, including more potential false positives. Remember, you aren’t trying to manage every single incident; you want a broad understanding of what’s going on across your network, endpoints, and storage.
Analyze and PROFIT!
Now we get to the most important part of the process – turning all that data into useful information.
Once we collect enough data, it’s time to start the analysis. Our goal is to find broad patterns and identify any major issues. Here are some examples of what to look for:
- A business unit sending out sensitive data unprotected as part of a regularly scheduled job.
- Which data types broadly trigger the most violations.
- The volume of usage of certain content or files, which may help identify valuable assets that don’t cleanly match a pre-defined policy.
- Particular users or business units with higher numbers of violations or unusual usage patterns.
- False positive patterns, for tuning long-term policies later.
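A first pass at this kind of analysis can happen outside the DLP console. Here is a rough sketch, assuming your tool can export raw events as CSV (most can, though the field names used here, such as user, business_unit, and policy, are hypothetical and will need adjusting to match your tool's export):

```python
# Rough sketch of the pattern analysis above, run against a CSV export
# of raw DLP events. Field names (user, business_unit, policy) are
# hypothetical -- adjust them to whatever your tool actually exports.
import csv
from collections import Counter

def summarize(path):
    by_unit = Counter()    # violations per business unit
    by_policy = Counter()  # which data types trigger the most hits
    by_user = Counter()    # outliers worth a closer look
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_unit[row["business_unit"]] += 1
            by_policy[row["policy"]] += 1
            by_user[row["user"]] += 1
    return by_unit, by_policy, by_user

# e.g.:
# units, policies, users = summarize("dlp_events.csv")
# print("Top business units:", units.most_common(5))
# print("Top policies:", policies.most_common(5))
# print("Top users:", users.most_common(10))
```

Even a crude tally like this surfaces the scheduled-job and heavy-hitter patterns described above before you invest in tuning.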
All DLP tools provide some level of reporting and analysis, but ideally your tool will allow you to set flexible criteria to support the analysis.
What Did We Achieve?
If you followed this process, by now you’ve created a base for your ongoing DLP usage while achieving valuable short-term goals. In a short amount of time you have:
- Established a flexible incident management process.
- Integrated with major infrastructure components.
- Assessed broad information usage.
- Set a foundation for later focused efforts and policy tuning to support long-term management.
By following the Quick Wins process you can show immediate results while establishing the foundations of your program, without overwhelming yourself by forcing action on every possible alert before you understand your information usage patterns.
Not bad, eh?
Posted at Thursday 18th March 2010 12:02 am
I’m about to commit the single most egotistical act of my blogging/analyst career. I’m going to make up my own law and name it after myself. Hopefully I’m almost as smart as everyone says I think I am.
I’ve been talking a lot, and writing a bit, about the intersection of psychology and security. One example is my post on the anonymization of losses, and another is the one on noisy vs. quiet security threats.
Today I read a post by RSnake on the effectiveness of user training and security products, which was inspired by a great paper from Microsoft: So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users.
I think we can combine these thoughts into a simple ‘law’:
The rate of user compliance with a security control is directly proportional to the pain of the control vs. the pain of non-compliance.
We need some supporting definitions:
- Rate of compliance equals the probability the user will follow a required security control, as opposed to ignoring or actively circumventing said control.
- The pain of the control is the time added to an established process, and/or the time to learn and implement a new process.
- The pain of non-compliance includes the consequences (financial, professional, or social) and the probability of experiencing said consequences. Consequences exist on a spectrum – with financial as the most impactful, and social as the least.
- The pain of non-compliance must be tied to the security control so the user understands the cause/effect relationship.
I could write it out as an equation, but then we’d all make up magical numbers instead of understanding the implications.
Psychology tells us people only care about things which personally affect them, and that fuzzy principles like “the good of the company” rank low on the importance scale. It also tells us that immediate risks hold our attention far more than long-term risks, and that we rapidly de-prioritize both high-impact low-frequency events and high-frequency low-impact events. Economics teaches us how to evaluate these factors and use external influences to guide large-scale behavior.
Here’s an example:
Currently most security incidents are managed out of a central response budget, as opposed to business units paying the response costs. Economics tells us that we can likely increase the rate of compliance with security initiatives if business units have to pay for response costs they incur, thus forcing them to directly experience the pain of a security incident.
I suspect this is one of those posts that’s going to be edited and updated a bunch based on feedback…
Posted at Wednesday 17th March 2010 8:25 pm
By Mike Rothman
“WE HAVE MET THE ENEMY AND HE IS US.” POGO (1970)
I’ve worked for companies where we had to spend so much time fighting each other, the market got away. I’ve also worked at companies where internal debate and strife made the organization stronger and the product better. But there are no pure absolutes – as much as I try to be binary, most companies include both sides of the coin.
But when I read of the termination of Pennsylvania’s CISO because he dared to actually talk about a breach, it made me wonder – about everything. Dennis hit the nail on the head, this is bad for all of us. Can we be successful? We all suffer from a vacuum of information. That was the premise of Adam Shostack and Andrew Stewart’s book The New School of Information Security. That we need to share information, both good and bad, flattering and unflattering – to make us better at protecting stuff.
Data can help. Unfortunately most of the world thinks that security through obscurity is the way to go. As Adrian pointed out in Monday’s FireStarter, there isn’t much incentive to disclose anything unless an organization must – by law. The power of negative PR grossly outweighs the security benefit of information sharing. Which is a shame.
So what do you do? Give up? Well, actually maybe you do give up. Not on security in general, but on your organization. Every day you need to figure out if you can overcome the enemy within your four walls. If you can’t, then move on. I know, now is the wrong time to leave a job. I get that. But how long can you go in every day and get kicked in the teeth? Only you can decide that. But if your organization is a mess, don’t wait for it to get better.
If you do decide to stay, you need to discover the power of the peer group. Your organization will not sanction it, and don’t blame me, but find a local or industry group of peeps where you can share your dirt. You take a blood oath (just like in grade school) that what is spoken about in the group stays within the group and you spill the beans. You learn from what your peers have done, and they learn from you.
At this point we must acknowledge that widespread information sharing is not going to happen. Which sucks, but it is what it is. So we need to get creative and figure out an alternative means to get the job done. Find your peeps and learn from them.
Photo credit: “Pogo – Walt Kelly (1951) – front cover” originally uploaded by apophysis_rocks
Incite 4 U
Time to study marketing too… – RSnake is starting to mingle with some shady characters. Well, maybe not shady, but certainly on the wrong side of the rule of law. One of his conclusions is that it’s getting harder for the bad guys to do their work, at least the work of compromising meaty valuable targets. That’s a good thing. But the black hats are innovative and playing for real money, so they will figure something out and their models will evolve to continue generating profits. It’s the way of the capitalist. This idea of assigning a much higher value to a zombie within the network of a target makes perfect sense. It’s no different than how marketing firms charge a lot more for leads directly within the target market. So it’s probably not a bad idea for us security folks to study a bit of marketing, which will tell us how the bad guys will evolve their tactics. – MR
Lies, Damn Lies, and Exploits – We’ve all been hearing a ton about that new “Aurora” exploit (mostly because of all the idiots who think it’s the same thing as APT), but NSS Labs took a pretty darn interesting approach to all the hype. Assuming that every anti-malware vendor on the market would block the known Aurora exploit, they went ahead and tested the major consumer AV products against fully functional variants. NSS varied both the exploit and the payload to see which tools would still block the attack. The results are uglier than a hairless cat with a furball problem. Only one vendor (McAfee) protected against all the variants, and some (read the report yourself) couldn’t handle even the most minor changes. NSS is working on a test of the enterprise versions, but I love when someone ignites the snake oil. – RM
I hate C-I-A – Confidentiality, Integrity, and Availability is what it stands for. I was reminded of this reading this CIA Triad Post earlier today. Every person studying for their CISSP is taught that this is how they need to think about security. I always felt this was BS, along with a lot of other stuff they teach in CISSP classes, but that’s another topic. CIA just fails to capture the essence of security. Yeah, I have to admit that CIA represents three handy buckets that can compartmentalize security events, but they so missed the point about how one should approach security that I have become repulsed by the concept. Seriously, we need something better. Something like MSB. Misuse-Spoof-Break. Do something totally unintended, do something normal pretending to be someone else, or change something. Isn’t that a better way to think about security threats? It’s the “What can we screw with next?” triad. And push “denial of service” to the back of your mind. Script kiddies used to think it was fun, and some governments still do, but when it comes to hacking, it’s nothing more than a socially awkward cousin of the other three. – AL
Signatures in burglar alarm clothing – Pauldotcom, writing with his Tenable hat on, explains a method he calls “burglar alarms” as a way to deflate some APT hype. This method ostensibly provides a heads-up on attacks we haven’t seen before, and he uses it as yet another example of how to detect an APT. I know I’m not the sharpest tool in the shed, but I don’t see how identifying a set of events that should not happen, and looking for signs of their occurrence, is any different from the traditional blacklist model used by our favorite security punching bags – IDS and AV. The list of things that should not happen is infinite. Literally. Yes, you use common sense and model the most likely things that shouldn’t happen, but in the end the list is too long and unwieldy, especially given today’s complex technology stack. Even better is his close: The way to catch the APTs is to meet them with unexpected defenses that they’ve never heard of before. I’m just wondering if I can buy the unexpected defense plug-in for Nessus on Tenable’s website. – MR
To tell the web filtering truth – You’ve got to applaud Bruce Green, COO of M86, for coming out and telling the truth: Internet filtering won’t prevent people who are deliberately looking for inappropriate material from accessing blocked content. Several British ISPs are deploying content filtering on a massive scale to block ‘inappropriate material’ – obviously a euphemism for pr0n. M86, for those not aware, is the content security trifecta of 8e6, Marshal, and Finjan, with a sprinkle of Avinti on top, and has a long track record of web content filtering in the education space. The Internet filtering trial was based on M86’s technology and, like all filtering technologies, it works exceptionally well in controlled environments where users take no steps to avoid or conceal activity from the filters. But to Mr. Green’s point, those who are serious about their Internet ‘inappropriate material’ have dozens of ways to get around this type of filtering. What seems misleading about the study is the claim that it was “100% effective” at identifying ‘inappropriate material’ – catching what you were expecting is unimpressive. As I understand the trial, they were not blocking, only identifying signatures, which means no one had any reason to defeat the filters. At least M86 has no illusions about 100% success when they roll this out, and if nothing else they are going to get fantastic data on how to avoid Internet filtering. – AL
Leverage makes the rational security budget … more rational – Combine security skills, secure coding evangelism, a general disdain of most puffery, and a large dose of value economics, and you basically get Gunnar in a nutshell. He really nails it with this post about putting together a rational security budget. I suggest a similar model in the Pragmatic CSO, but the one thing Gunnar doesn’t factor in here (maybe because it’s a post and not a book) is the concept of leverage. I love the idea of thinking about security spend relative to IT spend, but the reality is a lot of the controls you’d need for each project can be used by the others. Thus leverage – pay once, and use across many. Remember, we have to work smarter since we aren’t getting more people or funding any time soon. So make sure leverage is your friend. – MR
Vapor Audits – I’ve been spending a lot more time lately focusing on cloud computing; partially because I think it’s so transformative that we are fools if we think it’s nothing new, and partially because it is a major driver for information-centric security. Even though we are still on the earliest fringes, cloud computing changes important security paradigms and methods of practice. Running a server in Amazon EC2? Want to hit it with a vulnerability scan? Oops – that’s against the terms of service. Okay, how about auditing which administrators touched your virtual server instance? Umm… not a supported feature. Audit, assessment, and assurance are major inhibitors to secure cloud computing adoption, which is why we all need to pay attention to the CloudAudit/A6 (Automated Audit, Assertion, Assessment, and Assurance API) group founded by Chris Hoff. If you care about cloud computing, you need to monitor or participate in this work. – RM
Learning HaXor skillz – Most of us are not l33t haXors, we are just trying to get through the day. The good news is there are lots of folks who have kung fu, and are willing to teach you what they know. The latest I stumbled upon is Mubix. He’s got a new site called Practical Exploitation, where the plan is to post some videos and other materials to teach the trade. Thus far there are two videos posted, one on leveraging msfconsole and the other on comparing a few tools for DNS enumeration. Good stuff here and bravo to Mubix. We need more resources like this. Hmmm, this could be a job for SecurosisTV… – MR
Posted at Wednesday 17th March 2010 7:00 am
By Adrian Lane
I ran into Slavik Markovich of Sentrigo, and David Maman of GreenSQL, on the vendor floor at the RSA Conference. I probably startled them with my negative demeanor – having just come from one vendor who seems to deliberately misunderstand preventative and detective controls, and another who thinks regular expression checks for content analysis are cutting edge. Still, we got to chat for a few minutes before rushing off to another product briefing. During that conversation it dawned on me that database activity monitoring vendors continue to refine both how they detect malicious database queries and how they deploy blocking. And not just these two vendors – others are improving as well.
For me, the interesting aspect is the detection methods being used – particularly how incoming SQL statements are analyzed. For blocking to be viable, the detection algorithms have to be precise, with a low rate of false positives (where have you heard that before?). While there are conceptual similarities between database blocking and traditional firewalls or WAF, the side effects of blocking are more severe and difficult to diagnose. That means people are far less tolerant of screw-ups because they are more costly, but the need to detect suspicious activity remains strong. Let’s take a look at some of the analytics being used today:
- Some tools block specific statements. For example, there is no need to monitor a ‘create view’ command coming from the web server. But blocking administrative use and alerting when remote administrative commands come into the database is useful for detection of problems.
- Some tools use metadata & attribute-based profiles. For example, I worked on a project once to protect student grades in a university database, and kill the connection if someone tried to alter the database contents between 6pm and 6am from an unapproved terminal. User, time of day, source application, affected data, location, and IP address are all attributes that can be checked to enforce authorized usage.
- Some tools use parameter signatures. The classic example is “1=1”, but there are many other common signatures for SQL injection, buffer overflow, and permission escalation attacks.
- Some tools use lexical analysis. This is one of the more interesting approaches to come along in the last couple of years. By examining the use of the SQL language, and the various structural options available with individual statements, we can detect anomalies. For example, there are many different options for the create table command on Oracle, but certain combinations of delimiters or symbols can indicate an attempt to confuse the statement parser or inject code. In essence you define the subset of the query language you will allow, along with suspicious variations.
- Some tools use behavior. For example, while any one query may have been appropriate, a series of specific queries indicates an attack. Or a specific database reference such as a user account lookup may be permissible, but attempting to select all customer accounts might not be. In some cases this means profiling typical user behavior, using statistical analysis to quantify unusual behavior, and blocking anything ‘odd’.
- Some tools use content signatures. For example, looking at the content of the variables or blobs being inserted into the database for PII, malware, or other types of anomalous content.
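To make a couple of these concrete, here is a toy Python sketch of a parameter signature check and an attribute-based profile check. Real DAM products parse SQL properly rather than running regexes over raw statements, and every rule below is made up for illustration:

```python
# Toy illustration of two detection methods described above: parameter
# signatures (regexes over the statement) and an attribute-based profile
# (operation type, time of day, and source). All rules here are invented
# examples, not anything a shipping DAM product actually uses.
import re
from datetime import time

INJECTION_SIGNATURES = [
    re.compile(r"\b(or|and)\s+1\s*=\s*1\b", re.IGNORECASE),  # classic 1=1
    re.compile(r";\s*drop\s+table", re.IGNORECASE),          # piggybacked DDL
    re.compile(r"--\s*$"),                                   # trailing comment
]

def violates_signature(statement):
    """Flag statements matching a known SQL injection signature."""
    return any(sig.search(statement) for sig in INJECTION_SIGNATURES)

def violates_profile(statement, source, when,
                     approved_sources=frozenset({"registrar-app"})):
    """Block writes from unapproved sources outside business hours,
    echoing the student-grades example above."""
    is_write = statement.lstrip().lower().startswith(
        ("update", "delete", "insert"))
    after_hours = when >= time(18, 0) or when < time(6, 0)
    return is_write and after_hours and source not in approved_sources

stmt = "SELECT * FROM grades WHERE student_id = '' OR 1=1 --"
print(violates_signature(stmt))  # True
print(violates_profile("UPDATE grades SET score = 100",
                       "dorm-pc", time(23, 30)))  # True
```

The point of the sketch is the shape of the checks, not the rules themselves; in practice each policy picks whichever analysis method fits it best, as noted below.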
All these analytical options work really well for one or two particular checks, but stink for other comparisons. No single method is best, so having multiple options lets you choose the best method for each policy.
Most of the monitoring solutions that employ blocking will be deployed similarly to a web application firewall: as a stand-alone proxy service in front of the database, an embedded proxy service that is installed on the database platform, or as an out-of-band monitor that kills suspicious database sessions. And all of them can be deployed to monitor or block. While the number of companies that use database activity blocking is miniscule, I expect this to grow as people gradually gain confidence with the tools in monitoring mode.
Some vendors employ two detection models, but it’s still pretty early, so I expect we will see multiple options provided in the same way that Data Loss Prevention (DLP) products do. What really surprises me is that the database vendors have not snapped up a couple of these smaller firms and incorporated their technologies directly into the databases. This would ease deployment, either as an option for the networking subsystem, or even as part of the SQL pre-processor. Given that a single database installation may support multiple internal and external web applications, it’s very dangerous to rely on applications to defend against SQL injection, or to place too much faith in the appropriateness of administrative commands reaching the database engine. ACLs are particularly suspect in virtualized and cloud environments.
Posted at Tuesday 16th March 2010 10:08 pm
In Part 1 of this series on Low Hanging Fruit: Quick Wins with DLP, we covered how important it is to get your process in place, and the two kinds of violations you should be immediately prepared to handle. Trust us – you will see violations once you turn your DLP tool on.
Today we’ll talk about the last two pieces of prep work before you actually flip the ‘on’ switch.
Prepare Your Directory Servers
One of the single most consistent problems with DLP deployments has nothing to do with DLP, and everything to do with the supporting directory (AD, LDAP, or whatever) infrastructure. Since with DLP we are concerned with user actions across networks, files, and systems (and on the network with multiple protocols), it’s important to know exactly who is committing all these violations. With a file or email it’s usually a straightforward process to identify the user based on their mail or network logon ID, but once you start monitoring anything else, such as web traffic, you need to correlate the user’s network (IP) address back to their name.
This correlation is built into nearly every DLP tool, which tracks the network addresses assigned to users as they log onto the network or a service.
The more difficult problem tends to be the business process; correlating these technical IDs back to real human beings. Many organizations fail to keep their directory servers current, and as a result it can be hard to find the physical body behind a login. It gets even harder if you need to figure out their business unit, manager, and so on.
For a quick win, we suggest you focus predominantly on making sure you can track most users back to their real-world identities. Ideally your directory will also include role information so you can filter DLP policy violations by business unit. Someone in HR or Legal is usually authorized for different sensitive information than people in IT or Customer Service, and if you have to figure all this out manually when a violation occurs, it will really hurt your efficiency later.
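The correlation chain described above looks roughly like this in code. This is only an illustrative sketch: the in-memory lease and directory records stand in for your real DHCP logs and AD/LDAP queries, and all the names and addresses are made up.

```python
# Minimal sketch of IP-to-identity correlation: resolve a violation's
# source IP to a logon ID via DHCP lease records, then to a person and
# business unit via a directory export. The literal records below are
# placeholders for real DHCP and AD/LDAP integrations.
from datetime import datetime

# (ip, lease_start, lease_end, logon_id) -- as pulled from DHCP logs
LEASES = [
    ("10.0.4.17", datetime(2010, 3, 15, 8, 0),
     datetime(2010, 3, 15, 18, 0), "jdoe"),
]

# logon_id -> directory record; in practice this comes from AD/LDAP
DIRECTORY = {
    "jdoe": {"name": "Jane Doe", "business_unit": "Finance",
             "manager": "rroe"},
}

def who_was_at(ip, when):
    """Return the directory record for whoever held this IP at this time."""
    for lease_ip, start, end, logon in LEASES:
        if lease_ip == ip and start <= when <= end:
            # Fall back to the bare logon ID if the directory is stale --
            # exactly the stale-directory problem described above.
            return DIRECTORY.get(logon,
                                 {"name": logon, "business_unit": "unknown"})
    return None

print(who_was_at("10.0.4.17", datetime(2010, 3, 15, 12, 0)))
```

Note the fallback branch: when the directory is out of date you end up with a logon ID and no human being behind it, which is why keeping the directory current matters more than any DLP feature.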
Integrate with Your Infrastructure
The last bit of preparation is to integrate with the important parts of your infrastructure. How you do this will vary a bit depending on your initial focus (endpoint, network, or discovery). Remember, this all comes after you integrate with your directory servers.
The easiest deployments are typically on the network side, since you can run in monitoring mode without much integration. This might not be your top priority, but adding what is essentially an out-of-band network sniffer is very straightforward. Most organizations connect their DLP monitor to their network gateway using a SPAN or mirror port. If you have multiple locations, you’ll probably need multiple DLP boxes, integrated using the built-in multi-system management features common to most DLP tools.
Most organizations also integrate a bit more directly with email, since it is particularly effective without being especially difficult. The store-and-forward nature of email, compared to other real-time protocols, makes many types of analysis and blocking easier. Many DLP tools include an embedded mail server (MTA, or Mail Transport Agent) which you can simply add as another hop in the email chain, just like you probably deployed your spam filter.
Endpoint rollouts are a little tougher because you must deploy an agent onto every monitored system. The best way to do this (after testing) is to use whatever software deployment tool you currently use to push out updates and new software.
Content discovery – scanning data at rest in storage – can be a bit tougher, depending on how many servers you need to scan and who manages them. For quick wins, look for centralized storage where you can start scanning remotely through a file share, as opposed to widely distributed systems where you have to manually obtain access or install an agent. This reduces the political overhead and you only need an authorized user account for the file share to start the process.
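To show how simple the starting point can be, here is a bare-bones discovery scan over a centrally mounted share. Real DLP discovery engines handle binary file formats, registered data matching, and scan throttling; the share path and the SSN-like pattern below are just placeholders for illustration.

```python
# Bare-bones content discovery sketch: walk a mounted file share and
# flag files containing an SSN-like pattern. The pattern and any paths
# you feed it are placeholder examples, not production policy.
import os
import re

SSN_LIKE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_share(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    if SSN_LIKE.search(f.read()):
                        hits.append(path)
            except OSError:
                continue  # unreadable file; skip it and move on
    return hits

# e.g. scan_share("/mnt/finance-share")
```

An authorized account and a mount point are all this needs, which is exactly why centralized shares are the quick win while distributed systems wait for agents.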
You’ll notice we haven’t talked about all the possible DLP integration points, but instead focused on the main ones to get you up and running as quickly as possible. To recap:
- For all deployments: Directory services (usually your Active Directory and DHCP servers).
- For network deployments: Network gateways and mail servers.
- For endpoint deployments: Software distribution tools.
- For discovery/storage deployments: File shares on the key storage repositories (you generally only need a username/password pair to connect).
Now that we are done with all the prep work, in our next post we’ll dig in and focus on what to do when you actually turn DLP on.
Posted at Monday 15th March 2010 10:44 pm
By Adrian Lane
On Monday March 1st, the Experienced Security Professionals Program (ESPP) was held at the RSA conference, gathering 100+ practitioners to discuss and debate a few topics. The morning session was on “The Changing Face of Cyber-crime”, and discussed the challenges facing law enforcement to prosecute electronic crimes, as well as some of the damage companies face when attackers steal data. As could be expected, the issue of breach disclosure came up, and of course several corporate representatives pulled out the tired argument of “protecting their company” as their reason to not disclose breaches. The FBI and US Department of Justice representatives on the panel referenced several examples where public firms have gone so far as to file an injunction against the FBI and other federal entities to stop investigating breaches. Yes, you read that correctly. Companies sued to stop the FBI from investigating.
And we wonder why cyber-attacks continue? It’s hard enough to catch these folks when all relevant data is available, so if you have victims intentionally stopping investigations and burying the evidence needed for prosecution, that seems like a pretty good way to ensure criminals will avoid any penalties, and to encourage attackers to continue their profitable pursuits at shareholder expense. The path of least resistance continues to get easier.
Let’s look past the murky grey area of breach disclosure for private information (PII) for a moment, and focus on the theft of intellectual property. If anything, there is even less disclosure of IP theft, thanks to BS arguments like “It will hurt the stock price,” “We have to protect the shareholders,” or “Our responsibility is to preserve shareholder value.” Those were the exact phrases I heard at the ESPP event, and they made my blood boil. All these statements are complete cop-outs, motivated by corporate officers’ desire to avoid embarrassment and potential losses of their bonuses, rather than to make sure shareholders have full and complete information on which to base investment decisions.
How does this impact stock price? If IP has been stolen and is being used by competitors, it’s reasonable to expect the company’s performance in the market will deteriorate over time. R&D advances come at significant cost and risk, and if that value is compromised, the shareholders eventually lose. Maybe it’s just me, but that seems like material information, and thus needs to be disclosed. In fact, failing to disclose this material information and provide shareholders sufficient information to understand investment risks runs counter to the fiscal responsibility corporate officers accept in exchange for their 7-figure paychecks. Many, like the SEC and members of Congress, argue that this is exactly the kind of information that is covered by the disclosure controls under Section 302 of Sarbanes-Oxley, which require companies to disclose risks to the business.
That said, I understand public companies will not disclose breaches of IP. It’s not going to happen. Despite my strong personal feelings that breach notification is essential to the overall integrity of global financial markets, companies will act in their own best interests over the short term. Looking beyond the embarrassment factor, potential brand impact, and competitive disadvantages, the single question that foils my idealistic goal of full disclosure is: “How does the company benefit from disclosure?”
That’s right – it’s not in the company’s own interest to disclose, and unless they can realize some benefit greater than the estimated loss of IP (Google’s Chinese PR stunt, anyone?), they will not disclose. Public companies need to act according to their own best interests. It’s not noble – in fact it’s entirely selfish – but it’s a fact. Unless there are potential regulatory losses for not disclosing, since the company will already suffer the losses from the lost IP, there is no upside to disclosing – disclosure probably only increases the losses. So we are at an impasse between what is right and what is realistic. How do we fix this? More legislation? A parade down Wall Street for those admitting IP theft? Financial incentives? Help a brother out here – how can we get IP breach disclosure, and get it now?
Posted at Monday 15th March 2010 2:09 pm
I love the week after RSA. Instead of being stressed to the point of cracking I’m basking in the glow of that euphoria you only experience after passing a major milestone in life.
Well, it lasted almost a full week – until I made the mistake of looking at my multi-page to-do list.
RSA went extremely well this year, and I think most of our pre-show predictions were on the money. Not that they were overly risky, but we got great feedback on the Securosis Guide to RSA 2010, and plan to repeat it next year. The Disaster Recovery Breakfast also went extremely well, with solid numbers and great conversation (thanks to Threatpost for co-sponsoring).
Now it’s back to business, and we need your help. We are currently running a couple concurrent research projects that could use your input.
For the first one, we are looking at the new dynamics of the endpoint protection/antivirus market. If you are interested in helping out, we are looking for customer references to talk about how your deployments are going. A big focus is on the second-layer players like Sophos, Kaspersky, and ESET; but we also want to talk to a few people with Symantec, McAfee, and Trend.
We are also looking into application and database encryption solutions – if you are using NuBridges, Thales, Voltage, SafeNet, RSA, etc. for application or database encryption support, please drop us a line.
Although we talk to a lot of you when you have questions or problems, you don’t tend to call us when things are running well. Most of the vendors supply us with some clients, but it’s important to balance them out with more independent references.
If you are up for a chat or an email interview, please let us know at firstname.lastname@example.org or one of our personal emails. All interviews are on deep background and never revealed to the outside world. Unless Jack Bauer or Chuck Norris shows up. We have exemptions for them in all our NDAs.
Er… I suppose I should get to this week’s summary now…
But only after we congratulate David Mortman and his wife on the birth of Jesse Jay Campbell-Mortman!
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Garry, in response to RSA Tomfoolery: APT is the Fastest Way to Identify Fools and Liars.
APT = China, and we (people who have serious jobs) can’t say bad things about China.
That pretty much covers it, yes?
Posted at Friday 12th March 2010 4:15 am
Two of the most common criticisms of DLP that come up in user discussions are a) its complexity and b) the fear of false positives. Security professionals worry that DLP is an expensive widget that will fail to deliver the expected value – turning into yet another black hole of productivity. But when used properly, DLP provides rapid assessment and identification of data security issues not available with any other technology.
I don’t mean to play down the real complexities you might encounter as you roll out a complete data protection program. Business use of information is itself complicated, and no tool designed to protect that data can simplify or mask the underlying business processes. However, there are steps you can take to obtain significant immediate value and security gains without blowing your productivity or wasting important resources.
Over the next few posts I’ll highlight the lowest hanging fruit for DLP, refined in conversations with hundreds of DLP users. These aren’t meant to cover the entire DLP process, but to show you how to get real and immediate wins before you move on to more complex policies and use cases.
Establish Your Process
Nearly every DLP reference I’ve talked with has discovered actionable offenses committed by employees as soon as they turn the tool on. Some of these require little more than contacting a business unit to change a bad process, but quite a few result in security guards escorting people out of the building, or even legal action. One of my favorite stories is the time the DLP vendor plugged in the tool for a lunchtime demonstration on the same day a senior executive decided to send proprietary information to a competitor. Needless to say, the vendor lost their hard drives that day, but they didn’t seem too unhappy.
Even if you aren’t planning on moving straight to enforcement mode, you need to put a process in place to manage the issues that will crop up once you activate your tool. The kinds of issues you need to figure out how to address in advance fall into two categories:
- Business Process Failures: Although you’ll likely manage most business process issues as you roll out your sustained deployment, the odds are high some will be of such high concern they will require immediate remediation. These are often compliance related.
- Egregious Employee Violations: Most employee-related issues can be dealt with as you gradually shift into enforcement mode, but as in the example above, you will encounter situations requiring immediate action.
In terms of process, I suggest two tracks based on the nature of the incident. Business process failures usually involve escalation within security or IT, possible involvement of compliance or risk management, and engagement with the business unit itself. You are less concerned with getting someone in trouble than stopping the problem.
Employee violations, due to their legal sensitivity, require a more formal process. Typically you’ll need to open an investigation and immediately escalate to management while engaging legal and human resources (since this might be a firing offense). Contingencies need to be established in case law enforcement is engaged, including plans to provide forensic evidence to law enforcement without having them walk out the door with your nice new DLP box and hard drives. Essentially you want to implement whatever process you already have in place for internal employee investigations and potential termination.
In our next post we’ll focus more on rolling out the tool, followed by how to configure it for those quick wins I keep teasing you with.
Posted at Thursday 11th March 2010 9:49 pm
By Adrian Lane
Tuesday, March 16th at 11am PST / 2pm EST, I will be presenting a webinar: “Understanding and Selecting a Database Assessment Solution” with Application Security, Inc. I’ll cover the basic value proposition of database assessment, several use cases, deployment models, and key technologies that differentiate each platform; and then go through a basic product evaluation process.
You can sign up for the webinar here. The applicability of database assessment is pretty broad, so I’ll cover as much as I can in 30 minutes. If I gloss over any areas you are especially interested in, we will have 10 minutes for Q&A. Or you can send questions in ahead of time and I will try to address them within the slides, or you can submit a question in the GoToMeeting chat facility during the presentation.
Posted at Thursday 11th March 2010 7:00 pm
By Adrian Lane
Patching is a critical security operation for databases, just like for any other application. The vast majority of security concerns and logic flaws within the database will be addressed by the database vendor. While the security and IT communities are made aware of critical security flaws in databases, and may even understand the exploits, the details of the fix are never made public except for open source databases. That means the vendor is your only option for fixes and workarounds. Most of you will not be monitoring CVE notifications or penetration testing new versions of the database as they are released. Even if you have the in-house expertise to do so, very few people have the time to conduct serious investigations. Database vendors have dedicated security teams to analyze attacks against the database, and small firms must leverage their expertise.
Project Quant for Patch Management was designed to break down patch management into essential, discrete functions, and assign cost-based metrics to each task in order to provide quantitative measurements of the patch management process. To achieve that goal, we needed to define a patch management process on which to build the metrics model. For database patch management, you could choose to follow that process and feel confident that it addresses all relevant aspects of patching a database system. However, that process is far too comprehensive and involved for a series on database security fundamentals.
As this series is designed more for small and mid-market practitioners, who generally lack the time and tools necessary for more thorough processes, we are going to avoid the depth of coverage major enterprises require. I will follow our basic Quant model, but use a subset of the process defined in the original Project Quant series. Further, I will not assume that you have any resources in place when you begin this effort – we will define a patching process from scratch.
- Establish Test Environment: Testing a patch or major database revision prior to deployment is not optional. I know some of you roll patches out and then “see what happens”, rolling back when problems are found. This is simply not a viable way to proceed in a production environment. It’s better to patch less often than to deploy without functional sanity and regression tests. To start, set up a small test database environment, including a sample data set and test cases. This can be anything from leveraging quality assurance tests, to taking database snapshots and replaying network activity against the database to simulate real loads, to using a virtual copy of the database and running a few basic reports. Whatever you choose, make sure you have set aside a test environment, tests, and tools as needed to perform basic certification. You can even leverage development teams to help define and run the tests if you have those groups in house.
- Acquire Patch: Odds are, in a small IT operation, you only need to worry about one or perhaps two types of databases. That means it is relatively easy to sign up with the database vendors to get alerts when patches are going to be available. Vendors like Oracle have predictable patch release cycles, which makes it way easier to plan ahead, and allocate time and resources to patching. Review the description posted prior to patch availability. Once the patch is available, download and save a copy outside the test area so it is safely archived. Review the installation instructions so you understand the complexities of the process and can allocate the appropriate amount of time.
- Test & Certify: A great thing about database patches is that their release notes describe which functional areas of the database are being altered, which helps focus testing. Install the patch, re-configure if necessary, and restart the database. Select the test scripts that cover patched database functions, and check with quality assurance groups to see if there are new tests available, or automation scripts that go along with them. Import a sample data set and run the tests. Review the results. If your company has a formal acceptance policy, share the results; otherwise move on to the next step. If you encounter a failure, determine whether the cause was the patch or the test environment, and retest if needed. Most small & mid-sized organizations respond to patch problems by filing a bug report with the vendor, and work stops. If the patch addresses a serious loss of functionality, you may be able to escalate the issue with the vendor. Otherwise you will probably wait for the next patch to address the issue.
- Deploy & Certify: Following the same steps as the testing phase, install the patch, reconfigure, and restart the database as needed. Your ability to test production databases for functionality will be limited, so I recommend running one or two critical functions to ensure they are operational, or having your internal users exercise some database functions as a sanity check that everything is working.
- Clean up & Document: Trust me on this – anything special you did for the installation of the patch will be forgotten the next time you need those details. Anything you suspect may be an issue in the future will be. Save the installation downloads and documentation provided by the vendor so you can refer back to them, and keep a backup in case you need to fall back to this revision. You may even want to save a copy of your test results, which is handy for backtracking future problems.
I know this cycle looks simple – it is intended to be. I am surprised both by how many people are unwilling to regularly patch database environments for fear of possible side-effects, and by how disorganized patching efforts are when people do patch databases. A lot of that has to do with lack of process and established testing; most DBAs have crystal-clear memories of cleaning up after bad patch deployments, along with a determination to avoid repeating that particular nightmare. There are plenty of new database administrators out there who struggle with patching, so this simple process is intended to help them figure out a reasonable process and avoid the standard pitfalls. In most cases the initial installation and testing can be completed in an afternoon, with the actual rollout dependent on the number of databases and your ability to take them offline for maintenance. If you have not gone through this cycle before it will be a little awkward the first time, but it gets easier each time you go through the process. The key is the availability of a proper test environment, with sample functional and regression tests. Without a suitable test environment, testing fails and patching blows up.
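The five phases above boil down to a halt-on-failure pipeline. Here is a minimal sketch of that flow – the phase names follow the list, but the check callables are hypothetical stand-ins for your real environment setup, vendor downloads, and regression suites:

```python
# Minimal sketch of the five-phase patch cycle described above.
# The check callables are hypothetical stand-ins; in practice each
# would shell out to your test environment, vendor download, and
# regression scripts.

def run_patch_cycle(phases):
    """Run each phase in order; halt and report at the first failure."""
    for name, check in phases:
        if not check():
            return f"halted at: {name}"
    return "patch certified and deployed"

cycle = [
    ("establish test environment", lambda: True),
    ("acquire patch",              lambda: True),
    ("test & certify",             lambda: True),
    ("deploy & certify",           lambda: True),
    ("clean up & document",        lambda: True),
]

print(run_patch_cycle(cycle))  # patch certified and deployed
```

The point of the structure is the early exit: a failed certification stops the rollout before production is touched, which is exactly the discipline the “patch and see what happens” crowd skips.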
Posted at Wednesday 10th March 2010 5:49 pm
By Mike Rothman
To stir the pot a bit before the RSA Conference, I did a FireStarter wondering out loud if social media would ever replace big industry conferences. Between the comments and my experiences last week, I’d say no. Though I can say social media provides the opportunity to make business acquaintances into friends, and lets loudmouths like Rich, Adrian, and myself make a living having an opinion (often 3 or 4 between us).
So I figured this week, I’d do a Top 10 list of things I can’t do on Twitter, which will keep me going to the RSA Conference as long as they keep letting me in.
- This is your life – Where else can I see 3 CEOs who fired me in one room (the AGC conference)? Thankfully I left my ice pick in the hotel room that morning.
- Everybody knows your name – Walk into the W Hotel after 9pm, and if you’ve been in the business more than a week, odds are you’ll see plenty of people you know.
- Trend spotting – As we expected, there was lots of APT puffery at the show, but I also saw lots of activity on wireless security – that was mildly surprising. And group conversations provided additional unexpected perspectives. Can’t do that on Twitter.
- Evasive maneuvers – To save some coin, I don’t stay in the fancy hotels. But that means you have to run the panhandler gauntlet between the parties and the hotel. I was a bit out of practice, but escaped largely unscathed.
- Renaissance security folks – It seems lots of security folks are pretty adept at some useful skills. Like procuring entire bottles of top shelf liquor at parties. Yes, very useful indeed.
- Seeing the sights – I know Shimmy doesn’t like booth babes, but that’s his problem. I thought I took a wrong turn when I got to the Barracuda party and ended up at the Gold Club, though I was happy I had a stack of $1s in my pocket.
- Making new friends – The fine folks at SafeNet held a book signing for The Pragmatic CSO at the show. I got to meet lots of folks and they even got to take home copies. Can’t do that on Twitter either.
- Splinter conferences – Given the centralization of people that go to RSA, a lot of alternative gatherings happen during RSA week. Whether it’s BSides, Cloud Security Alliance, Metricon, AGC, or others, most folks have alternatives to RSA Conference panel staples.
- Recovery Breakfast – Once again, we held our Disaster Recovery Breakfast and it was the place to be on Thursday morning. A who’s who of security royalty passed through to enjoy the coffee, Bloody Marys, and hot tasty breakfast. Thanks to Threatpost for co-sponsoring with us.
- Elfin underwear – Where else can your business partner pull down his pants in front of 500 people and not get put in the slammer? That’s right, RSA. Check it out – it was really funny.
So in a nutshell, from an educational standpoint I’m not sure spending a week at the RSA Conference makes sense for most practitioners. But from a networking and fun perspective, it remains the best week of the year. And thankfully I have 12 months to dry out and rest my liver for next year’s show.
Photo credit: “Frank Chu Bsides SF” originally uploaded by my pal St0rmz
Incite 4 U
Ah, digging out from under the RSA mayhem is always fun. There was lots to see, many meaningless announcements, and plenty of shiny objects. Here is a little smattering of stuff that happened at the show, as well as a few goodies not there.
AP(ressure)T Explained – As Rich pointed out, APT was in full swing last week at RSA, and Richard Bejtlich has been calling out folks with extreme malice for this kind of behavior – which we all think is awesome. But to really understand the idiocy, you need to relate it to something you can understand. Which is why I absolutely loved Richard’s analogy of how martial arts folks dealt with a new technique based on pressure points. Read this post a few times and it will click. Folks either jump on the bandwagon or say the bandwagon is stupid. Not many realize something new and novel is happening and act accordingly. – MR
Patch Tuesday, Exploit Monday – You have to feel for the guys in the Microsoft security center. They line up their latest patch set, and some bad guys blow it by attacking unpatched vulnerabilities before Microsoft can include them in the latest release. I’m a big fan of the Patch Tuesday cycle, but that means anything released on “Exploit Wednesday” or even close to Patch Tuesday potentially has a month to run before Microsoft can fix it. MS is pretty good at releasing out of band patches if something is being widely exploited, and they’re the ones providing the warning, but it makes me long for the days when an 0day was so rare as to be nearly mythical. This latest attack hits IE 6 and 7 on various platforms, and you can mitigate with a content filtering gateway or an alternative browser, or by following some suggestions in the linked article (setting IE security zone settings to High). – RM
Creating the Insecurity Index – If we know that your A/V and anti-malware only catch 20% of malicious code, or your firewall only blocks 20%, and your WAF only blocks 60% of application flaws, and so on, can we create some meaningful metrics on application security FAIL? Kind of a Mean Time Between Failure analysis for IT? I got to thinking about this when talking to Kelly Jackson Higgins at RSA about her post on Dark Reading regarding application testing, which found that 60% of applications they tested remained vulnerable. To me this is not a surprise at all, given that most adopt a security model which surrounds applications with add-on services and appliances to protect the application from the nasty attackers and viruses, rather than fixing the code itself. For most large organizations the amount of work necessary to fix their crappy code would be monumental, and a rewrite would mean years of development time. I have never fully bought into that idea, and given that most open source projects are very large and still manage to fix flaws within the code, Veracode’s report does support the idea that we should dedicate more resources to development. Survey results do expose how much organizations rely upon internally developed web-based applications and, quite frankly, how bad most of them are in terms of security. Still, I wonder how people will react to this data, and whether it will change the amount of in-house development, or how they develop. – AL
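A back-of-the-envelope sketch of that “insecurity index” idea – assuming, generously, that each layer fails independently of the others (real controls overlap, so treat this as illustrative arithmetic, not a real metric):

```python
# Illustrative "insecurity index" math: combine per-layer catch rates
# into one number, under the (strong) assumption that layers fail
# independently. The rates are the hypothetical figures from the text.

catch_rates = {"A/V": 0.20, "firewall": 0.20, "WAF": 0.60}

miss = 1.0
for rate in catch_rates.values():
    miss *= 1.0 - rate            # chance an attack slips past this layer

print(f"combined catch rate: {1 - miss:.1%}")  # combined catch rate: 74.4%
```

Even three mediocre layers look decent on paper, which is exactly why the independence assumption deserves scrutiny before anyone calls this a meaningful metric.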
Finding Value in Site Certifications – We’ve all made fun of these $99 web site certifications that ‘prove’ security. Most aren’t any better than ScanlessPCI. But that isn’t stopping all sorts of folks from trying to get at the market that ScanAlert, now McAfee (through its HackerSafe offering), pioneered. You had Qualys, Dasient, and VeriSign talking about their new programs. But a word to the wise: make sure your lawyers are all over whatever claims of security go along with the marketing puffery of these services. ControlScan found this out the hard way. Good job by Raf digging into the settlement and what it means. – MR
Selling the Conference – One of the under-appreciated aspects of professional conferences is employee retention and motivation. We talk about the need for education, which can certainly be a benefit, but it’s secondary IMHO. As a VP of Engineering, getting budget to send employees to conferences, or finding local conferences that were free, was a priority. As opposed to this poor sap mentioned on the InfoSec Leaders blog, my folks never really had to make the case to go to a conference, since I’d make it for them. Whatever productivity was lost during the sessions was more than made up for in the subsequent days (yes, days) after the conference. My team members usually learned something from the lectures – perhaps a new technology, or in some cases what not to do – and they met new peers outside the sessions. In all cases I saw enthusiasm and renewed interest in their careers. Maybe it is the change of scenery, maybe it is thinking about new things related to work but outside the office, that stirs creativity and interest. I am not sure exactly how it works, but productivity and retention were motivators enough to send people. I know times are tough, and we all know that some conferences are very expensive, but I have seen the benefits conferences provide above and beyond those ‘team building’ miniature golf events and the like that HR is so fond of organizing. Give it a try and see for yourself! – AL
Pr0n Stick Is on the Way – I realize that 90% of the time my mind is in the gutter. I’m not sure which gutter at any given time, but, well, you know… So when I see IronKey and TrueCrypt partnering up for a trusted OS on a stick, and then Check Point announcing something similar (with more enterprise control), my first thought is how this is a boon to all those folks trying to peruse sites they shouldn’t be from machines they don’t own. But that’s just me. The use case for this is pretty compelling, especially for folks who embrace desktop virtualization and have folks accessing sensitive data in sleazy places. Which kind of proves a hypothesis I’ve been playing with: security innovation will be driven by things that can be sold to the DoD or that make pr0n more accessible. That’s a lot different than the last 10 years. – MR
Numbers Good. Nom Nom Nom – I like numbers. I mean I really like good information security numbers, and they are few and far between. I’m not talking the risk/threat garbage that’s mostly imaginary, but hard data to help support our security decisions. That’s why I think it’s so great that Verizon is releasing their Incident Sharing Framework that I even joined their advisory board (a non-compensated, no-conflicts outside position). I’ve written and spoken about the Data Breach Investigations Report before, as well as Trustwave’s similar report and the Dataloss Database. We do a terrible job in security of sharing information, because we’re worried about the consequences of breaches becoming public. But without that sharing, we don’t have a chance of properly orienting our security controls. Tools like these reports and the Incident Sharing Framework help us gather real-world information without exposing individual organizations to the consequences of going public. – RM
The First SIEM Domino Falls – Finally it seems the long awaited consolidation in the SIEM and Log Management business is starting. Many of us have been calling for this for a while, but the only deals to date were a couple years ago (Cisco/Protego, RSA/Network Intelligence) or strictly technology deals (TripWire/ActiveWorx). But now TrustWave continues its bargain basement shopping spree by acquiring Intellitactics. At first glance you say, “Intellitactics? Really?” but then after looking at it, you realize TrustWave just needs “something” in the space. They do a lot of PCI work, so having Requirement 10 in a box (or packaged as a service) isn’t a bad thing. Truth be told, most companies aren’t pushing on these solutions more than gathering some logs and pumping out a report for the auditor. Good enough probably is. Also factor in the price tag (a reported $20 million in stock), and you don’t have to sell too much to make the deal pay. But let’s be clear: there will be a number of transactions in the space this year, at least if the conversations I had at RSA about potential targets were any indication. – MR
You Want Fries with That? – I am not quite clear on Anton’s motivation for “An Analyst in the box, Part II”. And it’s not clear who this rant is aimed at. Yeah, it’s tongue in cheek, but who in the vendor community has not gone through this before? I have seen customers ask for technologies that they knew we could not provide. Sometimes it was to see if we were really reading the RFP. Sometimes it was to see if we would lie to them. Sometimes it was to see if we would push back and tell them it was not important. Sometimes they would ask about technology purely because they had experience with it or were interested in it, yet it was completely unrelated to the project that motivated their research. Sometimes customers ask because they are whiny and want someone to commiserate with them on how hard their job is, and think every vendor is obliged to be a good listener in order to earn the business. I know when I evaluated products, for the prices some vendors charged I expected their appliances to keep me secure – and also to wash, fold, and iron my laundry. Whatever the reason may be, “Analyst in the box” is just part of the game. Suck it up and cash the check. – AL
Posted at Wednesday 10th March 2010 6:59 am
By Mike Rothman
As I’ve been digesting all I saw and heard last week at the RSA show, the major topic of wireless security re-emerged with a vengeance. To be honest, wireless security had kind of fallen off my radar for a while. Between most of the independent folks being acquired (both on the wireless security and wireless infrastructure sides) and lots of other shiny objects, there just wasn’t that much to worry about.
We all know retailers remained worried (thanks, Uncle TJX!) and we saw lots of folks looking to segregate guest access from their branch networks when offering wireless to customers or guests. But WEP was dead and buried (right?) and WPA2 seemed reasonably stable. What was left to worry about?
As with everything else, at some point folks realized that managing all these overlay networks and maintaining security is a pain in the butt. So the vendors inevitably get around to collapsing the networks and providing better management – which is what we saw at RSA.
Cisco puffed its chest out a bit and announced its Security Without Borders strategy, which sounds like someone over there overdosed on some Jack Welch books (remember borderlessness?). Basically they are finally integrating their disparate security devices, pushing the IronPort and ASA boxes to talk to each other, and adding some stuff to the TrustSec architecture.
In concept, being able to enable business users to access information from any device and any location with a high degree of ease and security sounds great. But the devil is in the details, which makes this feel a lot like the “self-defending network.” Great idea, not so hot on delivery. So if you have Cisco everywhere and can be patient, the pieces are there. But if you work in a heterogeneous world or have problems today, then this is more slideware from Cisco.
On the other side of the coin, you have the UTM vendors expanding from their adjacent markets. Both Fortinet and Astaro made similar announcements about entering the wireless infrastructure market. Given existing strength in the retail market, it makes sense for UTM vendors to introduce thin access points, moving management intelligence to (you guessed it) their UTM gateways.
Introducing and managing wireless security policy from an enterprise perspective is a no-brainer (rogue access points die die die), though there isn’t much new here. The wireless infrastructure folks have been doing this for a while (at a cost, of course). The real barrier to success here isn’t technology, it’s politics. Most network folks like to buy gear from network companies, so will it be the network team or the security team defining the next wave of wireless infrastructure roll-out?
My bet is on the network team, which means “secure wireless” will prevail eventually. I suspect everyone understands security must be a fundamental part of networks, data centers, endpoints, and applications, but that’s not going to happen any time soon. Rugged or not. This provides an opening for companies like Fortinet and Astaro. But to be clear, they have to understand they are selling to different customers, where they have very little history or credibility.
And since the security market still consists mostly of lemmings, I suspect you’ll see a bunch more wireless security activity over the next few months as competitors look to catch up with Cisco’s slideware.
Posted at Tuesday 9th March 2010 10:00 pm
By Mike Rothman
We’re happy to post the next SecurosisTV episode, in which yours truly goes through the Low Hanging Fruit of Endpoint Security. This is a pretty high-level view of the 7 different tactics (discussed in much more detail in the post), intended to give you a quick (6 minute) perspective on how to improve endpoint security posture with minimal effort.
Direct Link: http://blip.tv/file/3281010
See it on YouTube: http://www.youtube.com/watch?v=jUIwjc5jwN8
Yes, we know embedding a video is not NoScript friendly, so for each video we will also include a direct link to the page on blip.tv and on YouTube. We just figure most of you are as lazy as we are, and will appreciate not having to leave our site.
We’re also learning a lot about video production with each episode we do. Any comments you have on the video would be much appreciated: whether it’s valuable, what we can do to improve the quality (besides getting new talent), and any other feedback you may have.
Posted at Tuesday 9th March 2010 7:57 pm
It is better to stay silent and let people think you are an idiot than to open your mouth and remove all doubt.
Although we expected APT to be the threat du jour at RSA, I have to admit even I was astounded at the outlandish displays of idiocy and outright deception among pundits and the vendor community.
Now, let’s give credit where credit is due – only a minority of vendors hopped on the APT bandwagon. This post isn’t meant to be a diatribe against the entire product community, only those few who couldn’t help themselves in the race to the bottom.
I’m not claiming to be an expert in APT, but at least I’ve worked with organizations struggling with the problem (starting a few years ago when I began to get data security calls related to the problems of China-related data loss). The vast majority of the real experts I’ve met on the topic (those with direct experience) can’t really talk about it in public, but as I’ve mentioned before I’d sure as heck read Richard Bejtlich if you have any interest in the topic. I also make a huge personal effort to validate what little I say with those experts.
Most of the APT references I saw at RSA were ridiculously bad – vendors spouting off about how their product would have blocked this or that piece of malware, made public after the fact. So I assume any of them talking about APT were either deceptive, uninformed, or stupid.
All this was summarized in my head by one marketing person who mentioned they were planning on talking about “preventing” APT (it wasn’t in their materials yet) because they could block a certain kind of outbound traffic. I explained that APT isn’t merely the “Aurora” attack, but rather the concerted espionage effort of an entire country, and they responded, “oh – well our CEO heard about it and thought it was the next big thing, so we should start marketing on it.”
And that, my friends, is all you need to know about (certain) vendors and APT.
Posted at Monday 8th March 2010 5:44 pm
By Mike Rothman
Rich, Mike, and Adrian keep pretty busy schedules at RSA each year, so we are likely to be quiet on the blog this week. If you happen to be at the show, here are the speaking sessions and other appearances we’ll be doing throughout the week. Hopefully you’ll come up and say “Hi.” Rich and Adrian don’t bite.
- STAR-106: Security Groundhog Day – Third Time’s a Charm – Mike and Rich (Tuesday, March 2 @ 1pm)
- EXP-108: Winnovation – Security Zen through Disruptive Innovation and Cloud Computing – Rich and Chris Hoff (Tuesday, March 2 @ 3:40pm)
- END-203: How to Expedite Patching in the Enterprise? A View from the Trenches – Rich (Wednesday, March 3 @ 10:40 AM)
- P2P-304A: Security Posture: Wading Through the Hype… – Mike (Thursday, March 4 @ 1pm)
- DAS-403: Securing Enterprise Databases – Adrian (Friday, March 5 @ 11:20am)
- America’s Growth Capital Conference: Mike will be roaming around the AGC conference for portions of Monday. The event is taking place at the Westin San Francisco on Market Street. You need an invite to this one.
- RSA Conference Experienced Security Professionals Program: All of us will be at this event (you need to have pre-registered) at the Moscone on Monday as well.
- Security Blogger Meet Up: Securosis will be at the 3rd annual Security Blogger Meet Up at the classified location. You need to have a blog and be pre-registered to get in.
- Securosis and Threatpost Disaster Recovery Breakfast: Once again this year Securosis will be hosting the Disaster Recovery Breakfast on Thursday, March 4 between 8 and 11. RSVP and enjoy a nice quiet breakfast with plenty of food, coffee, recovery items (aspirin & Tums), and even the hair of the dog for those of you not quite ready to sober up.
- PechaKucha (PK) Happy Hour: Rich will be presenting at the PK Happy Hour on Thursday, March 4 between 5 and 6:30 pm in the Crypto Commons. See if he can get through 20 slides in about 6 1/2 minutes. Fat chance, but Rich is going to try.
Posted at Monday 1st March 2010 4:00 pm