Securosis

Research

Incite 5/16/2012: Moving up Day

Wasn’t it just yesterday that we put XX1 on the bus for her first day of kindergarten? I guess if yesterday was August of 2006, that would be correct. Man, six years have gone by fast! On Friday she moves up to Middle School. As we watched the annual Field Day festivities with all the kids dressed up in their countries’ garb yesterday, the kindergartners seemed so small. And they are. Six years doesn’t seem so long, but against the growth of such a child it’s a lifetime. I have to say I’m proud of my oldest girl. She did very well in elementary school, and is ready to tackle 7 different teachers and a full boat of advanced classes next year. Of course there will be stumbles and challenges and other learning experiences. As my army buddies say, “she has an opportunity to excel.” Despite our desire to make time slow down, it’s not going to happen. She’s ready for the next set of experiences and to continue on her path. Whether we like it or not. Whether we are ready or not. We have heard story after story about how difficult middle school is, especially for girls. Between raging hormones, mean girls, and a much heavier course load, it requires a lot of adjustment. For all of us. It seems XX1 will have to learn organizational skills and focus a lot earlier than I had to. I kind of coasted until I got to college, and then took a direct shot upside the head from the clue bat, when I learned what it took to thrive in a much more competitive environment. She needs to learn that achievement is directly correlated to work and decide how hard she wants to work. She will have to learn to deal with difficult people as well. Too bad it’s not only in middle school that she’ll come across idiots. We all have to learn these lessons at some point. But that’s tomorrow’s problem. I don’t want to think about that stuff right now. Of course life marches on. That’s the way it’s supposed to be. As she goes through the ceremony on Friday I will be one proud father. 
I hope she’s as proud of herself as we are of her. I will celebrate the passing of one milestone without thinking about the next. I appreciate the person she has become, with a healthy respect for where we’ve been. From first holding her right after her birth, to putting her on that kindergarten bus, to packing her off for sleepaway camp, to now watching her leave elementary school, and everything in between. Steve Miller was right: Time keeps on slippin’ into the future… Every single day. –Mike

Photo credits: “Me on graduation day” originally uploaded by judyboo

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Understanding and Selecting Data Masking: Introduction
Vulnerability Management Evolution: Enterprise Features and Integration
Vulnerability Management Evolution: Evolution or Revolution?
Understanding and Selecting DSP: Use Cases

Incite 4 U

Don’t fear the Boobs: About 15+ years ago I was working as a paramedic in New Jersey and volunteered with the local fire department. This was a temporary sojourn back east because I was making $6.25 an hour as a paramedic in Colorado, but could pull down $16 an hour in Jersey. Something about “hazard pay”. Anyway, this particular department had a culture that was both racist and sexist. They refused to authorize ‘females’ to full firefighter status due to concerns that a 120-pound woman who ran marathons couldn’t haul their 300-pound asses out of a fire. (I figured it wouldn’t be a problem after enough of the fat melted off.) I won’t lie – I have engaged in locker room talk on more than one occasion, and I recognize that men and women really are different, but I simply don’t understand sexism in the workplace. Jack Daniels wrote a great rant (as usual) on the recent reemergence of sexism and its expression at conferences.
There’s no place for this in IT, certainly no place for it in security, and I think it’s largely a lot of dudes with very little self-confidence who are afraid of women. Get over it, lose the ‘bro’ culture, and dump the booth babes. All it reflects is weakness. – RM

Firewall dead? Meh. Every couple of months somebody proclaims some established control dead. This week’s transgressor is Roger Grimes, who tells us why you don’t need a firewall. Come on, man! Evidently the only attack firewalls can block is buffer overflows, so they are destined for the trash bin. Give me a break. And most traffic comes through port 80 or 443 – but evidently this NGFW thing, with its application awareness, is news to Roger. He points out that firewalls are hard to manage, which is true. And that developers and other folks always push to open up this port or that, basically obviating the security model. That’s not wrong either. But we have been through this before. As Corman says, we never retire controls. Nor should we, as Wendy points out rather effectively. Jody Brazil of Firemon piles on with more reasons it’s a bad idea to kill your firewall. I suspect Grimes gets paid per page view, so maybe he’ll be able to buy a few extra beers this week. But that doesn’t make him right. – MR

Tokens <> Tokenization: MasterCard announced their PayPass Wallet Services for mobile devices, an “app designed to compete with PayPal and Google” wallets, or at least that is how the press is describing it. I think this is a pure marketing move to make sure app developers don’t forget MasterCard has a horse in this race. Technically, MasterCard is not offering a wallet


Incite 2/9/2012: Swimming with Sharks

Whatever happened to the sit-down family dinner? Maybe it’s just me, but growing up, the only time I really experienced it was watching TV. My Mom worked retail pharmacy, so normally I was pulling something out of the freezer to warm up for my kid brother and myself. And nowadays the only time we sit down for dinner is when we go out to a restaurant. It’s not that we don’t want a sit-down dinner. But we are always carting the kids from one activity to the next, badgering someone to do their homework or get ahead on a project, or maybe letting them play with their friends every so often. We don’t normally stop before 9pm, and that’s on a good day.

It is what it is, but I wonder what the impact will be in terms of knowledge transfer. You hear all those high achievers talking about how their parents talked about current events or business or social issues around the dinner table, and that’s how many life lessons were taught. The Boss and I tend to have more one-on-one discussions with the kids about their challenges and interests. I’m all for allowing kids to focus on what they enjoy, but I want to expose them to some of the things I’m passionate about. That’s why we got tickets to the Falcons. By hook or by crook, these kids will be football fans.

And I was a little skeptical when the Boss started DVRing “Shark Tank” a few weeks ago. A bunch of rich folks (the ‘sharks’) evaluating business ideas and possibly even investing their own capital. The reality TV aspect made me believe it would be overdramatized and they’d be overly harsh just for ratings. But I gave it a chance because one of the sharks, a guy named Robert Herjavec, was a reseller for CipherTrust back in the day. So I got to tell the kids stories about that crazy Canadian. Truth be told, I was wrong about the show. It was very entertaining, and more importantly it provides a teaching moment for all of us. As you can imagine, I have opinions about pretty much everything.
It’s a lot of fun to discuss each of the business ideas, critique their valuations, pick apart their distribution strategies, and ultimately decide whether each business is a good idea. The best part is the kids got engaged watching. At least for 15-20 minutes, anyway. They are starting to ask good questions. The Boss is now coming up with business ideas almost daily. XX2 seems to have an interest as well.

This is a great opportunity to start talking to my family about my other passion: building businesses. Who knows what my kids will end up being or doing? But for them to see entrepreneurs, some with decent ideas, trying to expand their businesses with the passion that only entrepreneurs can muster is terrific. It gives me an opportunity to explain the concepts of raising capital, marketing, selling, distribution, manufacturing, etc. – and they have some concept of what I’m talking about. Maybe they’ll even retain some of this information and pursue some kind of entrepreneurial path. Like their father, their father’s father, and their father’s father’s father before them. Nothing would make me happier. –Mike

Photo credits: “Amanda Steinstein swims with the sharks!” originally uploaded by feastoffun.com

Heavy Research

We’re back at work on a variety of our blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Vulnerability Management Evolution: Enterprise Features and Integration
Vulnerability Management Evolution: Evolution or Revolution?
Watching the Watchers (Privileged User Management): Clouds Rolling In
Watching the Watchers (Privileged User Management): Integration
Understanding and Selecting DSP: Use Cases

Incite 4 U

Don’t leave home without your security umbrella: As the plumber of Securosis, I get to cover the sexy businesses like AV and perimeter firewalls. Thankfully the NGFW movement has made these boxes a bit more interesting, but let’s be candid – folks want to talk about cloud and data protection, not the plumbing.
But as Wendy points out, everyone likes to poke fun at these age-old controls, but it would be a bad idea to retire them – they still block the low-hanging fruit. I love her analogy of an umbrella in a hurricane. You don’t throw out the umbrella just because it can’t keep you dry in a hurricane. Believe it or not, there are still a lot of successful attackers out there who don’t have to drop zero-day attacks to achieve their missions. These “light drizzle” attackers can be stymied even by basic controls. Obviously you don’t stop with the low bar, but you can’t ignore it either. – MR

Build it in or test it out: Part 4 of Fergal Glynn’s A CISO’s Guide to Application Security is live at Threatpost. In this post he discusses technology options for security testing; but the series has been a bit of a disappointment – taking a “test it out” approach to application security rather than “build it in”. With the prevalence of web-based apps today, CISOs are more interested in build techniques such as Address Space Layout Randomization (ASLR), which makes many forms of code injection attacks much harder, than in obfuscation techniques that make reverse engineering distributed code more difficult. Besides, the good hackers don’t really work from source, do they? I’d also suggest including security regression tests to verify that old security defects are not re-introduced – you want to prevent old risks from getting back into the code just as much as “Prevent(ing) the introduction of new risks”. I suspect that Glynn’s focus on measurable reduction of threats/risks/vulnerabilities underserves one of the most effective tactics for application security: threat modeling. We can’t quantify the bugs we don’t have thanks to successful prevention, but you should strive for improvement earlier in the development lifecycle. The series has tended to focus on tools
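The security regression idea above is simple to put into practice: once a defect is fixed, pin it with a test so it cannot silently return. Here is a minimal sketch; the sanitize() function and the XSS payload are hypothetical stand-ins for whatever defect was actually fixed in your codebase, not anything from Glynn's series.

```python
# A minimal security regression test: lock in a previously fixed defect.
# sanitize() and the payload are illustrative assumptions only.
import html

def sanitize(user_input: str) -> str:
    # The "fix" under test: encode HTML metacharacters before rendering.
    return html.escape(user_input, quote=True)

def test_xss_defect_stays_fixed():
    # Payload taken from the original (hypothetical) bug report.
    payload = '<script>alert(1)</script>'
    out = sanitize(payload)
    assert '<script>' not in out          # raw tag must never survive
    assert '&lt;script&gt;' in out        # it must come out encoded

test_xss_defect_stays_fixed()
```

Run under any test runner (pytest, unittest) as part of every build, so re-introducing the old bug breaks the build immediately.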


Incite 5/2/2012: Refi Madness

It all started with an innocent call from my mortgage broker. He started with, “What if I could shave 75 basis points off your note, with no cost to you?” As you might have noticed, I’m a skeptical type of fellow. I asked, “What’s the catch?” He laughed and said, “No catch, I can get you from 4.25% to 3.5% and I’ll pay the costs.” I responded again, “There must be a catch. What am I missing?” He made some wisecrack about Groundhog Day and then told me there really is no catch. I can save a couple hundred bucks a month I’m currently paying the bank. Done.

But you see, there was a catch. There is always a catch. The catch this time was having to once again bear witness to the idiocy that is the mortgage process (in the US anyway). So I gather together all the financial information. My broker asks if he needs to send a courier over to pick up the stuff. Nope, an encrypted zip file he can download from Dropbox suffices. That was a lot easier, so maybe it won’t be a total clusterf*** this time. Yeah, I was being too optimistic. I knew things were off the rails last Tuesday when I got copied on an email to my home insurance broker needing a quick verification of the policy ahead of a Friday closing. Uh, what? What Friday closing? Is there a Friday closing? Shouldn’t they have shared that information with me, since I’m pretty sure I have to be there? So we schedule the closing for Friday. About midway through Thursday I get a call asking about a credit inquiry from the idiots who do our merchant account for Securosis. Why they inquired about my personal credit is beyond me, but I had to take some time to fill out their stupid form, because the world consists mostly of idiots, and those idiots’ checklists require running personal credit reports for business accounts. Did I mention how much I like checklists? Then I got a call Friday morning. Yes, the day we were supposed to close the note. They need to verify the Boss’s employment.
But the Boss works for my company and I’m the managing member and sole officer of my company. They say that’s no good and they need to verify with someone who is not a party to the loan. I respond that there are no officers who aren’t a party to the loan. I figure they understand that and we’re done. Then Rich calls me wondering why he keeps getting calls from a bank trying to verify the Boss’s employment. Argh. I call the bank and explain that the Boss doesn’t work for Securosis and that she works for a separate company (that happens to own a minority share of Securosis). Idiots. At this point, I still haven’t received the settlement statement from the bank. Then I get a call from the closing attorney wondering if I could meet them at a Starbucks for the closing Friday night. Sure, but they’ll have to send a babysitter to my house to watch my kids. They didn’t think that was funny. So the lawyer agrees to come to my house to close the note. We go through all the paperwork. I verify that I’m not committing mortgage fraud and that I’m not a terrorist. And yes, I really had to sign papers attesting to both. The lawyer (who does about 15-20 closings a week) can’t recall anyone actually admitting to committing mortgage fraud or being a terrorist, but we sign the documents anyway. And then we are done. Or so we thought. I figured it’s 2012 so I should just wire the money, rather than writing a check to cover the prepaid items like escrow, etc. So I dutifully wake up early Monday morning (in NYC) and log into my bank website to transfer the money. Of course, the web app craps out, I’m locked out of the wire transfer function, and the Boss needs to drop whatever she is doing and head over to the bank to wire the money. Yes, that was a pleasant conversation. If getting kicked in the nuts 10 times is your idea of pleasant. But all’s well that ends well. We closed the note and we’ll save a crapload of money over the next 10 years. But man, the process is a mess.
These folks give a new meaning to just in time. For those of you looking for someone to manage incidents or fill another role that requires an unflappable perspective, maybe check out some of these loan processors. They’d laugh at having to only coordinate legal, forensics, law enforcement, and the ops folks. That would be a day in the park for those folks. Seriously. -Mike

Photo credits: “rEEFER mADNESS” originally uploaded by rexdownham

Heavy Research

We’re back at work on a variety of our blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Vulnerability Management Evolution: Enterprise Features and Integration
Vulnerability Management Evolution: Evolution or Revolution?
Watching the Watchers (Privileged User Management): Clouds Rolling In
Watching the Watchers (Privileged User Management): Integration
Understanding and Selecting DSP: Use Cases

Incite 4 U

Human context: Great summary by LonerVamp on some of our very own Myrcurial’s thoughts at this year’s ShmooCon. There is a lot of stuff in there and I agree with most of it. But the idea that resonated most was “knowledge of analysts vs. knowledge of tools,” as I had that very conversation with 15 Fortune-class CISOs this week. And there was no contest. These folks have budget for tools, they have budget for people, and they are still losing the battle. They can’t find the right people. The right folks understand how the data applies to their environment. They have context, which tools just can’t provide. No matter what a vendor tells you. – MR

State of fear:


Vulnerability Management Evolution: Evolution or Revolution?

We have discussed the evolution of vulnerability management from a tactical tool to a much more strategic platform providing decision support for folks to more effectively prioritize security operations and resource allocation. But some vendors may not manage to broaden their platforms sufficiently to remain competitive and fully satisfy their customers’ requirements. So at some point you may face a replacement decision, or to put it more kindly, a decision of evolution or revolution for your vulnerability/threat management platform. Last year we researched whether to replace your SIEM/Log Management platform. That research provides an in-depth process for revisiting your requirements, re-evaluating your existing tool, deciding whether or not to replace, negotiating the deal, and migrating to the new platform. If and when you face a similar decision regarding your vulnerability management platform the process will be largely the same, so check out that research for detail on the replacement process. The difference is that, unlike SIEM platforms, most organizations are not totally unhappy with their current vulnerability tools. And again, in most cases a revolution decision results from the need to utilize additional capabilities available with a competing platform, rather than because the existing tool simply cannot be made to work. The Replacement Decision Let’s start with the obvious: you aren’t really making a decision on the vulnerability management offering – it’s more of a recommendation. The final decision will likely be made in the executive suite. That’s why your process focuses initially on gathering data (quantitative when possible) – because you will need to defend your recommendation until the purchase order is signed. And probably afterwards, especially if a large ‘strategic’ vendor provides your current VM scanner.
This decision generally isn’t about technical facts – especially because there is an incumbent in play, which may be from a big company with important relationships with heavies in your shop. So to make any change you will need all your ducks in a row and a compelling argument. And even then you might not be able to push through a full replacement. In that case the best answer may be to supplement. In this scenario you still scan with the existing tool, but handle the value-add capabilities (web app scanning, attack path analysis, etc.) on the new platform. The replacement decision can really be broken down into a few discrete steps:

Introspection: Start by revisiting your requirements, both short and long term. Be particularly sensitive to how your adversaries’ tactics are changing. Unfortunately we still haven’t found a vendor of reliable crystal balls, but think about how your infrastructure is provisioned and will be provisioned (cloud computing). What will your applications look like, and who will manage them (SaaS)? How will you interact with your business partners? Most important, be honest about what you really need. It’s important to make a clear distinction between stuff you must have and stuff that would be nice to have. Everything looks shiny on a marketing spec sheet. That doesn’t mean you’ll really use those capabilities.

Current Tool Assessment: Does your current product meet your needs? Be careful to keep emotion out of your analysis – most folks get pissed at their existing vendors from time to time. Do some research into the roadmap of your current vendor. Will they support the capabilities you need in the future? If so, when? Do you believe them? Don’t be too skeptical, but if a vendor has a poor track record of shipping new functionality, factor that in.

Alternatives and Substitutions: You should also be surveying the industry landscape to learn about other offerings that might meet your needs. It’s okay to start gathering information from vendors – if a vendor can’t convince you their platform will do what you need, they have no shot at actually solving your problem. But don’t stop with vendors. Talk to other folks using the product. Talk to resellers and other third parties who can provide a more objective perspective on the technology. Do your due diligence, because if you push for a revolution it will be your fault if it doesn’t meet expectations.

Evaluate the Economics: Now that you know which vendors could meet your requirements, what would it cost to get there? How much to buy the software, or is it a service? How does that compare to your current offering? What kind of concessions can you get from the new player to get in the door, and what will the incumbent do to keep your business? Don’t make the mistake of only evaluating the acquisition cost. You should factor in training, integration, and support costs. And understand that you may need to run both offerings in parallel during a migration period, just to make sure you don’t leave a gap in assessment.

Document and Sell: At this point your decision will be clear – at least to you. But you’ll need to document what you want to do and why, especially if it involves bringing in another vendor. Depending on the political situation, consensus might be required among the folks affected by the decision. And don’t be surprised by pushback if you decide on replacement. You never know who plays golf with whom, or what other big deals are on the table that could have an impact on your project.

And ultimately understand that you may not get what you want. It’s an unfortunate reality of life in the big city. Sometimes decisions don’t go your way – no matter how airtight your case is. That’s why we said earlier that you are really only making a recommendation. Many different factors go into a replacement decision for a key technology, and most of them are beyond your control.
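The economics step is easy to shortchange, so it can help to write the comparison down. The sketch below models a three-year cost comparison between an incumbent and a challenger; every figure and category name is a hypothetical assumption for illustration, not vendor pricing.

```python
# Hypothetical three-year cost comparison for a VM platform replacement.
# All numbers are illustrative assumptions, not quotes.

def three_year_cost(acquisition, annual_support, training=0,
                    integration=0, parallel_run=0):
    """Total cost over three years, including the one-time migration
    costs (training, integration, parallel operation) that are easy
    to forget when you only look at the sticker price."""
    return acquisition + (annual_support * 3) + training + integration + parallel_run

# Incumbent: already purchased, so only ongoing support/subscription.
incumbent = three_year_cost(acquisition=0, annual_support=40_000)

# Challenger: cheaper to run, but carries switching costs.
challenger = three_year_cost(acquisition=75_000, annual_support=30_000,
                             training=10_000, integration=15_000,
                             parallel_run=8_000)

print(f"Incumbent (3yr):  ${incumbent:,}")
print(f"Challenger (3yr): ${challenger:,}")
print(f"Replacement premium: ${challenger - incumbent:,}")
```

The point of the exercise isn’t the exact figures – it’s forcing the switching costs into the same ledger as the license, so the recommendation you defend upstairs accounts for everything.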
If your decision is to stay put and evolve the capabilities of your tool into the platform you need, then map out a plan to get there. When will you add the new features? Then you can map out your budgets and funding requests, and work through


[New White Paper] Watching the Watchers: Guarding the Keys to the Kingdom

Given the general focus of most organizations on external attackers, they may miss the attackers who actually have the credentials and knowledge to do some real damage. These are your so-called privileged users, and far too many organizations don’t do much to protect themselves from an attack from that community. By the way, this doesn’t necessarily require a malicious insider. Rather, it’s very possible (if not plausible) that a privileged user’s device gets compromised, giving the attacker access to the administrator’s credentials. Right, that’s a bad day. Thus we’ve written a paper called Watching the Watchers: Guarding the Keys to the Kingdom to describe the problem and offer some ideas on solutions. A compromised P-user can cause all sorts of damage, and so needs to be actively managed. Let’s now talk about solutions. Most analysts favor models to describe things, and we call ours the Privileged User Lifecycle. But pretty as the lifecycle diagram is, first let’s scope it to define beginning and ending points. Our lifecycle starts when the privileged user receives escalated privileges, and ends when they are no longer privileged or leave the organization, whichever comes first. We would like to thank Xceedium for sponsoring the research. Check it out – we think it’s a great overview of an issue facing every organization. At least those with administrators.

Download Watching the Watchers: Guarding the Keys to the Kingdom

The paper is based on the following posts:

Keys to the Kingdom (Introduction)
The Privileged User Lifecycle
Restrict Access
Protect Credentials
Enforce Entitlements
Monitor Privileged Users
Clouds Rolling In
Integration


Incite 4/25/2012: Drafty Draft

It feels like Bizarro World to me. I woke up this morning freezing my backside off. We turned off the heat a few weeks ago and it was something like 65 this morning. Outside it was in the 40s, at the end of April. WTF? And the Northeast has snow. WTF? I had to bust out my sweatshirts, which I had hoped to shelve for the season. Again, WTF? But even a draft of cold weather can’t undermine my optimism this week. Why? Because it’s NFL Draft time. That’s right, I made it through the dark time between the start of free agency and the Draft. You know it’s a slow time – I have been watching baseball and even turned on a hockey game. But the drought is over. Now it’s time to see who goes where. And to keep a scorecard of how wrong all the pundits are in their mock drafts.

Here’s another thing I learned. There are pundits in every business, and the Internet seems to have enabled a whole mess of people to make their livings as pundits. If you follow the NFL you are probably familiar with Mel Kiper, Jr. (and his awesome hair) and Todd McShay, who man the draft desk at ESPN. They almost always disagree, which is entertaining. And Mike Mayock of NFL Network provides great analysis. They get most of the visibility this week, but through the magic of the Twitter I have learned that lots of other folks write for web sites, some big and most small, and seem to follow the NFL as their main occupation. Wait, what? I try not to let my envy gene take over, but come on, man! I say I have a dream job and that I work with smart people doing what I really like. But let’s be honest here – what rabid football fan wouldn’t rather be talking football all day, every day? And make a living doing it. But here’s the issue. I don’t really know anything about football. I didn’t play organized football growing up, as my Mom didn’t think fat Jewish kids were cut out for football.
And rolling over neighborhood kids probably doesn’t make me qualified to comment on explosiveness, change of direction, or fluid hips. I know very little about Xs and Os. Actually, I just learned that an offensive lineman with short arms can’t play left tackle, as speed rushers would get around him almost every time. Who knew? But I keep wondering if my lack of formal training should deter me. I mean, if we make an analogy to the security business, we have a ton of folks who have never done anything starting up blogs and tweeting. Even better, some of them are hired by the big analyst firms and paraded in front of clients who have to make real decisions and spend real money based on feedback from some punk. To be fair there was a time in my career when I was that punk, so I should know. 20 years later I can only laugh and hope I didn’t cost my clients too much money. Maybe I should pull a Robin Sage on the NFL information machine. That would be kind of cool, eh? Worst case it works and I’ll have a great Black Hat presentation. -Mike

Photo credits: “Windy” originally uploaded by Seth Mazow

Heavy Research

We’re back at work on a variety of our blog series, so here is a list of the research currently underway. Remember our Heavy RSS Feed, where you can access all our content in its unabridged glory.

Vulnerability Management Evolution: Core Technologies
Vulnerability Management Evolution: Value-Add Technologies
Vulnerability Management Evolution: Enterprise Features and Integration
Watching the Watchers (Privileged User Management): Clouds Rolling In
Watching the Watchers (Privileged User Management): Integration
Understanding and Selecting DSP: Use Cases
Malware Analysis Quant: Index of Posts

Incite 4 U

Don’t go out without your raincoat: I tip my hat to the folks at Sophos. To figure out a way to compare the infection rate of Chlamydia to the prevalence of Mac malware is totally evil genius. That stat really resonates with me, and wasn’t a good thing for some of my buddies at school. So do 20% of Macs really have malware?
Not exactly – they include the presence of Windows malware, which obviously doesn’t do much harm on Macs. Only 1 in 36 had actual Mac malware, and I’m sure a bunch of those were Flashback users who downloaded AV only after being infected. Though I guess the malware could spread to PCs via VMs and other unsafe computing practices. Of course the Sophos guys never miss an opportunity to make an impassioned plea for Mac AV, especially since it’s free. Reminds me of something my Dad said when I came of age. He told me never to go out without my raincoat on. He was right – just ask my fraternity brothers. I wonder if “The Trojan Man for Mac” would work as the new Sophos tagline? – MR

Killer apps: Will (Mobile) Apps Kill Websites? is Jeff Atwood’s question, and one I have been mulling over for the last few months. All Jeff’s points are spot-on: well-designed apps provide a kick-ass user experience that few web sites can rival. Fast, simple, and tailored for the environment, they are often just better. And considering that mobile devices will outnumber desktops 10:1 in the future, replacement is not hard to imagine. But Jeff’s list of disadvantages should contain a few security issues as well. Namely, none of the protections I use with my desktop browser (NoScript, Ghostery, Flashblock, Adblock, etc.) are available on mobile platforms. Nor do we have fine-grained control over what apps can do, and we cannot currently run outbound firewalls to make sure websites aren’t secretly transmitting our data. Mobile platforms generally offer really good built-in security, but in practice it is gradually becoming harder to protect – and sandbox – apps, similar to challenges we have already faced with desktop browsers. It looks like we get to play security catch-up


Vulnerability Management Evolution: Enterprise Features and Integration

We’re in the home stretch of the Vulnerability Management Evolution research project. After talking mostly about the transition from an audit-centric tactical tool to a much more strategic platform providing security decision support, it is now time to look critically at what’s required to make the platform work in your enterprise. That means providing both built-in tools to help manage your vulnerability management program, and integration with existing security and IT management tools. Remember, it is very rare to have an opportunity to start fresh in a green field. So whether you select a new platform or stay with your incumbent provider, as you add functionality you’ll need to play nicely in your existing sandbox.

Managing the Vulnerability Management Program

We have been around way too long to actually believe that any tool (or toolset) can ever entirely solve any problem, so our research tends to focus on implementing programs to address problems rather than selecting products. Vulnerability management is no different, so let’s list what you need to actually manage the program internally. First you need basic information before you can attempt any kind of prioritization. That has really been the focus of the research to date: taking tactical scans and configuration assessments of the infrastructure and application layers, combining them with perceived asset value and the value-added technologies we discussed in the last post, and running some analytics to produce usable information. But the fun begins once you have an idea of what needs to be fixed and relative priorities.

Dashboards

Given the rate of change in today’s organizations, wading through a 200-page vulnerability report or doing manual differential comparisons of configuration files isn’t efficient or scalable. Add in cloud computing and everything is happening even faster, making automation critical to security operations.
You need the ability to take information and visualize it in ways that make sense for a variety of constituencies. You need an Executive View, providing a high-level picture of current security posture and other important executive-level metrics. You need an operational view to help guide the security team on what they need to do. And you can probably use views for application-specific vulnerabilities, and perhaps infrastructure and database visuals for those folks. Basically you need the flexibility to design an appropriate dashboard/interface for any staffer who needs access to the platform’s information. Most vendors ship a bunch of out-of-the-box options, but more importantly, make sure they offer a user-friendly capability to customize the interface to each staffer’s needs.

Workflow

Unless your IT shop is a one-man (or one-woman) band, some level of communication is required to keep everything straight. With a small enough team a daily coffee discussion might suffice. But that doesn’t scale, so the vulnerability/threat management platform should include the ability to open ‘tickets’, or whatever you call them, to get work done. It certainly doesn’t need to be a full-blown trouble ticket system, but this capability comes in handy if you don’t have an existing support/help desk system. As a base level of functionality, look for the ability to do simple ticket routing, approval/authorization, and indication that work has been done (closing tickets). Obviously you’ll want extensive reporting on tickets, and the ability to give specific staff members lists of the things they should be doing. Straightforward stuff. Don’t forget that any program needs checks and balances, so an integral part of the workflow capability must be enforcement of proper separation of duties, to ensure no individual has too much control over your environment.
That means proper authorization before making changes or remediating issues, and a proper audit trail for everything administrators do with the platform.

Compliance Reporting

Finally, you need to substantiate your controls for the inevitable audits, which means your platform needs to generate documentation to satisfy the auditor’s appetite for information. Okay, it won’t totally satisfy the auditor (as if that were even possible), but it should at least provide a good perspective on what you do and how well it works, with artifacts to prove it. Since most audits break down to some kind of checklist you need to follow, having those lists enumerated in the vulnerability management platform is important and saves a bunch of time. You don’t want to be mapping reports on firewall configurations to PCI Requirement 1 – the tool should do that out of the box. Make sure whatever you choose offers the reports you need for the mandates you are subject to. But reporting shouldn’t end when the auditor goes away. You should also use the reports to keep everyone operationally honest. That means reports showing similar information to the dashboards we outlined above. You’ll want your senior folks to get periodic reports on open vulnerabilities and configuration problems, newly opened attack paths, and systems that can be exploited by the pen test tool. Similarly, operational folks might get reports of their overdue tasks, or efficiency reports showing how quickly they remediate their assigned vulnerabilities. Again, look for customization – everyone seems to want the information in their own format. Dashboards and reporting are really the yin/yang of managing any security-oriented program, so make sure the platform provides the flexibility to display and disseminate information however you need it.
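As a concrete illustration, the separation-of-duties and audit trail requirements described above amount to a few guard conditions in the ticket workflow. Here is a minimal Python sketch – the field names and rules are our own invention, not any vendor’s implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional, Tuple

@dataclass
class Ticket:
    ticket_id: str
    finding: str                      # e.g. "weak SSL config on db-prod-03"
    opened_by: str
    approved_by: Optional[str] = None
    closed_by: Optional[str] = None
    audit_trail: List[Tuple[str, str, str]] = field(default_factory=list)

    def _log(self, actor: str, action: str) -> None:
        # Every action is timestamped and recorded for the audit trail
        self.audit_trail.append(
            (datetime.now(timezone.utc).isoformat(), actor, action))

    def approve(self, approver: str) -> None:
        # Separation of duties: whoever opened the ticket cannot
        # authorize the remediation it requests.
        if approver == self.opened_by:
            raise PermissionError("opener cannot approve their own ticket")
        self.approved_by = approver
        self._log(approver, "approved")

    def close(self, implementer: str) -> None:
        # Work must be authorized first, and the approver cannot also
        # be the one who marks the work complete.
        if self.approved_by is None:
            raise PermissionError("cannot close an unapproved ticket")
        if implementer == self.approved_by:
            raise PermissionError("approver cannot close their own approval")
        self.closed_by = implementer
        self._log(implementer, "closed")
```

The point is less the code than the shape: authorization checks and logging live in the workflow itself, so no single administrator can open, approve, and close their own work unobserved.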
Enterprise Integration

As we mentioned, in today’s technology environment nothing stands alone, so when looking at this evolved vulnerability management platform, how well it integrates with what you already have is a strong consideration. But you have a lot of stuff, right? So let’s prioritize integration a bit. Patch/Config Management: In the value-add technologies piece, we speculated a bit on the future evolution of common platforms for vulnerability/threat and configuration/patch management. As hinted there, tight integration between these two functions is critical. You will probably hear the term vulnerability validation to describe this integration, but it basically means closing the loop between assessment and remediation. So when an issue is identified by the VM platform, the fix is made (presumably by the patch/config tool) and

Share:
Read Post

Vulnerability Management Evolution: Value-Add Technologies

So far we have talked about scanning infrastructure and the application layer, before jumping into some technology decisions you face, such as how to deal with cloud delivery and agents. But as much as these capabilities increase the value of the vulnerability management system, they are still not enough to really focus security efforts and prioritize the hundreds (if not thousands) of vulnerabilities or configuration problems you’ll find. So let’s look at a few emerging capabilities that help make the information gleaned from scans and assessments more relevant to the operational decisions you make every day. These capabilities are not common to all the leading vulnerability management offerings today. But we expect most (if not all) to become core capabilities of these platforms in some form over the next 2-3 years, so watch for increasing M&A and technology integration around these functions.

Attack Path Analysis

If a tree falls in the woods and no one hears it, has it really fallen? The same question can be asked about a vulnerable system. If an attacker can’t get to the vulnerable device, is it really vulnerable? The answer is yes, it’s still vulnerable, but clearly less urgent to remediate. So tracking which assets are accessible to a variety of potential attackers becomes critical for an evolved vulnerability management platform. Typically this analysis is based on ingesting firewall rule sets and router/switch configuration files. With some advanced analytics the tool determines whether an attacker could (theoretically) reach the vulnerable devices. This adds a critical third leg to the “oh crap, I need to fix it now” decision process depicted below. Obviously most enterprises have fairly complicated networks, which means an attack path analysis tool must be able to crunch a huge amount of data to work through all the permutations and combinations of possible paths to each asset.
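Stripped to its essence, the path analysis described above is a reachability question over a graph whose edges come from permit rules. A toy sketch follows – real products must handle NAT, routing, and enormous rule sets, and the segment names here are entirely hypothetical:

```python
from collections import deque

def reachable(edges, source, target):
    """Breadth-first search over the allowed-traffic graph; each edge
    represents a permitted connection derived from a firewall rule."""
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, set()).add(dst)
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in adjacency.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical segments: the internet can reach the DMZ web tier, which
# can reach the app tier; only the management segment reaches the database.
allowed = [("internet", "dmz-web"), ("dmz-web", "app"), ("mgmt", "db")]

reachable(allowed, "internet", "app")  # True  - exposed, fix first
reachable(allowed, "internet", "db")   # False - vulnerable but less urgent
```

A vulnerability on the app tier is internet-reachable and urgent; the same vulnerability on the database segment is not, which is exactly the prioritization signal attack path analysis adds.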
You should also look for native support of the devices (firewalls, routers, switches, etc.) you use, so you don’t have to do a bunch of manual data entry – given the frequency of change in most environments, that would likely be a complete non-starter. Finally, make sure the visualizations and reports on paths present the information in a way you can use. By the way, attack path analysis tools are not really new. They have existed for a long time, but never achieved broad market adoption. As you know, we’re big fans of Mr. Market, which means we need to get critical for a moment and ask: what’s different now that would enable the market to develop? First, integration with the vulnerability/threat management platforms makes this information part of the threat management cycle rather than a stand-alone function, and that’s critical. Second, current tools can finally offer analysis and visualization at enterprise scale. So we expect this technology to be a key part of the platforms sooner rather than later; we already see some early technical integration deals and expect more.

Automated Pen Testing

Another key question raised by a long vulnerability report needs to be: “Can you exploit the vulnerability?” Like a vulnerable asset without a clear attack path, if a vulnerability cannot be exploited – thanks to some other control or the lack of a weaponized exploit – remediation becomes less urgent. For example, perhaps you have a HIPS product deployed on a sensitive server that blocks attacks against a known vulnerability. Obviously your basic vulnerability scanner cannot detect that, so the vulnerability will be reported just as urgently as every other one on your list. Having the ability to actually run exploits against vulnerable devices as part of a security assurance process can provide perspective on what is really at risk, versus just theoretically vulnerable.
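To show how exploit validation might feed the “fix it now” decision alongside attack path data, here is a toy prioritization function. The multipliers are entirely invented for illustration – real platforms use far richer models:

```python
def remediation_priority(cvss, reachable_from_attacker, exploit_validated):
    """Toy urgency score combining severity, attack path analysis, and
    pen test results. Weights are illustrative only.

    exploit_validated: True (exploit worked), False (exploit blocked or
    failed, e.g. a HIPS got in the way), or None (not yet tested)."""
    score = cvss
    if not reachable_from_attacker:
        score *= 0.3   # no known attack path to the asset
    if exploit_validated is False:
        score *= 0.5   # a compensating control blocked the exploit
    elif exploit_validated is True:
        score = min(10.0, score * 1.5)  # confirmed exploitable: fix now
    return round(score, 1)

remediation_priority(8.0, True, None)    # 8.0  - untested but reachable
remediation_priority(10.0, False, None)  # 3.0  - severe but unreachable
remediation_priority(7.0, True, True)    # 10.0 - reachable and exploitable
```

The specific numbers don’t matter; the point is that reachability and validated exploitability can demote a scary-looking CVSS score or promote a modest one.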
In an integrated scenario a discovered vulnerability can be tested for exploitability immediately, to either shorten the window of exposure or provide immediate reassurance. Of course there is risk with this approach, including the possibility of taking down production devices, so use pen testing tools with care. But to really know what can be exploited and what can’t, you need to use live ammunition. And be sure to use fully vetted, professionally supported exploit code. You should have a real quality assurance process behind the exploits you try. It’s cool to have an open source exploit, and using less stable code on a test/analysis network is fine. But you probably don’t want to launch an untested exploit against a production system. Not if you like your job, anyway.

Compliance Automation

In the rush to get back to our security roots, many folks have forgotten that the auditor is still going to show up every month/quarter/year, and you need to be ready. That process burns resources that could otherwise be used on more strategic efforts, just like everything else. Vulnerability scanning is a critical part of every compliance mandate, so scanners have pumped out PCI, HIPAA, SOX, NERC/FERC, etc. reports for years. But that’s only the first step in compliance automation. Auditors need plenty of other data to determine whether your control set is sufficient to satisfy the regulation. That includes things like configuration files, log records, and self-assessment questionnaires. We expect to see increasingly robust compliance automation in these platforms over time. That means a workflow engine to help you manage getting ready for your assessment. It means a flexible integration model to allow storage of additional unstructured data in the system. The goal is to ensure that when the auditor shows up, your folks have already assembled all the data they need and can access it easily.
The easier that is, the sooner the auditor will go away and let your folks get back to work.

Patch/Configuration Management

Finally, you don’t have to stretch to see the value of broader configuration and/or patch management capabilities within the vulnerability management platform. You are already finding what’s wrong (either vulnerable or improperly configured), so why not just fix it? Clearly there is plenty of overlap with existing configuration and patching tools, and you could just as easily make the case that those tools can and should add vulnerability management

Share:
Read Post

Watching the Watchers: Integration

As we wrap up Watching the Watchers, it’s worth reminding ourselves of the reality of enterprise security today. Nothing stands alone – not in the enterprise management stack anyway – so privileged user management functions need to play nicely with the other management tools. There are levels of integration required: some functions need to be joined at the hip, while others can be mere acquaintances.

Identity Integration

Given that the ‘U’ in PUM stands for user, clearly Identity infrastructure is one of the categories that needs to be tightly coupled. What does that mean? We described the provisioning/entitlements requirement in the Privileged User Lifecycle. But Identity is a discipline in itself, so we cannot cover it in real depth in this series. In terms of integration, your PUM environment needs to natively support your enterprise directory. It doesn’t really work to have multiple authoritative sources for users. Privileged users are, by definition, a subset of the user base, so they reside in the main user directory. This is critical, both for provisioning new users and for deprovisioning those who no longer need specific entitlements. Again, the PUM Lifecycle needs to enforce entitlements, but the groupings of administrators are stored in the enterprise directory. Another requirement for identity integration is support for two-factor authentication. PUM protects the keys to the kingdom, so if a proxy gateway is part of your PUM installation, it’s essential to ensure a connecting privileged user is actually the real user. That requires some kind of multi-factor authentication, to protect against an administrator’s device being compromised and an attacker thereby gaining access to the PUM console. That would be a bad day. We don’t have any favorites among stronger authentication methods, though we note that most organizations opt for tried-and-true hard tokens.

Management Infrastructure

Another area of integration is the enterprise IT management stack.
You know, the tools that manage data center and network operations. This may include configuration, patching, and performance management. The integration is mostly about pushing alerts to an ops console. For instance, if the PUM portal is under a brute force password attack, you probably want to notify the ops folks to investigate. The PUM infrastructure also consists of devices, so there will be some device health information that could be useful to ops. If a device goes down or an agent fails, alerts should be sent over to the ops console. Finally, you will want some kind of help desk integration. Some ops tickets may require access to the PUM console, so being able to address a ticket and close it out directly in the PUM environment could streamline operations.

Monitoring Infrastructure

The last area of integration is monitoring infrastructure. Yes, your SIEM/Log Management platform should be the target for any auditable event in the PUM environment. First of all, a best practice for log management is to isolate the logs on a different device, to ensure log records aren’t tampered with in the event of a compromise. Frankly, if your PUM proxy is compromised you have bigger problems than log isolation, but you should still exercise care in protecting the integrity of the log files, and perhaps they can help you address those larger issues. Sending events over to the SIEM also provides more depth for user activity monitoring. Obviously a key aspect of PUM is privileged user monitoring, but that pertains only when users access server devices with their enhanced privileges. The SIEM watches a much broader slice of activity, which includes access to applications, email, etc. Don’t expect to start pumping PUM events into the SIEM and have fairy dust drift out of the dashboard. You still need to do the work to add correlation rules that leverage the PUM data, update reports, etc.
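To make the SIEM integration concrete: PUM events typically arrive as syslog lines in a structured format such as CEF (Common Event Format). A rough sketch follows – the vendor name, product name, and extension fields are all hypothetical, not any real product’s schema:

```python
def pum_event_to_cef(event):
    """Render a PUM audit event as a CEF line for SIEM ingestion.
    CEF header: CEF:Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity
    followed by key=value extension fields."""
    header = "CEF:0|ExamplePUM|Gateway|1.0|{sig}|{name}|{sev}".format(
        sig=event["signature_id"],
        name=event["name"],
        sev=event["severity"])
    extension = "suser={user} dst={target} act={action}".format(
        user=event["user"],
        target=event["target"],
        action=event["action"])
    return header + "|" + extension

# A hypothetical privileged credential checkout event
event = {
    "signature_id": 100,
    "name": "Privileged Login",
    "severity": 5,
    "user": "admin42",
    "target": "10.0.0.12",
    "action": "checkout",
}
pum_event_to_cef(event)
# "CEF:0|ExamplePUM|Gateway|1.0|100|Privileged Login|5|suser=admin42 dst=10.0.0.12 act=checkout"
```

Once events land in the SIEM in a normalized form like this, the correlation rules mentioned above can join privileged activity against the broader stream of application and email activity.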
We discuss the process of managing SIEM rule sets fairly extensively in both our Understanding and Selecting SIEM/Log Management and Monitoring Up the Stack papers. Check them out if you want more detail on that process. And with that, we wrap up this series. Over the next few weeks we will package up the posts into a white paper and have our trusty editor (the inimitable Chris Pepper) turn this drivel into coherent copy.

Share:
Read Post

Vulnerability Management Evolution: Core Technologies

As we discussed in the last couple of posts, any VM platform must be able to scan infrastructure and scan the application layer. But that’s still mostly tactical stuff. Run the scan, get a report, fix stuff (or not), and move on. When we talk about a strategic and evolved vulnerability management platform, the core technology needs to evolve to serve more than merely tactical goals – it must provide a foundation for a number of additional capabilities. Before we jump into the details, we will reiterate the key requirements. You need to be able to scan/assess:

  • Critical Assets: This includes the key elements in your critical data path; it requires both scanning and configuration assessment/policy checking for applications, databases, server and network devices, etc.
  • Scale: Scalability requirements are largely in the eye of the beholder. You want to be sure the platform’s deployment architecture will provide timely results without consuming all your network bandwidth.
  • Accuracy: You don’t have time to mess around, so you don’t want a report with 1,000 vulnerabilities, 400 of them false positives. There is no way to totally avoid false positives (aside from not scanning at all), so accuracy is a key selection criterion.

Yes, that was pretty obvious. With a mature technology like vulnerability management the question is less about what you need to do and more about how – especially when positioning for evolution and advanced capabilities. So let’s first dig into the foundation of any kind of strategic platform: the data model.

Integrated Data Model

What’s the difference between a tactical scanner and an integrated vulnerability/threat management platform? Data sharing, of course. The platform needs the ability to consume and store more than just scan results. You also need configuration data, third-party and internal research on vulnerabilities, research on attack paths, and a bunch of other data types we will discuss in the next post on advanced technology.
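To illustrate what such an extensible model might look like, here is a minimal sketch of an asset record: fixed identity fields plus an open attribute bag, namespaced by data source, so a new scanner or research feed doesn’t force a schema change. The class and field names are our own invention, not any product’s:

```python
class AssetRecord:
    """Minimal extensible asset record. Identity fields are fixed;
    everything else lives in a per-source attribute bag, so new data
    sources can contribute without schema changes. Illustrative only."""

    def __init__(self, asset_id, hostname):
        self.asset_id = asset_id
        self.hostname = hostname
        self.attributes = {}

    def ingest(self, source, data):
        # Merge new key/value pairs under the contributing source,
        # e.g. source="scan", data={"open_ports": [22, 443]}
        self.attributes.setdefault(source, {}).update(data)

    def query(self, source, key, default=None):
        return self.attributes.get(source, {}).get(key, default)
```

Scan results, configuration assessments, and attack path research all land on the same record, which is what makes cross-source correlation and prioritization possible in the first place.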
Flexibility and extensibility are key for the data schema. Don’t get stuck with a rigid schema that won’t let you add whatever data you need to most effectively prioritize your efforts – whatever data that turns out to be. Once the data is in the foundation, the next requirement is analytics. You need to set alerts and thresholds on the data, and be able to correlate disparate information sources to glean perspective and help with decision support. We are focused on more effectively prioritizing security team efforts, so your platform needs analytical capabilities to help turn all that data into useful information. When you start evaluating specific vendor offerings you may get dragged into a religious discussion of storage approaches and technologies – you know, whether a relational backend, an object store, or even a proprietary flat file system provides the performance, flexibility, etc. to serve as the foundation of your platform. Understand that it really is a religious discussion. Your analysis efforts need to focus on the scale and flexibility of whatever data model underlies the platform. Also pay attention to evolution and migration strategies, especially if you plan to stick with your current vendor as they move to a new platform. This transition is akin to a brain transplant, so make sure the vendor has a clear and well-thought-out path to the new platform and data model. Obviously if your vendor stores their data in the cloud it’s not your problem, but don’t put the cart before the horse. We will discuss cloud versus customer premises later in this post.

Discovery

Once you get to platform capabilities, first you need to find out what’s in your environment. That means a discovery process to find devices on your network and make sure everything is accounted for.
You want to avoid the “oh crap” moment when a bunch of unknown devices show up – and you have no idea what they are, what they have access to, or whether they are steaming piles of malware. Or at least shorten the window between something showing up on your network and the “oh crap” discovery moment. There are a number of techniques for discovery, including actively scanning your entire address space for devices and profiling what you find. That works well enough and tends to be the main way vulnerability management offerings handle discovery, so active discovery is still table stakes for VM offerings. You need to balance the network impact of active discovery against the need to quickly find new devices. Also make sure you can search your networks completely, which means both your IPv4 space and your emerging IPv6 environment. Oh, you don’t have IPv6? Think again. You’d be surprised how many devices ship with IPv6 active by default, and if you don’t plan to discover that address space as well, you’ll miss a significant attack surface. You never want to hold up a network deployment while your VM vendor gets their act together. You can supplement active discovery with a passive capability that monitors network traffic and identifies new devices based on network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities identified, but the primary goal of passive monitoring is to find new unmanaged devices faster. Once a new device is identified passively, you can launch an active scan to figure out what it’s doing. Passive discovery is also helpful for devices that use firewalls to block active discovery and vulnerability scanning. But that’s not all – depending on the breadth of your vulnerability/threat management program, you might want to include endpoints and mobile devices in the discovery process. We always want more data, so we favor discovering all the assets in your environment.
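The “oh crap” check itself is essentially a set difference between what discovery finds and what the asset inventory already knows. A minimal sketch, with hypothetical addresses:

```python
def unknown_devices(discovered, inventory):
    """Anything discovery found that the asset inventory doesn't know
    about is an 'oh crap' candidate for immediate active scanning."""
    return sorted(set(discovered) - set(inventory))

# Hypothetical addresses; note the IPv6 entries - that address space
# needs discovering too, or you miss attack surface.
inventory  = {"10.0.0.5", "10.0.0.8", "fd00::1"}
discovered = {"10.0.0.5", "10.0.0.8", "10.0.0.99", "fd00::7"}

unknown_devices(discovered, inventory)  # ['10.0.0.99', 'fd00::7']
```

Whether the discovered set comes from active scanning, passive monitoring, or both, the leftover unknowns are what you profile next and fold into the prioritization process.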
That said, for determining what’s important in your environment (see the asset management/risk scoring section below), endpoints tend to be less important than databases with protected data, so prioritize the effort you expend on discovery and assessment. Finally, another complicating factor for discovery is the cloud. With the ability to spin up and take down instances at will, your platform needs to both track and assess

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.