Monday, June 21, 2010

FireStarter: Is Full Disk Encryption without Pre-Boot Secure?

By Rich

This FireStarter is more of a real conversation starter than a definitive statement designed to rile everyone up.

Over the past couple months I’ve talked with a few organizations – some of them quite large – deploying full disk encryption for laptops but skipping the pre-boot environment.

For those of you who don’t know, nearly every full drive encryption product works by first booting up a mini-operating system. The user logs into this mini-OS, which then decrypts and loads the main operating system. This ensures that nothing is decrypted without the user’s credentials.
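The core idea – that the disk's encryption key is only recoverable with the user's credentials – can be illustrated with a toy sketch. This is not how any particular FDE product works (real products use proper key wrapping such as AES key wrap, TPMs, and so on); the XOR "wrapping" and all the names here are simplifications for illustration only:

```python
import hashlib
import secrets

def wrap_key(volume_key: bytes, passphrase: str, salt: bytes) -> bytes:
    # Derive a key-encryption key (KEK) from the user's passphrase.
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                              100_000, dklen=len(volume_key))
    # Toy XOR "wrapping" stands in for a real key-wrap algorithm.
    return bytes(a ^ b for a, b in zip(volume_key, kek))

def unwrap_key(wrapped: bytes, passphrase: str, salt: bytes) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                              100_000, dklen=len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, kek))

salt = secrets.token_bytes(16)
volume_key = secrets.token_bytes(32)  # the key that actually encrypts the disk

wrapped = wrap_key(volume_key, "correct horse", salt)

# Only the right passphrase recovers the volume key at pre-boot.
assert unwrap_key(wrapped, "correct horse", salt) == volume_key
assert unwrap_key(wrapped, "wrong guess", salt) != volume_key
```

Skipping pre-boot authentication effectively means the wrapped key gets unwrapped automatically with stored credentials, which is why the protection largely evaporates.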

It can be a bit of a problem for installing software updates, because if the user isn’t logged in you can’t get to the operating system, and if you kick off a reboot after installing a patch it will stall at pre-boot. But every major product has ways to manage this. Typically they allow you to set a “log in once” flag in the pre-boot environment for software updates, but there are a couple of other ways to deal with it. I consider this problem essentially solved, based on the user discussions I’ve had.

Another downside is that users need to log into pre-boot before the operating system. Some organizations deploy their FDE to require two logins, but many more synchronize the user’s Windows credentials to the pre-boot, then automatically log into Windows (or whatever OS is being protected). Both seem fine to me, and one of the differentiators between various encryption products is how well they handle user support, password changes, and other authentication issues in pre-boot.

But I’m now hearing of people deploying a FDE product without using pre-boot. Essentially (I think) they reverse the process I just described and automatically log into the pre-boot environment, then have the user log into Windows. I’m not talking about the tricky stuff a near-full-disk-encryption product like Credent uses, but skipping pre-boot altogether.

This seems fracking insane to me. You somewhat reduce the risk of a forensic evaluation of the drive, but lose most of the benefits of FDE.

In every case, the reason given is, “We don’t want to confuse our users.”

Am I missing something here? In my analysis this obviates most of the benefits of FDE, making it a big waste of cash.

Then again, let’s think about compliance. Most regulations say, “Thou shalt encrypt laptop drives.” Thus, this seems to tick the compliance checkbox, even if it’s a bad idea from a security perspective.

Also, realistically, the vast majority of lost drives don’t result in the compromise of data. I’m unaware of any non-targeted breach where a lost drive resulted in losses beyond the cost of dealing with breach reporting. I’m sure there have been some, but none that crossed my desk.

—Rich

Return of the Security Start-up?

By Mike Rothman

As Rich described on Friday, he, Adrian, and I were sequestered at the end of last week working on our evil plans for world domination. But we did take some time for meetings, and we met up with a small company, the proverbial “last company standing” in a relatively mature market. All their competitors have been acquired and every deal they see involves competing with a multi-billion dollar public company.

After a few beers, we reminisced about the good old days when it was cool to deal with start-ups – when the big companies were at a disadvantage, since it was lame to buy from huge monoliths. I probably had dark hair back then, but after the Internet bubble burst and we went through a couple of recessions, most end user organizations now opt for big and stable vendors – not small and exciting.

This trend was compounded by the increasing value of suites in maturing markets, and most of security has been maturing rapidly. There is no award for doing system integration on the endpoint or the perimeter anymore. It’s just easier to buy integrated solutions which satisfy requirements from a single vendor. Add in the constant consolidation of innovative companies by the security and big IT aggregators, and there has been a real shift away from start-ups.

But there is a downside of this big company reign. Innovation basically stops at big companies because the aggregators are focused on milking the installed base and not necessarily betting the ranch on new features. Most of the big security companies aren’t very good at integrating acquired technology into their stacks either. So you take an exciting start-up, pay them a lot of money, and then let the technology erode as the big company bureaucracy brings the start-up to its knees. A majority of the brain power leaves and it’s a crap show.

Of course, not every deal goes down like this. But enough do that it’s the exception when an acquisition isn’t a total train wreck a year later.

So back to my small company friends. Winning as a small company is all about managing the perception of risk in doing business with them. There is funding/viability risk, as more than a couple small security companies have gone away over the past few years, leaving customers holding the bag. Most big companies take a look at the balance sheet of a start-up and it’s a horror show (at least relative to what they are used to), so the procurement group blows a gasket when asked to write a substantial check to a start-up. There is also technology risk, in that smaller companies can’t do everything so they might miss the next big thing. Small companies need good answers on both these fronts to have any shot of beating a large entrenched competitor. It’s commonly forgotten, but small companies do innovate, and that cliche about them being more nimble is actually true. Those advantages need to be substantiated during the sales cycle to address those risks.

But end users also face risks outside of the control of a small company. Things like acquisition risk, which is the likelihood of the small company being acquired and then going to pot. And integration risk, where the small company does not provide integration with the other solutions the end user needs, and has no resources to get it done. All of these are legitimate issues facing an end user trying to determine the right product to solve his/her problem.

As an end user, is it worth taking these risks on a smaller company? The answer depends on the sophistication of the requirement. If the requirement can be met out of the box and the current generation of technology meets your needs, then it’s fine to go with the big company. The reality of non-innovation and crappy integration from a big company isn’t a concern. As long as the existing feature set solves your problems, you’ll be OK.

It’s when you are looking at either a less mature market or requirements that are not plain vanilla where the decision becomes a bit murky. Ultimately it rests on your organization’s ability to support and integrate the technology yourself, since you can’t guarantee that the smaller company will survive or innovate for any length of time. But there are risks in working with large companies as well. Don’t forget that acquired products languish or even get worse (relative to the market) once acquired, and the benefits of integration don’t necessarily materialize. So the pendulum swings both ways in evaluating risks relative to procurement.

And you thought risk management was only about dealing with the risk of attack?

There are some tactics end users can use to swing things the right way. Understand that while negotiating the original PO with a small company, you have leverage. You can get them to add features you need or throw in deployment resources or cut the price (especially at the end of the quarter). Once the deal closes (and the check clears), they’ll move on to the next big deal. They have to – the small company is trying to survive. So get what you can before you cut the check.

So back to the topic of this post: are we going to see a return of the security start-up? Can smaller security companies survive and prosper in the face of competition from multi-billion dollar behemoths? We think there is a role for the security start-up, providing innovation and responsiveness to customer needs – something big companies do poorly. But the secret is to find the small companies that act big. Not by being slow, lumbering, and bureaucratic, but by aligning with powerful OEM and reseller partners to broaden market coverage. And having strong technology alliances to deliver a broader product than a small company can deliver themselves.

Yes, it’s possible, but we don’t see a lot of it. There are very few small companies out there doing anything innovative. That’s the real issue. Even if you wanted to work with a small company, finding one that has the right mix of decent product in a growing market, non-horrifying balance sheet and funding prospects, and interesting roadmap is not easy. That’s the real downside of the big company/small company pendulum. For the last few years, fewer and fewer new security companies have been funded (as investors tried to make their existing investments work), and that’s resulted in fewer companies and (much) less innovation.

With the lack of liquidity (no IPO market, few high multiple M&A deals), it’s hard to see how this might change any time soon. VCs won’t jump back in until they think they can make money. There are still a lot of crappy small companies out there trying to get bought, so the buyers can be picky and drive hard bargains. That means end users will be working with bigger companies (with all the heartburn that entails) for the foreseeable future. The market could improve, welcoming small outfits and lots of innovation – it just doesn’t seem likely, at least for a couple of years.

—Mike Rothman

Friday, June 18, 2010

Friday Summary: June 18, 2010

By Rich

Dear Securosis readers,

The Friday Summary is currently unavailable. Our staff is at an offsite in an undisclosed location completing our world domination plans. We apologize for the inconvenience, and instead of our full summary of the week’s events here are a few links to keep you busy. If you need more, Mike Rothman suggests you “find your own &%^ news”.

Mike’s attitude does not necessarily represent Securosis, even though we give him business cards.

Thank you, we appreciate your support, and turn off your WiFi!

Securosis Posts

Other News

—Rich

Thursday, June 17, 2010

DB Quant: Protect Metrics, Part 1, DAM Blocking

By Rich

Now it’s time for the Protect phase, where we start applying database-specific preventative security controls. First up? Back to Database Activity Monitoring… this time in blocking mode.

Our DAM Blocking process is:

  1. Identify
  2. Define
  3. Deploy
  4. Document
  5. Manage

(Manage wasn’t in our original post, but we have added it after additional research and in response to your feedback).

Identify

Variable | Notes
Time to identify databases |
Time to identify activity to block | Some of this assessment occurs in the Planning phase
Cost of DAM blocking tool | May already be accounted for

Define

Variable | Notes
Time to select blocking method |
Time to create rules and policies |
Time to specify incident handling and review |

Deploy

Variable | Notes
Time to integrate blocking |
Time to configure and test rules | May include time to build behavioral profiles
Time to deploy rules |
Time to evaluate effectiveness |

Document

Variable | Notes
Time to document policies and event handling |

Manage

Variable | Notes
Time to handle incidents |
Time to tune policies |
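Since Quant is an open metrics model, the per-phase variables above roll up naturally into a cost estimate: hours per variable times a loaded hourly rate, plus any fixed tool cost. A minimal sketch – all hours, rates, and the tool cost below are made-up placeholder numbers, not recommendations:

```python
# Hypothetical roll-up of the DAM blocking metrics above.
HOURLY_RATE = 120.0  # assumed loaded cost per staff hour

phases = {
    "Identify": {"identify databases": 8, "identify activity to block": 16},
    "Define":   {"select blocking method": 4, "create rules and policies": 24,
                 "specify incident handling and review": 8},
    "Deploy":   {"integrate blocking": 16, "configure and test rules": 40,
                 "deploy rules": 8, "evaluate effectiveness": 16},
    "Document": {"document policies and event handling": 8},
    "Manage":   {"handle incidents": 20, "tune policies": 12},  # recurring
}

tool_cost = 25_000.0  # fixed cost of the DAM blocking tool, if not already owned

labor = sum(h for phase in phases.values() for h in phase.values()) * HOURLY_RATE
total = labor + tool_cost
print(f"Labor: ${labor:,.0f}  Total: ${total:,.0f}")
# → Labor: $21,600  Total: $46,600
```

Note the Manage phase is recurring, so in a real model you would multiply it by the number of periods rather than counting it once.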

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Secure Metrics, Part 4, Shield
  31. DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring
  32. DB Quant: Monitoring Metrics, Part 2, Audit

—Rich

NSO Quant: Manage IDS/IPS Process Map

By Mike Rothman

After posting half of the manage process map (Firewalls) earlier this week, now we move to managing IDS/IPS devices (remember monitoring servers is in scope, but managing servers is not). The first thing you’ll notice is this process is a bit more complicated, mostly because we aren’t just dealing with policies/rules, but also attack signatures and other heuristics used to detect attacks. That adds another layer of information required to build the policies that govern use of the device. So we have expanded the definition of the top area to Content Management, which includes both policies/rules and signatures.

Content Management

In this phase, we manage the content that underpins the IDS/IPS. This includes both attack signatures and the policies/rules that control the actions triggered by signature matches.

Policy Management Sub-Process

Policy Review

Given the number of potential monitoring and blocking policies available on an IDS/IPS, it’s important to keep the device up to date. Keep in mind the severe performance hit (and false positive issues) of deploying too many policies on each device. It is a best practice to periodically review IDS/IPS policy and prune rules that are obsolete, duplicative, risky (providing unwanted exposure), or otherwise unneeded. Catalysts for policy review may include signature updates, service requests (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy updates resulting from the operational management of the device (the change management process described below).

Define/Update Policies & Rules

This step involves defining the depth and breadth of the IDS/IPS policies, including the actions (block, alert, log) taken by the device in the event of a signature (or series of signatures) being triggered. Note that as the capabilities of IDS/IPS devices continue to expand, the term “signature” is used generically for matching a specific attack condition. Time-limited policies may also be deployed, to activate (or deactivate) certain policies that are short term in nature. Logging, alerting, and reporting policies are also defined in this step.

It’s important here to consider the hierarchy of policies that will be implemented on the devices. The chart at right shows a sample hierarchy with organizational policies at the highest level, which may then be supplemented (or even supplanted) by business unit or geographic policies. Those feed the specific policies and/or rules implemented at each location, which then filter down to a particular device. Designing a hierarchy to properly leverage policy inheritance can either dramatically increase or decrease the complexity of the device’s content.
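Policy inheritance of this kind maps cleanly onto a lookup chain: device-level settings override location settings, which override business unit settings, which override organizational defaults. A minimal sketch (the policy names and values are hypothetical, not drawn from any product):

```python
from collections import ChainMap

# Hypothetical four-level hierarchy: first mapping with a key wins.
org      = {"sql_injection": "block", "p2p": "alert", "log_level": "standard"}
biz_unit = {"p2p": "block"}          # BU tightens the org default
location = {"log_level": "verbose"}  # this site needs richer logs
device   = {}                        # nothing device-specific yet

effective = ChainMap(device, location, biz_unit, org)

assert effective["sql_injection"] == "block"  # inherited from org
assert effective["p2p"] == "block"            # BU override wins
assert effective["log_level"] == "verbose"    # location override wins
```

The complexity trade-off mentioned above falls out of this structure: put a rule at the right level once and every device inherits it; put it at the wrong level and you end up repeating (or fighting) it at every device below.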

Initial deployment of the policies should include a Q/A process to ensure none of the rules impacts the ability of critical applications to communicate either internally or externally.

Document Policy Changes

As the planning stage is an ongoing process, documentation is important for operational and compliance purposes. This step lists and details whatever changes have been made to the policies and associated operational standards/guidelines/requirements.

Signature Management Sub-Process

Monitor for Release/Advisory

Identify signature sources for the devices, and then monitor them on an ongoing basis for new signatures. Since attacks emerge constantly, it’s important to follow an ongoing process to keep the IDS/IPS devices current.

Evaluate

Perform the initial evaluation of the signature(s) to determine if it applies within your organization, what type of attack it detects, and if it’s relevant to your environment. This is the initial prioritization phase to determine the nature of the new/updated signature(s), its relevance and general priority for your organization, and any possible workarounds.

Acquire

Locate the signature, acquire it, and validate the integrity of the signature file(s). Since most signatures are downloaded these days, this is to ensure the download completed properly.
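Integrity validation usually means comparing the downloaded file against a digest (or detached signature) published by the vendor. A minimal sketch of the digest check – the function name and the sample bundle are illustrative, and real feeds may use GPG signatures or vendor-specific mechanisms instead:

```python
import hashlib

def verify_signature_file(data: bytes, expected_sha256: str) -> bool:
    """Check a downloaded signature bundle against its published digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

bundle = b"fake signature pack v1234"  # stand-in for the downloaded file
published = hashlib.sha256(bundle).hexdigest()  # from the vendor's site

assert verify_signature_file(bundle, published)          # clean download
assert not verify_signature_file(bundle + b"x", published)  # corrupt/tampered
```

This catches both truncated downloads and tampering in transit, which matters given that the IDS/IPS will act on whatever content it is fed.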

Change Management

This phase encompasses additions, deletions, and other changes to the IDS/IPS rules and signatures.

Change Request

Based on either a signature or a policy change within the Content Management process, a change to the IDS/IPS device(s) is requested.

Authorize

Authorization involves ensuring the requestor is allowed to request the change, as well as determining the relative priority of the change to slot into an appropriate change window. Prioritize based on the nature of the signature/policy update and potential risk of the attack occurring. Then build out a deployment schedule based on your prioritization, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if the change involves downtime or changes to application usage.

Test & Assess Impact

Develop test criteria, perform any required testing, analyze the results, and approve the signature/rule change for release once it meets your requirements. Testing should include signature installation, operation, and performance impact. Changes may be implemented in “log-only” mode to understand their impact before approving them for production deployment.

Approve

With an understanding of the impact of the change(s), the request is either approved or denied.

Deploy Change

Prepare the target device(s) for deployment, deliver the change, and install/activate.

Confirm

Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure production systems are not disrupted.

Emergency Update

In some cases, including data breach lockdowns and imminent zero-day attacks, a change to the IDS/IPS signature/policy base must be made immediately. A process to short-cut the full change process should be established and documented, ensuring proper authorization for immediate changes and that they can be rolled back in case of unintended consequences.

Other Considerations

Health Monitoring and Maintenance

This phase involves ensuring the IDS/IPS devices are operational and secure. This includes monitoring the devices for availability and performance. If performance measured here is inadequate, this may drive a hardware upgrade. Additionally, software patches (for either functionality or security) are implemented in this phase. We’ve broken out this step due to the operational nature of the function. This doesn’t relate directly to security or compliance, but can be a significant management cost for these devices, and thus should be modeled separately.

Incident Response/Management

For this Quant project, we are considering the monitoring and management processes as separate, although many organizations (especially managed service providers) consider device management a superset of device monitoring.

So the IDS/IPS management process flow does not include incident investigation, response, validation, or management. Please refer to the monitoring process flow for those activities.


Network Security Operations Quant posts

  1. Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them.
  2. NSO Quant: Monitor Process Map
  3. NSO Quant: Manage Firewall Process Map

—Mike Rothman

Doing Well by Doing Good (and Protecting the Kids)

By Mike Rothman

My kids are getting more sophisticated in their computer usage. I was hoping I could put off the implementation of draconian security controls on their computers for a while. More because I’m lazy and it will dramatically increase the amount of time I spend supporting the in-house computers. But hope is not a strategy, my oldest will be 10 this year, and she is curious – so it’s time.

The first thing I did was configure the Mac’s Parental Controls on the kids’ machine. That was a big pile of fail. Locking down email pretty much put her out of business: all her email went to me, even when I whitelisted a recipient. The web whitelist didn’t work very well either. The time controls worked fine, but I don’t need those because the computer is downstairs. So I turned off Apple’s Parental Controls.

I did some research into the parental control options out there. There are commercial products that work pretty well, as well as some free stuff (seems Blue Coat’s K9 web filter is highly regarded) that is useful. But surprisingly enough I agree with Ed over at SecurityCurve, Symantec is doing a good job with the family security stuff.

They have not only a lot of educational material on their site for kids of all ages, but also have a service called Norton Online Family. It’s basically an agent you install on your PCs or Macs and it controls web browsing and email, and can even filter outbound traffic to make sure private information isn’t sent over the wire. You set the policies through an online service and can monitor activity through the web site.

It’s basically centralized security and management for all your family computers. That’s a pretty good idea. And from what I’ve seen it works well. I haven’t tightened the controls yet to the point of soliciting squeals from the constituents, but so far so good.

But it does raise the question of why a company like Symantec would offer something like this for free. It’s not like companies like NetNanny aren’t getting consumers to pay $40 for the same stuff. Ultimately it’s about both doing the right thing by eliminating any cost barrier to protecting kids online, and building the Big Yellow brand.

Consumers have a choice with their endpoint security. Yes, the yellow boxes catch your eye in the big box retailers, but ultimately the earlier they get to kids and imprint their brand onto malleable brains, the more likely they are to maintain a favorable place there. My kids see a big orange building and think Home Depot. Symantec hopes they see a yellow box and think Symantec and Internet Security. Though more likely will think: that’s the company that doesn’t let me surf pr0n.

As cynical as I am, I join Ed in applauding Symantec, Blue Coat, and all the other companies providing parental control technology without cost.

—Mike Rothman

DB Quant: Monitoring Metrics, Part 2, Audit

By Rich

Our next step in the Monitor phase is Audit. While monitoring is a real-time activity that typically requires third-party products, auditing typically uses native database features; DAM products also offer audit as a core function, but audit is available without them.

Our Audit process is:

  1. Scope
  2. Define
  3. Deploy
  4. Document and Report

Scope

Variable | Notes
Time to identify databases |
Time to determine audit requirements | Some of this assessment occurs in the Planning phase

Define

Variable | Notes
Time to select data collection method |
Time to identify users, objects, and transactions to monitor |
Time to specify filtering |
Cost of storage to support auditing |

Deploy

Variable | Notes
Time to set up and configure auditing |
Time to integrate with existing systems | e.g., SIEM, log management
Time to implement log file cleanup |

Document and Report

Variable | Notes
Time to document |
Time to define reports |
Time to generate reports | Ongoing, depending on reporting cycle

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Secure Metrics, Part 4, Shield
  31. DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring

—Rich

Wednesday, June 16, 2010

Take Our Data Security Survey & Win an iPad

By Rich

One of the biggest problems in security is that we rarely have a good sense of which controls actually improve security outcomes. This is especially true for newer areas like data security, filled with tools and controls that haven’t been as well tested or widely deployed as things like firewalls.

Thanks to all the great feedback you sent in on our drafts, we are happy to kick off our big data security survey. This one is a bit different than most of the others you’ve seen floating around, because we are focusing more on the perceived effectiveness of controls than on losses & incidents. We do have some incident-related questions, but only what we need to feed into the effectiveness results.

As with most of our surveys, we’ve set this one up so you can take it anonymously, and all the raw results (anonymized, in spreadsheet format) will be released after our analysis.

Since we have a sponsor for this one (Imperva), we actually have a little budget and will be giving away a 32GB WiFi iPad to a random participant. You don’t need to provide an email address to take the survey, but you do if you want the iPad. If we get a lot of respondents (say over 200) we’ll cough up for more iPads so the odds stay better than the lottery.

Click here to take the survey, and please spread the word. We designed it to only take 10-20 minutes. Even if you aren’t doing a lot with data security, we need your responses to balance the results.

With our surveys we also use something called a “registration code” to keep track of where people found out about it. We use this to get a sense of which social media channels people use. If you take the survey based on this post, please use “Securosis”. If you re-post this link, feel free to make up your own code and email it to us, and we will let you know how many people responded to your referral – get enough and we can give you a custom slice of the data.

Thanks! Our plan is to keep this open for a few weeks.

—Rich

DB Quant: Monitoring Metrics, Part 1, DAM

By Rich

Now that we’ve completed the Secure phase, it’s time to move on to metrics for the Monitor phase. We break this into two parts: Database Activity Monitoring and Auditing.

We initially defined the Database Activity Monitoring process as:

  1. Define
  2. Develop Policies
  3. Deploy
  4. Document

But based on feedback and some overlap with the Planning section, we are updating it to:

  1. Prepare
  2. Deploy
  3. Document
  4. Manage

Prepare

Variable | Notes
Cost of DAM tool |
Time to identify and profile monitored database | Identify the database to monitor and its configuration (e.g., DBMS, platform, connection methods)
Time to define rule set | Based on policies determined in the Planning phase. If this wasn’t done during planning, move those metrics into this phase.

Deploy

Variable | Notes
Time to deploy DAM tool |
Time to configure policies |
Time to test deployment |

Document

Variable | Notes
Time to document activation and deployed rules |
Time to record code changes in source control system |

Manage

Variable | Notes
Time to monitor for policy violations |
Time to handle incidents |
Time to tune policies |

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Secure Metrics, Part 4, Shield

—Rich

Incite 6/16/2010: Fenced in

By Mike Rothman

I spent last weekend at my 20th college reunion. I dutifully flew into Ithaca, NY to see many Cornell friends and (fraternity) brothers. It was a great trip, but I did have an experience that reminded me I’m no spring chicken any more.

Now that is some fireworks...

I guess I could consider the unbelievable hangover I had on Saturday morning as the first indication that I can’t behave like a 20-year-old and expect no consequences. But it gets better. We were closing da Palms on Saturday night and an undergrad called me over because he had about 3/4 of a pitcher left and graciously asked for some help. I scurried over (because who turns down free beer?) and we started chatting.

So he asked me, “When did you graduate?” I responded that I was Class of 1990. He looked at me cross-eyed and I figured he was just respecting my beer drinking prowess. Not so much. He then said, “Wow. I was born in 1989.” Uh. This kid was crapping his pants when I graduated from college. I literally have T-shirts that are older than this guy. That put everything into perspective: 20 years is a long time.

Of course the campus has changed a lot as well. Lots more buildings, but the biggest change was the ever-present fences. In the last year, there have been numerous suicides on campus. It’s actually very sad that kids today can’t deal with the pressure and have no perspective that whatever it is, and however hard it feels, it will pass. So they jump off any number of bridges overlooking Ithaca’s beautiful gorges. Splat.

So the Cornell administration figured one way to stop the jumpers is to put 10-foot-high fences on all the bridges. It now looks more like a detainment camp than an Ivy League university. That’s sad too. Cornell is one of the most beautiful places I’ve ever been. Now not so much. It’s still a campus, it just feels different.

Being the engineers many of my friends are, we tried to come up with better solutions. The ideas (after a number of beers, as I recall) ranged from a big airbag at the bottom of the gorge to a high-speed blower to keep the jumper suspended in the air (like those Vegas rides). We also talked about nets and other ideas, though of course none of them was really feasible.

I guess I’ll just have to become accustomed to the fences, and remember how things were. With the understanding that like my ability to recover quickly from a night of binge drinking, some things are destined to stay in the past.

– Mike.

Photo credits: “Fenced In” originally uploaded by Mike Rothman


Incite 4 U

  1. Getting to know your local Hoover – No, this isn’t about vacuums, but about getting to know your local law enforcement personnel. It seems the FBI is out there educating folks about how and when to get them involved in breaches. The Bureau is also taking a more proactive stance in sharing information with the financials and other corporates. All this is good stuff, and a key part of your incident response plan needs to be interfacing with law enforcement. So defining your organization’s rules of engagement sooner rather than later is a good thing. – MR

  2. String theory – Kelly Jackson Higgins had the most interesting post of the past week, covering Dan Kaminsky’s announcement of Interpolique. Actually, the story is mostly a pre-announcement for Dan’s Black Hat presentation in Vegas later this summer, but the teaser is intriguing. The tool that Kaminsky is describing would automatically format code – with what I assume is some type of pre-compiler – making it far more difficult to execute injected code via string variables. The only burden on the developer would be to define strings in such a way that the pre-compiler recognizes them and corrects the code prior to compilation/execution. That, and remembering to run the tool. This is different from something like Arxan, which acts like a linker after compilation. Philosophically both approaches sound like good ideas. But Interpolique should be simpler to implement and deploy, especially if Recursion Ventures can embed the technology into development environments. Dan is dead right that “… string-injection flaws are endemic to the Web, cross all languages …” – the real question is whether this stops injection attacks across all languages. I guess we have to wait until Black Hat to find out. – AL

  3. Hatfields and McCoys, my ass – Evidently there is a feud between Symantec and McAfee. I guess a VP shot another VP and now the clans have been at war for generations. Computer security changes fundamentally every couple of years. And fervent competition is always a good thing for customers. Prices go down and innovation goes up. But to say the AV market is a two-horse race seems wrong. To get back to the Coke vs. Pepsi analogy used in this story, in this market Dr. Pepper and 7Up each have a shot, because some customers decide they need a fundamentally different drink. Security is about much more than just the endpoint, and if the Hatfields or McCoys take their eyes off the Microsofts and the HPs, they will end up in the annals of history, like the DECs and the Wangs. – MR

  4. Speed may kill… – Sophos is hoping that the security industry has a short memory. They just announced a ‘Live Protection’ offering in their endpoint suite that uses a cloud service to push signature updates. Right, that’s not novel, but they are using speed as the differentiator. So you can get real-time updates. Of course that assumes you won’t have a Bad DAT(e) try to slip your devices a roofie that renders them useless. Needless to say, there is a bunch of marketing hocus-pocus going on here, since Sophos is also talking about their speed gain resulting from not pushing full signature updates, but doing some analysis in the cloud. Ah, calling Dr. Latency – this is something most other endpoint vendors are already doing. In any case, as our friends from McAfee showed a whole bunch of customers, sometimes it pays to wait a few hours before pushing a signature update. – MR

  5. Fail Whale II, the sequel – If you were on Twitter this morning, you were probably up to your eyeballs in AT&T FAIL on iPhone 4 pre-orders. Yes, the accusation that AT&T was deliberately killing its own site because they ran out of iPhones was funny but untrue! Most people simply could not make it through the session without some form of timeout or “service unavailable” message due to an overburdened (underprovisioned) system. But I was reading on Gizmodo about how user sessions were being compromised and you could randomly access other people’s accounts. With screen shots to prove it! As if AT&T’s reputation were not tarnished enough, their Internet capabilities are as bad as their cell coverage. Then AT&T released a support message saying “We have been unable to replicate the issue …” Awesome! Probably because the support techs were actually inside the firewall, rather than outside, where the thrashing load-balancing routers were spitting out customer data to anyone and everyone visiting the site. Their claim that “information displayed did not include call-detail records, social security numbers, or credit card information” is ridiculous. If they could not reproduce the issue, how could they know that information was not accessible, even if it wasn’t (supposed to be) shown on the pricing page? As with many of the nation’s banks, “too big to fail” needs to be supplanted with “too F’ed up to fix”. – AL

  6. Skills you can sell – Since I’ve stepped off the corporate ladder, I’m not overly concerned with career management. My primary concern is to make sure that Rich and Adrian don’t walk me to the (virtual) door. But almost everyone else needs to think about what’s next. Dark Reading has a good analysis of what kinds of skills are in demand now, including incident response, compliance, and security clearances for government work. Surprisingly enough application security isn’t at the top of the list, and given the skills gap between the number of qualified folks and the number of exposed apps, that’s strange to me. But I guess apathy isn’t a good hiring manager and clearly there is application security apathy in spades throughout the industry. – MR

—Mike Rothman

Tuesday, June 15, 2010

Need to know the time? Ask the consultant.

By Mike Rothman

You all know the story. If you need to know the time, ask the consultant, who will then proceed to tell you the time from your own watch. We all laugh, but there is a lot of truth in this joke – as there usually is. Consultants are a necessary evil for many of us. We don’t have the leeway to hire full-time employees (especially when Wall Street is still watching employee rolls like hawks), but we have too much work to do. So we bring in some temporary help to get stuff done.

I’ve been a consultant, and the Securosis business still involves some project-oriented work. The problem is that most organizations don’t utilize their consultants properly. My thinking was triggered by a post on infoseccynic.com from 2009 (hat tip to infosecisland) that discusses the most annoying consultants.

It’s easy to blame the consultant when things go wrong, and sometimes they are to blame. You tend to run into the dumb, lame, and lazy consultants, and sometimes you don’t realize the consultant is taking you for a ride until it’s too late. Each of the profiles mentioned in the annoying consultant post fits one of those categories. They waste time, they deliberate, and they ride the fence, because it usually results in more billable hours for them.

Having been on both sides of the fence with consultants, here are a few tips to get the most out of temporary resources.

  1. Scope tightly – Like it or not, consultants need to be told what to do. Most project managers suck at that, but then get pissed when the consultant doesn’t read their minds. Going into any project, have a tight scoping document and a process for changes.
  2. Fixed price – Contracting for a project at a fixed cost will save you a lot of heartburn. There is no incentive for the consultant to take more time if they are paid the same whether the project takes 5 hours or 10. And if you have specified a process for changes, then there are no surprises if/when the scope evolves.
  3. Demand accountability – This gets back to Management 101. Does the consultant do a weekly or daily status report (depending on the project)? Do you read them the riot act when they miss dates? Some consultants will take you for a ride, but only if you let them.
  4. Change the horse – Many project managers are scared to get rid of an underperforming consultant. One of the reasons you got temporary help in the first place is to avoid HR issues if it doesn’t work out. Make sure you have a clear ‘out’ clause in the contract, but if it isn’t working, don’t waste time deliberating – just move on.
  5. Pay for value – Some folks have very specialized skills and those skills are valuable. But the best folks in the world demand a premium because they’ll get the job done better and faster than someone else. Don’t be penny wise and pound foolish. Get the right person and let them do the work – you’ll save a lot in the long term.
  6. Be accountable – Ultimately the success (or failure) of any project lies at the feet of the project manager. It’s about proper scoping, lining up executive support, working the system, lining up the resources, and getting the most out of the project team. When things go wrong, ultimately it’s the project manager’s fault. Don’t point fingers – fix the problem.

So go back and look at the annoying consultant profiles mentioned in the post above. If any of those folks are on your project teams, man (or woman) up and take care of business. As I’ve said a zillion times over the years, I’m not in the excuses business. Neither are you. Consultants are a necessary evil, but they can be a tremendous resource if utilized effectively.

—Mike Rothman

NSO Quant: Manage Firewall Process Map

By Mike Rothman

After posting the monitor process map, which defined a high-level process for monitoring firewalls, IDS/IPS, and servers, we can now turn to the process for managing these devices. In this post we’ll tackle firewalls.

Remember, the Quant process depends on you to keep us honest. Our primary research and experience in the trenches gives us a good idea, but there are nuances to fighting these battles every day. So if something seems a bit funky, let us know in the comments.

Keep the philosophy of Quant in mind: the high-level process framework is intended to cover all the tasks involved. That doesn’t mean you need to do everything listed, but this should be a fairly exhaustive list. Individual organizations can then pick and choose the appropriate steps.

When contrasting the monitor process with management, the first thing that becomes apparent is that policies drive day-to-day use of the device(s), but as soon as you need to make a change, a heavy process orientation kicks in. Why? Because a mistaken or unauthorized change can have severe ramifications, like exposing critical data to the entire Internet. Right, that’s bad. So there are a lot of checks and balances in the change management process to ensure any changes are authorized and tested, and won’t create a ripple effect of mayhem.

Policy Management

In this phase, we define what ports, protocols, and (increasingly) applications are allowed to traverse the firewall. Depending on the nature of what is protected and the sophistication of the firewall, the policies may include source and destination addresses, application behavior, and user entitlements.

Policy Review

At times a firewall rule set resembles a junk closet. There are lots of things in there, but no one can quite remember what everything is for or who it belongs to. So it is a best practice to periodically review firewall policy and prune rules that are obsolete, duplicative, risky (providing unwanted exposure), or otherwise unneeded. Catalysts for policy review may include signature updates (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy updates resulting from the operational management of the device (the change management process described below).
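As a rough illustration of what that pruning can look like mechanically, here is a minimal sketch. The rule set, the field layout, and the `covers` check are hypothetical simplifications – real firewall rules carry many more fields (interfaces, protocols, schedules, and so on):

```python
from ipaddress import ip_network

# Hypothetical rule set: (source CIDR, destination CIDR, port, action),
# evaluated top-down, first match wins.
rules = [
    ("0.0.0.0/0",     "10.0.0.0/8",  443, "allow"),
    ("192.168.1.0/24", "10.0.0.0/8",  443, "allow"),  # shadowed by rule 0
    ("10.1.0.0/16",   "10.2.0.0/16", 22,  "allow"),
    ("10.1.0.0/16",   "10.2.0.0/16", 22,  "allow"),   # exact duplicate of rule 2
]

def covers(broad, narrow):
    """True if `broad` matches every packet that `narrow` matches."""
    return (ip_network(narrow[0]).subnet_of(ip_network(broad[0]))
            and ip_network(narrow[1]).subnet_of(ip_network(broad[1]))
            and broad[2] == narrow[2]
            and broad[3] == narrow[3])

# A rule is redundant if some earlier rule already covers it.
redundant = [j for j, rule in enumerate(rules)
             for i in range(j)
             if covers(rules[i], rule)]
print(redundant)  # → [1, 3]
```

Exact duplicates and rules fully shadowed by an earlier, broader rule both surface as candidates for removal during a policy review.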

Define/Update Policies & Rules

(Chart: Policy Hierarchy)

This entails defining the depth and breadth of the firewall policies – including which ports, protocols, and applications are allowed to traverse the firewall. Time-limited policies may also be deployed, to support short-term access for specific applications or user communities. Additionally, the policies vary depending on primary use case, which might include perimeter deployment or network segmentation. Logging, alerting, and reporting policies are also defined in this step.

It’s important here to consider the hierarchy of policies that will be implemented on the devices. The chart at right shows a sample hierarchy with organizational policies at the highest level, which may then be supplemented (or even supplanted) by business unit or geographic policies. Those feed the specific policies and/or rules implemented at each location, which then filter down to a particular device. Designing a hierarchy to properly leverage policy inheritance can either dramatically increase or decrease the complexity of the rule set.
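One way to picture policy inheritance is as a simple top-down merge, where each more specific layer overrides the settings of the layer above it. A hypothetical sketch – the layer contents and setting names are made up for illustration, not any product’s policy model:

```python
# Hypothetical policy layers, most general first; later (more specific)
# layers override or extend earlier ones, so the deepest setting wins.
layers = [
    {"default_action": "deny", "log_denied": True},  # organization
    {"allow_ports": [80, 443]},                      # business unit / geography
    {"allow_ports": [80, 443, 8443]},                # location
    {"log_denied": False},                           # individual device
]

def effective_policy(layers):
    """Merge policy layers top-down; the most specific setting wins."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

print(effective_policy(layers))
```

In this toy model a setting defined once at the organizational level is inherited everywhere, which is exactly how a well-designed hierarchy shrinks the per-device rule set; conversely, scattering overrides across every layer inflates it.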

Initial deployment of the firewall policies should include a Q/A process to ensure none of the rules impacts the ability of critical applications to communicate either internally or externally.

Document Policy Changes

As the planning stage is an ongoing process, documentation is important for operational and compliance purposes. This step lists and details whatever changes have been made to the policies.

Change Management

This phase encompasses additions, deletions, and other changes to the firewall rules.

Change Request

Based on the activities in the policy management phase, some type of policy/rule change will be requested.

Authorize

Authorization involves ensuring the requestor is allowed to request the change, as well as determining the relative priority of the change to slot into an appropriate change window. Prioritize based on the nature of the policy update and potential risk of the attack occurring. Then build out a deployment schedule based on your prioritization, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if the change involves downtime or changes to application usage.

Test & Assess Impact

Develop test criteria, perform any required testing, analyze the results, and approve the rule change for release once it meets your requirements. Testing should include monitoring the operation and performance impact of the change on the device. Changes may be implemented in “log-only” mode to understand their impact before approving them for production deployment.

Approve

With an understanding of the impact of the change(s), the request is either approved or denied.

Deploy Change

Prepare the target device(s) for deployment, deliver the change, and install/activate.

Confirm

Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure production systems are not disrupted.

Emergency Update

In some cases, including data breach lockdowns and imminent zero-day attacks, a change to the firewall policy rules must be made immediately. A process to short-cut the full change process should be established and documented, ensuring proper authorization for immediate changes and that they can be rolled back in case of unintended consequences.
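The change management phase above – request, authorize, test, approve, deploy, confirm, plus the emergency short-cut – can be pictured as a small state machine. This is a hypothetical sketch; the state names and transitions are illustrative, not any product’s workflow:

```python
# Legal transitions for a change request. Normal changes walk every
# step; the emergency path skips straight to deployment but must
# still be confirmed (and remains eligible for rollback).
ALLOWED = {
    "requested":  {"authorized"},
    "authorized": {"tested"},
    "tested":     {"approved", "denied"},
    "approved":   {"deployed"},
    "deployed":   {"confirmed", "rolled_back"},
}

def advance(state, next_state, emergency=False):
    """Move a change request to its next state, enforcing the workflow."""
    if emergency and state == "requested" and next_state == "deployed":
        return next_state  # documented emergency short-cut
    if next_state not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {next_state}")
    return next_state

# A normal change walks the full sequence:
s = "requested"
for step in ["authorized", "tested", "approved", "deployed", "confirmed"]:
    s = advance(s, step)
print(s)  # → confirmed

# An emergency change skips authorization and testing, but not confirmation:
e = advance("requested", "deployed", emergency=True)
e = advance(e, "confirmed")
```

Note that the emergency path rejoins the normal flow at the deployed state, mirroring the requirement that emergency changes still be confirmed, documented, and reversible.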

Other Considerations

Health Monitoring and Maintenance

This phase involves ensuring the firewalls are operational and secure. This includes monitoring the devices for availability and performance. If performance measured here is inadequate, this may drive a hardware upgrade. Additionally, software patches (for either functionality or security) are implemented in this phase. We’ve broken out this step due to the operational nature of the function. This doesn’t relate directly to security or compliance, but can be a significant management cost for these devices, and thus should be modeled separately.

Incident Response/Management

For this Quant project, we are considering the monitoring and management processes as separate, although many organizations (especially managed service providers) consider device management a superset of device monitoring.

So the firewall management process flow does not include incident investigation, response, validation, or management. Please refer to the monitoring process flow for those activities.

We are looking forward to your comments and feedback. Fire away.


Network Security Operations Quant posts

  1. Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them.
  2. NSO Quant: Monitor Process Map

—Mike Rothman

Top 5 Security Tips for Small Business

By Rich

We in the security industry tend to lump small and medium businesses together into “SMB”, but there are massive differences between a 20-person retail outlet and even a 100-person operation. These suggestions are specifically for small businesses with limited resources, based on everything we know about the latest threats and security defenses.

The following advice is not conditional – there really isn’t any safe middle ground, and these recommendations aren’t very expensive. These are designed to limit the chance you will be hit with attacks that compromise your finances or ability to continue business operations, and we’re ignoring everything else:

  1. Update all your computers to the latest operating systems and web browsers – this is Windows 7 or Mac OS X 10.6 as of this writing. On Windows, use at least Internet Explorer 8 or Firefox 3.6 (Firefox isn’t necessarily any more secure than the latest versions of IE). On Macs, use Firefox 3.6. Most small businesses struggle with keeping malware off their computers, and the latest operating systems are far more secure than earlier versions. Windows XP is nearly 10 years old at this point – odds are most of your cars are newer than that.
  2. Turn on automatic updates (Windows Update, or Software Update on Mac) and set them to check and automatically install patches daily. If this breaks software you need, find an alternative program rather than turning off updates. Keeping your system patched is your best security defense, because most attacks exploit known vulnerabilities. But since those vulnerabilities are converted to attacks within hours of becoming public (when the patch is released, if not earlier), you need to patch as quickly as possible.
  3. Use a dedicated computer for your online banking and financial software. Never check email on this system. Never use it to browse any Web site except your bank’s. Never install any applications other than your financial application. You can do this by setting up a non-administrative user account and then setting parental controls to restrict which Web sites it can visit. Cheap computers run about $200 for a new PC or $700 for a new Mac mini, and this blocks the single most common way bad guys steal money from small businesses: compromising a machine, sniffing your online banking credentials with a software key logger, and then using them to transfer funds directly to money mules. This is a better investment than any antivirus program.
  4. Arrange with your bank to require in-person or phone confirmation for any transfers over a certain amount, and check your account daily. Yes, “react faster” applies here as well. The sooner you learn about an attempt to move money from your account, the more likely you’ll be able to stop it. Remember that business accounts do not have the same fraud protections as consumer accounts, and if someone transfers your money out because they broke into your online banking account, it is very unlikely you will ever recover the funds.
  5. Buy backup software that supports both local and remote backups, like CrashPlan. Backup locally to hard drives, and keep at least one backup for any major systems off-site but accessible. Then subscribe to the online backup service for any critical business files. Remember that online backups are slow and take a long time to restore, which is why you want something closer to home. Joe Kissell’s Take Control of Mac OS X Backups is a good resource for developing your backup strategy, even if you are on Windows 7 (which includes some built-in backup features). Hard drives aren’t designed to last more than a few years, and all sorts of mistakes can destroy your data.

Those are my top 5, but here are a few more:

  • Turn on the firewalls on all your computers. They can’t stop all attacks, but do reduce some risks, such as if another computer on the network (which might just mean in the same coffee shop) is compromised by bad guys, or someone connects an infected computer (like a personal laptop) to the network.
  • Have employees use non-administrator accounts (standard users) if at all possible. This also helps limit the chances of those computers being exploited, and if they are, will limit the exploitation.
  • If you have shared computers, use non-administrator accounts and turn on parental controls to restrict what can be installed on them. If possible, don’t even let them browse the web or check email (this really depends on the kind of business you have… if employees complain, buy an iPad or spare computer that isn’t needed for business, and isn’t tied to any other computer). Most exploits today are through email, web browsing, and infected USB devices – this helps with all three.
  • Use an email service that filters spam and viruses before they actually reach your account.
  • If you accept payments/credit cards, use a service and make sure they can document that their setup is PCI compliant, that card numbers are encrypted, and that any remote access they use for support has a unique username and password that is changed every 90 days. Put those requirements into the contract. Failing to take these precautions makes a breach much more likely.
  • Install antivirus from a major vendor (if you are on Windows). There is a reason this is last on the list – you shouldn’t even think about this before doing everything else above.

—Rich

Monday, June 14, 2010

If You Had a 3G iPad Before June 9, Get a New SIM

By Rich

If you keep up with the security news at all, you know that on June 9th the email addresses and device ICC-IDs of at least 114,000 3G iPad subscribers were exposed.

Leaving aside any of the hype around disclosure, FBI investigations, and bad PR, here are the important bits:

  1. We don’t know if bad guys got their hands on this information, but it is safest to assume they did.
  2. For most of you, having your email address potentially exposed isn’t a big deal. It might be a problem for some of the famous and .gov types on the list.
  3. The ICC-ID is the unique code assigned to the SIM card. This isn’t necessarily tied to your phone number, but…
  4. It turns out there are trivial ways to convert the ICC-ID into the IMSI here in the US, according to Chris Paget (someone who knows about these things).
  5. The IMSI is the main identifier your mobile operator uses to identify your phone, and is tied to your phone number.
  6. If you know an IMSI, and you are a hacker, it greatly aids everything from location tracking to call interception. This is a non-trivial problem, especially for anyone who might be a target of an experienced attacker… like all you .gov types.
  7. You don’t make phone calls on your iPad, but any other 3G data is potentially exposed, as is your location.
  8. Everything you need to know is in this presentation from the Source Boston conference by Nick DePetrillo and Don Bailey: http://www.sourceconference.com/bos10pubs/carmen.pdf

Realistically, very few iPad 3G owners will be subject to these kinds of attacks, even if bad guys accessed the information, but that doesn’t matter. Replacing the SIM card is an easy fix, and I suggest you call AT&T up and request a new one.

—Rich

Friday, June 11, 2010

Insider Threat Alive and Well

By Mike Rothman

Is it just me, or has the term “insider threat” disappeared from the security marketing vernacular? Clearly insiders are still doing their thing. Check out a recent example of insider fraud at Bank of America. The perpetrator was a phone technical support rep, who would steal account records when someone called for help. Awesome.

Of course, the guy got caught. Evidently trying to sell private sensitive information to an undercover FBI agent is risky. It is good to see law enforcement getting ahead of some issues, but I suspect for every one of these happy endings (since no customers actually lost anything) there are hundreds who get away with it. It’s a good idea to closely monitor your personal banking and credit accounts, and make sure you have an identity theft response plan. Unfortunately it’s not a question of if, but when, it happens to you.

Let’s put our corporate security hats back on and remember the reality of our situation. Some attacks cannot be defended against – not proactively, anyway. This crime was committed by a trusted employee with access to sensitive customer data. BofA could not do business without giving folks access to sensitive data. So locking down the data isn’t an answer. It doesn’t seem he used a USB stick or any other technical device to exfiltrate the data, so there isn’t a specific technical control that would have made a difference.

No product can defend against an insider with access and a notepad. The good news is that insiders with notepads don’t scale very well, but that gets back to risk management and spending wisely to protect the most valuable assets from the most likely attack vectors. So even though the industry isn’t really talking about insider threats much anymore (we’ve moved on to more relevant topics like cloud security), fraud from insiders is still happening and always will. Always remember there is no 100% security, so revisit that incident response plan often.

—Mike Rothman