Friday, February 27, 2015

Summary: You’re a Spy, not a Warrior

By Rich

Rich here.

These days it is hard to swing a cyberstick without hearing a cybergasp of cyberstration at the inevitable cyberbuse of the word “cyber”.

To be clear, I think ‘cybersecurity’ is not only an acceptable term, but a particularly suitable one. It is easy to understand, and it covers aspects of security that the term “IT security” doesn’t quite capture. There are entire verticals which think of IT security as “the stuff in the office” and use other terms for all the other technology that powers their operations.

But snapping cyber onto the front of another word can be misleading. Take, for example, cyberwar and cyberwarrior.

We are, very clearly, engaged in an ongoing long-term conflict with a myriad of threat actors. And I think there is something that qualifies as cyberwar, and even cyberwarriors. Believe it or not, some people with that skill set work in-theater, under arms, and at risk.

But when you dig in this is more a spy’s game than a warrior’s battlefield. Defensive security professionals are engaged more in counterintelligence and espionage than violent conflict, especially because we can rarely definitively attribute attacks or strike back.

Personally, as Han Solo once said, “Bring ‘em on, I’d prefer a straight fight to all this sneaking around”, but it isn’t actually up to me. So I find I need to think as much in terms of counterintelligence as straight-up defense. That’s why I love some of the concepts in active defense, such as intrusion deception – because we can design traps and misdirection for attackers, giving ourselves a better chance to detect and contain them.

Admit it – you love spy movies. And while you probably won’t get the girl in the end (that’s a joke for whoever saw Kingsman), and you aren’t saving the world, you also probably don’t have to worry about someone sticking bamboo under your fingernails.

Until audit season.

I have some family in town and ran out of time to do a proper summary, so I shortened things this week.

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

—Rich

Wednesday, February 25, 2015

Cracking the Confusion: Encryption Decision Tree

By Rich, Adrian Lane

This is the final post in this series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post, and find the other posts under “related posts” in full article view.

Choosing the Best Option

There is no way to fully cover all the myriad factors in picking a specific encryption option in a (relatively) short paper like this, so we compiled a visual decision tree to at least get you into the right bucket.

Here are a few notes on the decision tree.

  • This isn’t exhaustive but should get you looking at the right set of technologies.
  • In all cases you will want secure external key management.
  • In general, for discrete data you want to encrypt as high in the stack as possible. When you don’t need as much separation of duties, encrypting lower may be easier and more cost-effective.
  • For both database and cloud encryption, in a few cases we recommend you encrypt in the application instead.
  • When we list multiple options the order of preference is top to bottom.
  • As you use this tree keep the Three Laws in mind, since they help guide the security value of your decision.

Encryption Decision Tree

Once you understand how encryption systems work, the different layers where you can encrypt, and how they combine to improve security (or not), it’s usually relatively easy to pick the right approach.

The hard part is to then architect and implement the encryption technology and integrate it into your data center, application, or cloud service. That’s where our other encryption research can be valuable, and the following reports should help:

Rich, Adrian Lane

Ticker Symbol: Hack - *Updated*

By Gunnar

There is a ticker symbol HACK that tracks a group of publicly traded “Cyber Security” firms. Given how hot everything ‘Cyber’ is, HACK may do just fine – who knows? But perhaps one for breached companies (BRCH?) would be better. For you security geeks out there who love to talk about the cost of breaches, let’s take a look at the stock prices of several big-name firms which have been breached:

  • Sony (breached 11/24/14): stock +28.3% vs. S&P 500 +2.2% over the same period
  • Home Depot (breached 9/9/14): stock +31.3% vs. S&P 500 +6.4%
  • Target (breached 12/19/13): stock +23.8% vs. S&P 500 +16.9%
  • Heartland (breached 1/20/09): stock +250.1% vs. S&P 500 +162.7%
  • Apple (breached 9/2/14): stock +28% vs. S&P 500 +6%
This is a small sample of companies, but their stock values have each substantially outperformed the S&P 500 (which has been on a tear in the last year or so) from the time of their breaches through now. “How long until activist investors like Icahn pound the table demanding more dividends, stock buybacks – and would it kill you to have a breach?” Food for thought.
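
If you want to reproduce the comparison, the arithmetic is simple – here is a quick Python sketch using the figures above (outperformance is just the stock’s gain minus the index’s gain over the same window):

```python
# The outperformance implied by the list above: each company's stock gain
# since its breach date minus the S&P 500's gain over the same period.
returns = {
    "Sony": (28.3, 2.2),
    "Home Depot": (31.3, 6.4),
    "Target": (23.8, 16.9),
    "Heartland": (250.1, 162.7),
    "Apple": (28.0, 6.0),
}

for company, (stock, index) in returns.items():
    print(f"{company}: +{stock - index:.1f} points over the S&P 500")
```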

—Gunnar

Friday, February 20, 2015

Summary: Three Mini Gadget Reviews… and a Big Week for Security Fails

By Rich

Rich here,

Before I get into the cold open for this week, the past few days have been pretty nasty for privacy, security, and the digital supply chain. I will have a post on that up soon, but you can skip to the Top News section to catch the main stories. They are essential reading this week, and we don’t say that often.

I am a ridiculous techno-addict, and have been my entire life. I suspect I inherited it from my father, who brought home an early microwave (likely responsible for my hair loss), a videotape deck (where I watched Star Wars before VHS was on the market, the year the movie came out), and even a reel-to-reel videotape camera (black and white) I used for my own directorial debuts… often featuring my Star Wars figures.

Gadgets have always been one of my vices, but as I have grown older they have not only gotten cheaper, but also cheaper than what many of my 40+-year-old peers spend money on (cars, extra houses, extramarital partners for said houses, etc.). That said, over time I have become a bit more discerning about where I drop money, as I have come to better understand my own tastes and needs… and as my kids killed any semblance of hobby time.

For this week’s Summary I thought I’d highlight a few of my current favorite gadgets. This isn’t even close to exhaustive – just a few current favorites.

Logitech Harmony Ultimate Home + Hub – I don’t actually have all that crazy a TV setup, but it’s just complex enough that I wanted a universal remote. We switch a ton between our Apple TV and TiVo Roamio, and our kids are so young that regular remotes are a mess.

The Harmony Ultimate is exactly what the name says. The remote itself is relatively small and has an adaptive touch screen that configures itself to the activity you are in. While it has an infrared transmitter like all remotes, it really uses RF to communicate to the Hub, which is located in our AV cabinet under the TV, and includes an IR blaster to hit all the components.

This setup brings three key advantages. First, you don’t need to worry about where to point the remote. My kids would always lose aim in the middle of a multi-component command (something as simple as turning things on or off) and get frustrated. That’s no longer an issue. Second, the touch screen itself makes a cleaner remote with fewer buttons. You can prioritize the ones you use on the display, but still access all the obscure ones. Finally, the Hub is network enabled, and pairs with an iOS app. If I can’t find the remote I use my phone and everything looks and works the same. Because children.

I have used earlier Logitech remotes and this is the first one that really delivers on all the promises. It is pricy, but futureproof, and even integrates with home automation products. I also got $80 off during a random Amazon sale. There isn’t anything else like this on the market, and I don’t regret it. We used our last Harmony remote for 7 years with our main TV, and it’s now in another room, so we got our money’s worth.

Garmin Forerunner 920XT – I’m a triathlete. Not a great one by any means, but that’s my sport of choice these days. The Garmin 920XT was my holiday present this year, and it changed how I think about smartwatches.

First, as a fitness tool, it is ridiculous. Aside from the GPS (and GLONASS – thank you, Russian friends), it connects with a ton of sensors, works as a basic smartwatch, and even includes an accelerometer – not only for step tracking, but also run tracking on treadmills and swim stroke tracking in pools.

I didn’t expect to wear it every day but I do. Even getting simple notifications on my wrist means less pulling my phone out of my pocket, and I don’t worry about missing calls when I chase the kids during the work day and leave my phone on my desk. Yes, I’ll switch to an Apple Watch day-to-day when it comes out, but I went on a 17-mile run during working hours this week, and knowing I didn’t miss anything important was liberating.

The 920XT is insane as a fitness tool. It will estimate your VO2 Max and predict race performance based on heart rate variability. It pulls in more metrics than you knew existed (or can use, but it makes us geeks happy). You can expand it with Garmin’s new ConnectIQ app platform. I added a half-marathon race predictor for my last race, and it helped me set a new PR – I am not great at math in the middle of a race. It walks me through structured workouts, then automatically uploads everything via my phone or home WiFi when I’m done, which then syncs to Strava and TrainingPeaks.

If you aren’t a multisport athlete I’d check out the Fenix 3 or Vivoactive. They both support ConnectIQ.

Neato XV-11 Robotic Vacuum – With multiple cats and allergies I was an early Roomba user. It worked well but had some key annoyances. It nearly never found its base to recharge, I’d have to remember to use the “virtual wall” infrared barriers to keep it in a room, and it was a royal pain to clean.

Then I switched to the Neato XV-11 (an older model). It uses a stronger vacuum than the Roomba, is much easier to clean, maps rooms with LIDAR (laser radar), and nearly always finds its base to recharge. It is also much easier to schedule.

The Neato will scan a room, clean until the battery gets low, go back to base, recharge, and then start out again up to 3 times (when it’s running on a schedule). It detects doorways automatically, stays in the room you put it in, and will only hit the next room when it is done.

On the downside I cannot use it on a schedule any more because my cats vomit too much and I don’t want to gum it up. But I still vacuum several more times a week than I would by hand – I pull it out, scan the room for cat puke, move a few dirty socks, and let ‘er rip.

That’s it for this week. Three items I use nearly every day that have nothing to do with Securosis or Apple.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Adrian Lane: Reverse Engineering Apple’s Lightning Connector. To me hacking is about understanding how stuff really works, and modifying it to suit your needs. For good usually, but I understand there are two sides to that coin. And that’s one of the reasons I love Hack-a-day and articles like this – figuring out how the Lightning connector works.
  • Mortman: HTTP/2 is out!
  • Mike: Emerging Products: Threat Intelligence Group Test. This is why we can’t have nice things. I’m old, but I remember when product reviews were actually helpful. At least they provide a short list of products to look at. So there’s that…
  • Dave Lewis: Superfish. Let us know if any of your corporate Lenovos came with this, but we assume all corporate laptops are wiped and get a standard image installed.
  • Rich: How Spies Stole the Keys to the Encryption Castle. As I keep hinting, I need to write this all up tomorrow.

Research Reports and Presentations

Top News and Posts

As I mentioned in the opening, there are some major privacy and security stories this week. Dave Lewis highlighted Superfish, and here are the other main stories you need to read:

And some other stories:

Blog Comment of the Week

This week’s best comment goes to will, in response to Some days, I think we are screwed.

People tend to be stupid, so the smart ones must protect them from themselves :)

—Rich

Cracking the Confusion: Top Encryption Use Cases

By Rich

This is the sixth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Top Encryption Use Cases

Encryption, like most security, is only adopted in response to a business need. It may be a need to keep corporate data secret, protect customer privacy, ensure data integrity, or satisfy a compliance mandate that requires data protection – but there is always a motivating factor driving companies to encrypt. The principal use cases have shifted over the years, but the following remain the most common.

Databases

Protecting data stored in databases is a top use case across mainframes, relational, and NoSQL databases. The motivation may be to combat data breaches, keep administrators honest, support multi-tenancy, satisfy contractual obligations, or even comply with state privacy laws. Surprisingly, database encryption is a relatively new phenomenon. Database administrators historically viewed encryption as carrying unacceptable performance overhead, and data security professionals viewed it as a redundant control – only effective if firewalls, identity management, and other security measures all failed. Only recently has the steady stream of data breaches shattered this false impression. Combined with continued performance advancements, multiple deployment options, and general platform maturity, database encryption no longer carries a stigma. Today data sprawls across hundreds of internal databases, test systems, and third-party service providers; so organizations use a mixture of encryption, tokenization, and data masking to tailor protection to each potential threat – regardless of where data is moved and used.

The two best options for encrypting a database are encrypting data fields in the application before sending them to the database, and Transparent Database Encryption (TDE). Some databases support field-level encryption, but the primary driver for database encryption is usually to restrict database administrators from seeing specific data, so organizations often cannot rely on the database’s own encryption capabilities – which those same administrators control.

TDE (via the database feature or an external tool) is best to protect this data in storage. It is especially useful if you need to encrypt a lot of data and for legacy applications where adding field encryption isn’t reasonable.
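
As a rough illustration of the first option – encrypting a field in the application before it ever reaches the database – here is a minimal Python sketch using the cryptography library’s Fernet construction and an in-memory SQLite table. The table and column names are made up, and in a real deployment the key would come from an external key manager rather than being generated inside the application.

```python
# A minimal sketch of application-layer field encryption (illustrative table
# and column names). Key handling is simplified: in production the key would
# be fetched from an external key manager, not generated in the application.
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # stand-in for a key manager lookup
engine = Fernet(key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, ssn BLOB)")

# Encrypt in the application, so the database (and its administrators)
# only ever see ciphertext.
ciphertext = engine.encrypt(b"123-45-6789")
conn.execute("INSERT INTO customers (ssn) VALUES (?)", (ciphertext,))

# Decrypt only on an authorized code path.
row = conn.execute("SELECT ssn FROM customers WHERE id = 1").fetchone()
print(engine.decrypt(row[0]).decode())
```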

For more information see Understanding and Selecting a Database Encryption or Tokenization Solution.

Cloud Storage

Encryption is the main data security control for cloud computing. It enables organizations to maintain control over data security, even in multitenant environments. If you encrypt data, and control the key, even your cloud provider cannot access it.

Unfortunately cloud encryption is generally messy for SaaS, but there are decent options to integrate encryption into PaaS, and excellent ones for IaaS. The most common use cases are encrypting storage volumes associated with applications, encrypting application data, and encrypting data in object storage. Some cloud providers are even adding options for customers to manage their own encryption keys, while the provider encrypts and decrypts the data within the platform (we call this Bring Your Own Key).
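
To illustrate the “you control the key, so the provider can’t read the data” principle, here is a small Python sketch of client-side envelope encryption before upload: each object gets its own data key, and only the wrapped data key travels with the ciphertext. This is a simplified illustration using the cryptography library, not any particular provider’s BYOK implementation (which typically performs the encryption inside the platform).

```python
# Client-side envelope encryption sketch: each object gets a fresh data key,
# the data key is wrapped with a customer-held master key, and only the
# wrapped key is stored alongside the ciphertext. The upload itself is
# omitted; the returned dict stands in for the stored object.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()       # customer-held key (BYOK-style)
master = Fernet(master_key)

def encrypt_for_upload(plaintext: bytes) -> dict:
    data_key = Fernet.generate_key()     # one-time key for this object
    return {
        "object": Fernet(data_key).encrypt(plaintext),
        "wrapped_key": master.encrypt(data_key),
    }

def decrypt_after_download(blob: dict) -> bytes:
    data_key = master.decrypt(blob["wrapped_key"])
    return Fernet(data_key).decrypt(blob["object"])

blob = encrypt_for_upload(b"contents of quarterly-report.csv")
print(decrypt_after_download(blob))
```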

For details see our paper on Defending Cloud Data with Infrastructure Encryption.

Compliance

Compliance is a principal driver of encryption and tokenization sales. Some obligations, such as PCI, explicitly require it, while others provide a “safe harbor” provision in case encrypted data is lost. Typical policies cover IT administrators accessing data, users issuing ad hoc queries, retrieval of “too much” information, or examination of restricted data elements such as credit card numbers. So compliance controls typically focus on privileged user entitlements (what users can access), segregation of duties (so admins cannot read sensitive data), and the security of data as it moves between application and database instances.

These policies are typically enforced by the applications that process user requests, limiting access (decryption) according to policy. Policies can be as simple as allowing only certain users to see certain types of data. More complicated policies build in fraud deterrence, limit how many records specific users are allowed to see, and shut off access entirely in response to suspicious user behavior. In other use cases, where companies move sensitive data to third-party systems they do not control, data masking and tokenization have become popular choices for ensuring sensitive data does not leave the company at all.

Payments

The payments use case deserves special mention; although commonly viewed as an offshoot of compliance, it is more of a backlash – an attempt to avoid compliance requirements altogether. Before data breaches became routine it was common to copy payment data (account numbers and credit card numbers) anywhere it could possibly be used, but now each copy carries the burden of security and oversight, which costs money. Lots of it. In most cases the payment data was not required, but the usage patterns built around it became so entrenched that removal would break applications. For example, merchants do not need to store – or even see – customer credit card numbers to accept payment, but many of their IT systems were designed around credit card numbers.

In the payment use case the idea is to remove payment data wherever possible – and with it the threat of a data breach – thereby reducing audit responsibility and cost. Here tokenization, format-preserving encryption, and masking have come into their own: they remove sensitive payment data, and with it most of the need for security and compliance overhead. Industry organizations like PCI and regulatory bodies have only recently embraced these technical approaches for compliance scope reduction, and more recent variants (including Apple Pay merchant tokens) also improve user data privacy.

Applications

Every company depends on applications to one degree or another, and these applications process data critical to the business. Most applications, be they ‘web’ or ‘enterprise’, leverage encryption. Encryption capabilities may be embedded in the application or bundled with the underlying file system, storage array, or relational database system.

Application encryption is selected when fine-grained control is needed, to encrypt select data elements, and to only decrypt information as appropriate for the application – not merely because recognized credentials were provided. This granularity of control comes at a price – it is more difficult to implement, and changes in usage policies may require application code changes, followed by extensive validation and testing.

The operational costs can be steep, but this level of security is essential for some applications – particularly financial and payment applications. For other types of applications, simply protecting data “at rest” (typically in files or databases) with transparent encryption at the file or database layer is generally sufficient.

—Rich

Wednesday, February 18, 2015

Cracking the Confusion: Additional Platform Features and Options

By Rich

This is the fifth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Additional Platform Features and Options

The encryption engine and the key store are the major functional pieces in any encryption platform, but any data center encryption solution also includes supporting systems that are important both for overall management and for tailoring the solution to fit within your application infrastructure. We frequently see the following major features and options to help support customer needs:

Central Management

For enterprise-class data center encryption you need a central location to define both what data to secure and key management policies. So management tools provide a window onto what data is encrypted and a place to set usage policies for cryptographic keys. You can think of this as governance of the entire crypto ecosystem – including key rotation policies, integration with identity management, and IT administrator authorization. Some products even provide the ability to manage remote cryptographic engines and automatically apply encryption as data is discovered. Management interfaces have evolved to enable both security and IT management to set policy without needing cryptographic expertise. The larger and more complex your environment, the more critical central management becomes, to control your environment without making it a full-time job.

Format Preserving Encryption

Encryption protects data by scrambling it into an unreadable state. Format Preserving Encryption (FPE) also scrambles data into an unreadable state, but retains the format of the original data. For example if you use FPE to encrypt a 9-digit Social Security Number, the encrypted result would be 9 digits as well. All commercially available FPE tools use variations of AES encryption, which remains nearly impossible to break, so the original data cannot be recovered without the key. The principal reason to use FPE is to avoid re-coding applications and re-structuring databases to accommodate encrypted (binary) data. Both tokenization and FPE offer this advantage. But encryption obfuscates sensitive information, while tokenization removes it entirely to another location. Should you need to propagate copies of sensitive data while still controlling occasional access, FPE is a good option. Keep in mind that FPE is still encryption, so sensitive data is still present.

Tokenization

Tokenization is a method of replacing sensitive data with non-sensitive placeholders: tokens. Tokens are created to look exactly like the values they replace, retaining both format and data type. Tokens are typically ‘random’ values that look like the original data but lack intrinsic value. For example, a token that looks like a credit card number cannot be used as a credit card to submit financial transactions. Its only value is as a reference to the original value stored in the token server that created and issued the token. Tokens are usually swapped in for sensitive data stored in relational databases and files, allowing applications to continue to function without changes, while removing the risk of a data breach. Tokens may even include elements of the original value to facilitate processing. Tokens may be created from ‘codebooks’ or one time pads; these tokens are still random but retain a mathematical relationship to the original, blurring the line between random numbers and FPE. Tokenization has become a very popular, and effective, means of reducing the exposure of sensitive data.
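
A toy token vault in Python shows the core idea – the token preserves the format, while the real value lives only in the vault. Real token servers add durable storage, access control, auditing, and guarantees that tokens can never validate as real card numbers; this sketch skips all of that.

```python
# Toy token vault: swap a card number for a random token with the same
# length and character set, and keep the mapping in a (notionally) highly
# secured store.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}                                  # token -> original

    def tokenize(self, pan: str) -> str:
        token = "".join(secrets.choice("0123456789") for _ in pan)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]                 # policy-gated in practice

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                  # safe to store wherever the card number lived
print(vault.detokenize(token))
```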

Masking

Like tokenization, masking replaces sensitive data with similar non-sensitive values. And like tokenization, masking produces data that looks and acts like the original data, but which doesn’t pose a risk of exposure. But masking solutions go one step further, protecting sensitive data elements while maintaining the value of the aggregate data set. For example we might replace real user names in a file with names randomly selected from a phone directory, skew a person’s date of birth by some number of days, or randomly shuffle employee salaries between employees in a database column. This means reports and analytics can continue to run and produce meaningful results, while the database as a whole is protected. Masking platforms commonly take a copy of production data, mask it, and then move the copy to another server. This is called static masking or “Extract, Transform, Load” (ETL for short).
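
Here is a small Python sketch of the static (ETL-style) masking ideas above – replacing names, skewing dates of birth, and shuffling salaries within the column so aggregates still hold. The sample records and replacement names are invented for illustration.

```python
# Static masking sketch: replace names, skew dates of birth by a few days,
# and shuffle salaries within the column so aggregates remain meaningful.
import random
from datetime import date, timedelta

records = [
    {"name": "Alice Smith", "dob": date(1980, 3, 14), "salary": 92000},
    {"name": "Bob Jones",   "dob": date(1975, 7, 2),  "salary": 81000},
    {"name": "Carol Wu",    "dob": date(1990, 1, 23), "salary": 77000},
]

replacement_names = ["Pat Doe", "Sam Roe", "Lee Poe"]  # e.g. from a directory
salaries = [r["salary"] for r in records]
random.shuffle(salaries)                               # shuffle within column

masked = [
    {
        "name": new_name,
        "dob": r["dob"] + timedelta(days=random.randint(-30, 30)),
        "salary": salary,
    }
    for r, new_name, salary in zip(records, replacement_names, salaries)
]
print(masked)
```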

A recent variation is called “dynamic masking”: masks are applied in real time, as data is read from a database or file. With dynamic masking the original files and databases remain untouched; only delivered results are changed, on the fly. For example, depending on the requestor’s credentials, a request might return the original (real, sensitive) data, or a masked copy. In the latter case data is dynamically replaced with a non-sensitive surrogate. Most dynamic masking platforms function as a proxy, something like a firewall, using redaction to quickly return information without exposing sensitive data to unauthorized requesters. Select systems offer more intelligent randomization, tokenization, or even FPE.

Again, the lines between FPE, tokenization, and masking are blurring as new variants emerge. But tokenization and masking variants offer superior value when you don’t want sensitive data exposed but cannot risk application changes.

—Rich

Tuesday, February 17, 2015

Cracking the Confusion: Key Management

By Rich

This is the fourth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Key Management Options

As mentioned back in our opening, the key (pun intended – please forgive us) to an effective and secure encryption system is proper placement of the components. Of those the one that most defines the overall system is the key manager.

You can encrypt without a dedicated key manager. We know of numerous applications that take this approach. We also know of numerous applications that break, fail, and get breached. You will nearly always want to use a dedicated key management option.

The first thing to consider is how to deploy external key management. There are four options:

  • An HSM or other hardware key management appliance. This provides the highest level of physical security. It is the most common option in sensitive scenarios, such as financial services and payments. The HSM or appliance runs in your data center, and you always want more than one for backup – lose access and you lose your keys. Apple, for example, has stated publicly that they physically destroy the administrative access smart cards after configuring a new appliance, so no one can ever access and compromise the keys; the keys are also destroyed if someone tries to open the housing or tamper with the appliance in certain other ways. A hardware root of trust is the most secure option, and these products also include hardware acceleration for cryptographic operations to improve performance.
  • A key management virtual appliance. A vendor provides a pre-configured virtual appliance (instance) for you to run where you need it. This reduces costs and increases deployment flexibility, but isn’t as secure as dedicated hardware. If you decide to go this route, use a vendor who takes exceptional memory protection precautions, because there are known techniques for pulling keys from memory in certain virtualization scenarios. A virtual appliance doesn’t offer the same physical security as a physical appliance, but it does come hardened, and supports more flexible deployment options – you can run it within a cloud or virtual data center. Some systems also allow you to use a physical appliance as the hardware root of trust for your keys, but then distribute keys to virtual appliances to improve performance in distributed scenarios (for virtualization or simply cost savings).
  • Key management software, which can run either on a dedicated server or within a virtual/cloud server. The difference between software and a virtual appliance is that you install the software yourself rather than receiving a hardened and configured image. Otherwise software offers the same risks and benefits as a virtual appliance, assuming you harden the server as well as the virtual appliance.
  • Key Management Software as a Service (SaaS). Multiple vendors now offer key management as a service specifically to support public cloud encryption. This also works for other kinds of encryption, including private clouds, but most usage is for public clouds.

Client Access Options

Whatever deployment model you choose, you need some way of getting keys where they need to be, when they need to be there, for cryptographic operations.

Clients (whatever needs the key) usually need support for the following core functions for a complete key management lifecycle:

  • Key generation
  • Key exchange (gaining access to the key)
  • Additional key lifecycle functions, such as expiring or rotating a key

Depending on what you are doing, you will allow or disallow these functions under different circumstances. For example you might allow key exchange for a particular application, but not allow it any other management functions (such as generation and rotation).

Access is managed one of three ways, and many tools support more than one:

  • Software Agent: A dedicated agent handles client key functions. These are generally designed for specific use cases – such as supporting native full disk encryption, specific backup software, various database platforms, and so on. Some agents may also perform cryptographic functions for additional hardening, such as wiping the key from memory after each use.
  • Application Programming Interfaces: Many key managers are used to handle keys from custom applications. An API allows you to access key functions directly from application code. Keep in mind that APIs are not all created equal – they vary widely in platform support, programming languages supported, simplicity or complexity of API calls, and the functions accessible via the API (a hypothetical sketch of such a client follows this list).
  • Protocol & Standards Support: The key manager may support a combination of proprietary and open protocols. Various encryption tools support their own key management protocols, and – as with software agents – the key manager may support them even when the tools come from a different vendor. Open protocols and standards are also emerging; they are not yet in wide use, but may be supported.
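
To make the API option above a bit more concrete, here is a hypothetical client sketch in Python using the requests library. The KeyManagerClient name, endpoint paths, and JSON fields are all invented for illustration – real products expose their own APIs, and standards such as KMIP define their own interfaces – but the generate/fetch/rotate operations mirror the lifecycle functions described above.

```python
# Hypothetical client illustrating the kinds of calls a key manager API
# typically exposes (generate, fetch, rotate). Endpoint paths and field
# names are made up for illustration only.
import requests

class KeyManagerClient:
    def __init__(self, base_url: str, api_token: str):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_token}"

    def generate_key(self, name: str, length_bits: int = 256) -> str:
        resp = self.session.post(f"{self.base_url}/keys",
                                 json={"name": name, "bits": length_bits})
        resp.raise_for_status()
        return resp.json()["key_id"]

    def get_key(self, key_id: str) -> bytes:
        resp = self.session.get(f"{self.base_url}/keys/{key_id}")
        resp.raise_for_status()
        return bytes.fromhex(resp.json()["key_material"])

    def rotate_key(self, key_id: str) -> str:
        resp = self.session.post(f"{self.base_url}/keys/{key_id}/rotate")
        resp.raise_for_status()
        return resp.json()["key_id"]
```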

We have written a lot about key management in the past. To dig deeper take a look at Pragmatic Key Management for Data Encryption and Understanding and Selecting a Key Management Solution.

—Rich

Monday, February 16, 2015

Some days, I think we are screwed

By Rich

I meant to write about this earlier and forgot. Last week I was listening to the Diane Rehm show on NPR while out for a long run (I am weird and prefer talk radio/podcasts on long workouts). The show was all about cybersecurity. To be honest, the panel was a bit weak (Ravi Pendse from Brown was decent).

When they opened up the phone lines, as you would expect, a lot of consumers called in. I will paraphrase one call a bit:

I don’t really get what the big deal is. If someone uses my Social Security number all I need to do is call my bank and clean it up.

This was prefaced by:

I’ve worked on process control systems for over 20 years, like water treatment and other utilities.

Even the panel had a hard time responding.

(Sorry I don’t have a transcript.)

—Rich

Firestarter: Cyber!!!

By Rich

Last week President Obama held a cybersecurity summit out in the Bay Area. He issued a new executive order and is standing up a new threat sharing center. This is in response to ongoing massive attacks such as the Anthem breach and (as we heard this weekend) hundreds of millions stolen in bank thefts. But what does it all mean to security pros and the industry? The truth is, not much in our day-to-day (yet), but you certainly had better pay attention.

Watch or listen:


—Rich

Friday, February 13, 2015

Friday Summary: February 13, 2015

By Adrian Lane

Welcome to the Friday the 13th edition of the Friday Summary! It has been a while since I wrote the summary so there is lots to cover …

My favorite external post this week is a research paper, Mongo Databases At Risk, outlining a very common trend among MongoDB users: not using basic user authentication to secure their databases. Well, that, and putting them on the Internet. On the default port. Does this sound like SQL Server circa 2003 to anyone else?

One angle I found important was the number of instances discovered: nearly 40k databases. That is a freakin’ lot! Remember, this is MongoDB. And just those running on the Internet at the default port. Yes, it’s one of the top NoSQL platforms, but during our inquiries we spoke with four Hadoop users for every MongoDB user, and MongoDB also trailed Cassandra. I don’t know if anyone publishes download or usage numbers for the various platforms, but extrapolating from those numbers, there are a lot of NoSQL databases in use. Someone with more time on their hands might decide to scan the Internet for instances of the other platforms – each has its own well-known defaults (for example, Redis listens on 6379 and CouchDB on 5984; Riak’s protocol buffers interface is 8087). I would love to know what you find.
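
For the curious, the check behind findings like this is simple enough to sketch with the pymongo driver: connect to the default port without credentials and see whether the server will list its databases. This is only an approximation of what the researchers automated – and obviously something to run against systems you own.

```python
# Sketch of the basic "is this MongoDB instance open?" check: connect to the
# default port with no credentials and try to list databases, which fails if
# authentication is required. Only test hosts you are authorized to touch.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def mongo_is_open(host: str, port: int = 27017) -> bool:
    client = MongoClient(host, port, serverSelectionTimeoutMS=2000)
    try:
        result = client.admin.command("listDatabases")
        names = [db["name"] for db in result["databases"]]
        print(f"{host}: open, databases = {names}")
        return True
    except OperationFailure:
        print(f"{host}: reachable but requires authentication")
        return False
    except ServerSelectionTimeoutError:
        print(f"{host}: unreachable")
        return False

mongo_is_open("127.0.0.1")
```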

Back to security… I have had conversations with several firms trying to figure out how to monitor NoSQL usage; we know how to apply DAM principles to SQL, but MapReduce and other types of queries are much more difficult to parse. I expect several vendors to introduce basic monitoring for Hadoop in the next year, but it will take time to mature, and even more to cover other platforms. What I haven’t heard discussed is the easier – and no less pressing – issue of configuration and vulnerability assessment. The enterprise distributions are providing best practices but automated scans are scarce – and usually custom. That is a free hint for any security vendors looking for a quick way to help big data customers get secure.


Mobile security consumes much more of my time than it should. I geek out on it, often engaging Gunnar in conversation on everything from the inner workings of secure elements to the apps that make payments happen. And I read everything I can find. This week I ran across Why Banks Will Win the Battle for the Mobile Wallet, by John Gunn – the guy who runs the wonderfully helpful twitter feed @AuthNews. But on this I think he has missed the point. Banks are not battling to win mobile wallets. In fact those I have spoken with don’t care about wallets. They care about transactions. And moving more transactions from cash to credit means a growing stream of revenue for merchant banks and payment processors, which makes them very happy.

Wallets in and of themselves don’t foster adoption – as Google is well aware – and in fact many users don’t really trust wallets at all. What gets people to move from a plastic card or cash, at least in the US, is a combination of convenience and trust. Starbucks leveraged their brand affinity into seven million subscribers for their app and an impressive 2.1 million transactions per week. Banks benefit directly when more transactions move away from cash, and they are happy to let others own the user experience.

But things get really interesting in overseas markets, which make US adoption of mobile payments look like a payments backwater. Nations without traditional banking or payment infrastructure can now move money in ways they previously could not, so adoption rates have been huge. Leveraging cellular infrastructure makes it faster and safer to move money, with fewer worries about carrying cash. Take Kenya – not often considered on the cutting edge of technology – which had 25 million mobile payment users and moved $26 billion in 2014 via mobile payments and mobile money subscriptions. Sometimes technology really does make the world a better place. The banks don’t care which wallets, apps, technologies, or carriers win – they just want someone to make progress.


In January I normally publish my research calendar for the coming year. But Rich has been hogging the Friday Summary for weeks now, so I finally get a chance to talk about what I am seeing and doing.

  1. Tokenization: I am – finally – going to publish some thoughts on the latest trends in tokenization. I want to talk about changes in the technology, adoption on mobile platforms, how the latest PCI specification is changing compliance, and some customer use cases.
  2. Risk-Based Authentication and Authorization: We see many more organizations looking at risk-based approaches to augment the security of web-based applications. Rather than rewrite applications they use metadata, behavioral information, business context, and… wait for it… big data analytics to better determine the acceptability of a request. And it is often cheaper and easier to bolt this on externally than to change applications. Gunnar and I have wanted to write this paper for a year, and now we finally have the time.
  3. Building a Security Analytics Platform: I have been briefed by many security analytics startups, and each is putting together some basic security analysis capabilities, usually built on big data databases. I have, in that same period, also spoken with many large enterprises who decided not to wait for the industry to innovate, and are building their own in-house systems. The last couple even asked me what I thought of certain architectural choices, and which data elements they should use as hash keys! So there is considerable demand for consumer education; I will cover off-the-shelf and DIY options.

I am still on the fence about some secure code development ideas, so if you have an idea, let’s talk. Even the security vendors I have visited in the last year have pulled me aside to ask about transitioning to Agile, or how to fix a failed transition to Agile. Most want to know what this whole DevOps thing is about. I have got a few ideas, and there is broad interest from end users and software vendors alike, so this is on the docket but not yet fully defined. Let me know if you have ideas… on any of these.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Mortman: Let’s Discuss Zero-Knowledge Data. Craptacular!
  • Adrian Lane: MongoDB database at Risk (PDF). Nice writeup by Jens Heyens, Kai Greshake, and Eric Petryka on misconfigured MongoDB databases sitting on default ports with mongo shell set not to require user credentials. They provide one workaround but you can also enable access controls, change the port temporarily, or disable external Internet access. Another interesting note: they found ~40k instances of MongoDB – not counting Hadoop or Cassandra. Who said big data was a fad?
  • Mike Rothman: Find Improvements That Lie Clearly At Hand. Our own GP argues that it’s better to find quick, dirty, and cheap ways to improve security than to try for perfection. A perfect sentiment. Not many fields need to grapple with concepts as abstract as infosec does. Software is abstract to begin with; layer humans’ difficulty grasping risk on top of that, and information security has to climb two mountains. Believe it or not, infosec people can learn some things from developers. For better and worse, Agile projects ship code. Developers have clearly embraced Thomas Carlyle: “Our main business is not to see what lies dimly at a distance, but to…”

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s best comment goes to Kamal Govindaswamy, in response to Even if Anthem Had Encrypted, It Probably Wouldn’t Have Helped.

Rich,

Nice article. Thank you!

A question on your statement around DAM and 2FA not being effective as well. I am curious as to your thoughts on how they could be ineffective against a persistent actor. I can think of a scenario or two but am interested in your thoughts, whether/how they would be bypassed, compromised etc.

Thanks!

—Adrian Lane

Thursday, February 12, 2015

Cracking the Confusion: Encryption Layers

By Adrian Lane, Rich

Picture enterprise applications as a layer cake: applications sit on databases, databases on files, and files are mapped onto storage volumes. You can use encryption at each of these layers in your application stack: within the application, in the database, on files, or on storage volumes. Where you place the encryption engine largely determines both security and performance. Encrypting higher up the stack can offer more security, at the cost of greater complexity and performance overhead.

There is a similar tradeoff with encryption engine and key manager deployments: more tightly coupled systems offer less complexity, but less security and reliability. Building an encryption system requires a balance between security, complexity, and performance. Let’s take a closer look at each layer and their tradeoffs.

Application Encryption

One of the more secure ways to encrypt application data is to collect it in the application, send it to an encryption server or appliance (or use an encryption library embedded in the application), and then store the encrypted data in a separate database. The application has full control over who sees what and can secure data without depending on the security of the underlying database, file system, or storage volumes. The keys themselves might be on the encryption server or could even be stored in yet another system. The separate key store increases security, simplifies management of multiple encryption appliances, and helps keep keys safe for data movement – backup, restore, and migration/synchronization to other data centers.

Database Encryption

Relational database management systems (RDBMS) typically have two encryption options: transparent and column. In our layer cake above columnar encryption occurs as applications insert data into a database, whereas transparent encryption occurs as the database writes data out. Transparent encryption is applied automatically to data before it is stored at the file or disk layer. In this model encryption and key management happen behind the scenes, without the user’s knowledge or requiring application programming. The database management system handles encryption and decryption operations as data is read (or written), ensuring all data is secured, and offering very good performance. When you need finer control over data access, you can encrypt single columns, or tables, within the database. This approach offers the advantage that only authenticated users of encrypted data are able to gain access, but requires changing database or application code to manage encryption operations. With either approach there is less burden on application developers to build a crypto system, but slightly less control over who can access sensitive data.

Some third-party tools also offer transparent database encryption by automatically encrypting data as it is stored in files. These tools aren’t part of the database management system itself, so they can work with databases that don’t support TDE directly, and provide greater separation of duties for database administrators.

File Encryption

Some applications, such as payment systems and web applications, do not use databases and instead store sensitive data in files. Encryption is applied transparently as data is written to files. This type of encryption is offered as a third-party add-on to the file system, or embedded within the operating system. Encryption and decryption are transparent to both users and applications. Data is decrypted when a user requests a file, after they have authenticated to the system. If the user does not have permission to read the file, or has not provided proper credentials, they only get encrypted data. File encryption is commonly used to protect “data at rest” in applications that do not include encryption capabilities, including legacy enterprise applications and many big data platforms.

Disk/Volume Encryption

Many off-the-shelf disk drives and Storage Area Network (SAN) arrays include automatic data encryption. Encryption is applied as data is written to disk, and decrypted by authenticated users/apps when requested. Most enterprise-class systems hold encryption keys locally to support encryption operations, but rely on external key management services to manage keys and provide advanced key services such as key rotation. Volume encryption protects data in case drives are physically stolen. Authenticated users and applications are provided unencrypted copies of files and data.

Tradeoffs

In general, the further “up the stack” you deploy encryption, the more secure your data is. The price of that extra security is more difficult integration, usually in the form of application code changes. Ideally we would encrypt all data at the application layer and fully leverage user authentication, authorization, and business context to determine who can see sensitive data. In the real world the code changes required for this level of precision control often present insurmountable engineering challenges, or are cost prohibitive. Surprisingly, transparent encryption often performs faster than application-layer encryption, even with larger data sets. The tradeoff is moving high enough “up the stack” to address relevant threats while minimizing the pain of integration and management. Later in this series we will walk you through the selection process in detail.

Next up in this series: key management options.

Adrian Lane, Rich

Wednesday, February 11, 2015

Cracking the Confusion: Building an Encryption System

By Rich, Adrian Lane

This is the second post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post here.

Building an Encryption System

In a straightforward application we normally break out the components – such as the encryption engine in an application server, the data in a database, and key management in an external service or appliance.

Or, for a legacy application, we might instead enable Transparent Database Encryption (TDE) for the database, with the encryption engine and data both on the same server, but key management elsewhere.

All data encryption systems are defined by where these pieces are located – which, even assuming everything works perfectly, defines the protection level of the data. We will go into the different layers of data encryption in the next section, but at a high level they are:

  • In the application where you collect the data.
  • In the database that holds the data.
  • In the files where the data is stored.
  • On the storage volume (typically a hard drive, tape, or virtual storage) where the files reside.

All data flows through that stack (sometimes skipping applications and databases for unstructured data). Encrypt at the top and the data is protected all the way down, but this adds complexity to the system and isn’t always possible. Once we start digging into the specifics of different encryption options you will see that defining your requirements almost always naturally leads you to select a particular layer, which then determines where to place the components.

The Three Laws of Data Encryption

Years ago we developed the Three Laws of Data Encryption as a tool to help guide the encryption decisions listed above. When push comes to shove, there are only three reasons to encrypt data:

  1. If the data moves, physically or virtually.
  2. To enforce separation of duties beyond what is possible with access controls. Usually this only means protecting against administrators because access controls can stop everyone else.
  3. Because someone says you have to. We call this “mandated encryption”.

Here is an example of how to use the rules. Let’s say someone tells you to “encrypt all the credit card numbers” in a particular application. Let’s further say the reason is to prevent loss of data if a database administrator account is compromised, which eliminates our third reason.

The data isn’t necessarily moving, but we want separation of duties to protect the database even if someone steals administrator credentials. Encrypting at the storage volume layer wouldn’t help because a compromised administrative account still has access within the database. Encrypting the database files alone wouldn’t help either.

Encrypting within the database is an option, depending on where the keys are stored (they must be outside the database) and some other details we will get to later. Encrypting in the application definitely helps, since that’s completely outside the database. But in either case you still need to know when and where an administrator could potentially access decrypted data.

That’s how it all ties together. Know why you are encrypting, then where you can potentially encrypt, then how to position the encryption components to achieve your security objectives.

Tokenization and Data Masking

Two alternatives to encryption are sometimes offered in commercial encryption tools: tokenization and data masking. We will spend more time on them later, but simply define them for now:

  • Tokenization replaces a sensitive piece of data with a random piece of data that can fit the same format (such as by looking like a credit card number without actually being a valid credit card number). The sensitive data and token are then stored together in a highly secure database for retrieval under limited conditions.
  • Data masking replaces sensitive data with random data, but the two aren’t stored together for later retrieval. Masking can be a one-way operation, such as generating a test database, or a repeatable operation such as dynamically masking a specific field for an application user based on permissions.

For more information on tokenization vs. encryption you can read our paper.

That covers the basics of encryption systems. Our next section will go into details of the encryption layers above before delving into key management, platform features, use cases, and the decision tree to pick the right option.

Rich, Adrian Lane

Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications

By Rich, Adrian Lane

This is the first post in a new series. If you want to track it through the entire editing process, you can follow it and contribute on GitHub.

The New Age of Encryption

Data encryption has long been part of the information security arsenal. From passwords, to files, to databases, we rely on encryption to protect our data in storage and on the move. It’s a foundational element in any security professional’s education. But despite its long history and deep value, adoption inside data centers and applications has been relatively – even surprisingly – low.

Today we see encryption growing in the data center at an accelerating rate, due to a confluence of reasons. A trite way to describe it is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even their own).

And thanks to increasing demand, there is a growing range of options, as vendors and even free and Open Source tools address the opportunity. We have never had more choice, but with choices comes complexity; and outside of your friendly local sales representative, guidance can be hard to come by.

For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage – or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it meet compliance requirements?

This paper cuts through the confusion to help you pick the best encryption options for your projects. In case you couldn’t guess from the title, our focus is on encrypting in the data center – applications, servers, databases, and storage. Heck, we will even cover cloud computing (IaaS: Infrastructure as a Service), although we covered that in depth in another paper. We will also cover tokenization and its relationship with encryption.

We won’t cover encryption algorithms, cipher modes, or product comparisons. We will cover different high-level options and technologies, such as when to encrypt in the database vs. in the application, and what kinds of data are best suited for tokenization. We will also cover key management, some essential platform features, and how to tie it all together.

Understanding Encryption Systems

When most security professionals first learn about encryption the focus is on keys, algorithms, and modes. We learn the difference between symmetric and asymmetric and spend a lot of time talking about Bob and Alice.

Once you start working in the real world your focus needs to change. The fundamentals are still important but now you need to put them into practice as you implement encryption systems – the combination of technologies that actually protects data. Even the strongest crypto algorithm is worthless if the system around it is full of flaws.

Before we go into specific scenarios let’s review the basic concepts behind building encryption systems, because they are the basis for deciding which encryption options to use.

The Three Components of a Data Encryption System

When encrypting data, especially in applications and data centers, knowing how and where to place these pieces is incredibly important, and mistakes here are one of the most common causes of failure. We use all our data at some point, and understanding where the exposure points are, where the encryption components reside, and how they tie together, all determine how much actual security you end up with.

Three major components define the overall structure of an encryption system.

  • The data: The object or objects to encrypt. It might seem silly to break this out, but the security and complexity of the system depend on the nature of the payload, as well as where it is located or collected.
  • The encryption engine: This component handles actual encryption (and decryption) operations.
  • The key manager: This handles keys and passes them to the encryption engine.

In a basic encryption system all three components are likely located on the same system. As an example take personal full disk encryption (the built-in tools you might use on your home Windows PC or Mac): the encryption key, data, and engine are all stored and used on the same hardware. Lose that hardware and you lose the key and data – and the engine, but that isn’t normally relevant. (Neither is the key, usually, because it is protected with another key, or passphrase, that is not stored on the system – but if the system is lost while running, with the key in memory, that becomes a problem). For data centers these major components are likely to reside on different systems, increasing complexity and security concerns over how the pieces work together.
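
A minimal sketch of that separation, with the key manager, encryption engine, and data kept distinct (here as Python classes standing in for what would be separate systems in a data center):

```python
# Minimal sketch of the three components kept separate: the key manager
# holds keys, the engine performs crypto, and the data is just the payload.
from cryptography.fernet import Fernet

class KeyManager:                        # stands in for an external service
    def __init__(self):
        self._keys = {"customer-data": Fernet.generate_key()}

    def get_key(self, key_id: str) -> bytes:
        return self._keys[key_id]        # real managers enforce policy here

class EncryptionEngine:
    def __init__(self, key_manager: KeyManager):
        self.key_manager = key_manager

    def encrypt(self, key_id: str, data: bytes) -> bytes:
        return Fernet(self.key_manager.get_key(key_id)).encrypt(data)

    def decrypt(self, key_id: str, blob: bytes) -> bytes:
        return Fernet(self.key_manager.get_key(key_id)).decrypt(blob)

engine = EncryptionEngine(KeyManager())
blob = engine.encrypt("customer-data", b"account number 12345")
print(engine.decrypt("customer-data", blob))
```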

Rich, Adrian Lane

Monday, February 09, 2015

Firestarter: It’s Not My Fault!

By Rich

Rich, Mike, and Adrian each pick a trend they expect to hammer us in 2015. Then they talk about it, probably too much. From threat intel to tokenization to SaaS security.

And oh, we did have to start with a dig at the Pats. Cheating? Super Bowl? Really? Come on now.

Watch or listen:


—Rich

Sunday, February 08, 2015

Applied Threat Intelligence: Building a TI Program

By Mike Rothman

As we wrap up our Applied Threat Intelligence series, we have already defined TI and worked our way through a number of the key use cases (security monitoring, incident response, and preventative controls) where TI can help improve your security program, processes, and posture. The last piece of the puzzle is building a repeatable process to collect, aggregate, and analyze the threat intelligence. This should include a number of different information sources, as well as various internal and external data analyses to provide context to clarify what the intel means to you.

As with pretty much everything in security, handling TI is not “set and forget”. You need to build a repeatable process to select data providers and continually reassess the value of those investments. You will need to focus on integration; as we described, data isn’t helpful if you can’t use it in practice. And your degree of comfort in automating processes based on threat intelligence will impact day-to-day operational responsibilities.
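
As a simple example of that integration point, here is a Python sketch that loads an IP indicator feed and matches it against firewall log lines. The file names and log format are illustrative – real integrations usually go through a SIEM or your threat intelligence platform’s own connectors.

```python
# Match a simple IP indicator feed against firewall log lines. The feed is
# assumed to be a CSV with an "ip" column; the log is plain text containing
# IPv4 addresses. Both file names are placeholders for illustration.
import csv
import re

def load_indicators(path: str) -> set:
    with open(path, newline="") as fh:
        return {row["ip"] for row in csv.DictReader(fh)}

def match_logs(log_path: str, indicators: set):
    ip_pattern = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3})")
    with open(log_path) as fh:
        for line in fh:
            for ip in ip_pattern.findall(line):
                if ip in indicators:
                    yield ip, line.strip()

indicators = load_indicators("ti_feed.csv")
for ip, line in match_logs("firewall.log", indicators):
    print(f"hit on {ip}: {line}")
```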

First you need to decide where the threat intelligence function will fit organizationally. Larger organizations tend to formalize an intelligence group, while smaller entities need to add intelligence gathering and analysis to the task lists of existing staff. Out of all the things that could land on a security professional, an intelligence research responsibility isn’t bad. It provides exposure to cutting-edge attacks and makes a difference in your defenses, so that’s how you should sell it to overworked staffers who don’t want yet another thing on their to-do lists.

But every long journey begins with the first step, so let’s turn our focus to collecting intel.

Gather Intelligence

Early in the intelligence gathering process, focus your efforts with an analysis of your adversaries. Who are they, what are they most likely to try to achieve, and what kinds of tactics do they use to achieve their missions? You need to tackle all these questions. With those answers you can focus on intelligence sources that best address your probable adversaries. Then identify the kinds of data you need. This is where the previous three posts come in handy. Depending on which use cases you are trying to address, you will know whether to focus on malware indicators, compromised devices, IP reputation, command and control indicators, or something else.

Then start shopping. Some folks love to shop, others not so much. But it’s a necessary evil; fortunately, given the threat intelligence market’s recent growth, you have plenty of options. Let’s break down a few categories of intel providers, with their particular value:

  • Commercial: These providers employ research teams to perform proprietary research, and tend to attain high visibility by merchandising findings with fancy exploit names and logos, spy-thriller stories of how adversary groups compromise organizations and steal data, and shiny maps of global attacks. They tend to offer particular strength regarding specific adversary classes. Look for solid references from your industry peers.
  • OSINT: Open Source Intelligence (OSINT) providers specialize in mining the huge numbers of information security sources available on the Internet. Their approach is all about categorization and leverage because there is plenty of information available free. These folks know where to find it and how to categorize it. They normalize the data and provide it through a feed or portal to make it useful for your organization. As with commercial sources, the question is how valuable any particular source is to you. You already have too much data – you only need providers who can help you wade through it.
  • ISAC: There are many Information Sharing and Analysis Centers (ISACs), typically built for specific industries, to communicate current attacks and other relevant threat data among peers. As with OSINT, quality can be an issue, but this data tends to be industry specific so its relevance is pretty well assured. Participating in an ISAC obligates you to contribute data back to the collective, which we think is awesome. The system works much better when organizations both contribute and consume intelligence, but we understand there are cultural considerations. So you will need to make sure senior management is okay with it before committing to an ISAC.

Another aspect of choosing intelligence providers is figuring out whether you are looking for generic or company-specific information. OSINT providers are more generic, while commercial offerings can go deeper; various ‘Cadillac’ offerings even include analysts dedicated specifically to your organization, proactively searching grey markets, carder forums, botnets, and other places for intelligence relevant to you.

Managing Overlap

With disparate data sources it is a challenge to ensure you don’t waste time on multiple instances of the same alert. One key to determining overlap is an understanding of how the intelligence vendor gets their data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? You can categorize vendors by their tactics to help you pick the best for your requirements.
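
As a rough way to quantify that overlap, here is a small Python sketch (our illustration; the feed contents are hypothetical) that compares two already-normalized indicator sets and reports how much of each feed is unique before you pay for both.

```python
# Sketch: measure how much two (hypothetical, already normalized) indicator
# feeds overlap, so you can tell whether a second subscription adds anything.
def overlap_report(feed_a: set, feed_b: set) -> dict:
    shared = feed_a & feed_b
    union = feed_a | feed_b
    return {
        "unique_to_a": len(feed_a - feed_b),
        "unique_to_b": len(feed_b - feed_a),
        "shared": len(shared),
        # Jaccard similarity: 1.0 means the feeds are effectively identical.
        "jaccard": len(shared) / len(union) if union else 0.0,
    }


feed_a = {"198.51.100.7", "203.0.113.21", "evil-domain.example"}
feed_b = {"198.51.100.7", "192.0.2.55"}
print(overlap_report(feed_a, feed_b))
```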

To choose between vendors you need to compare their services for comprehensiveness, timeliness, and accuracy. Sign up for trials of a number of services and monitor their feeds for a week or so. Does one provider consistently identify new threats earlier? Is their information correct? Do they provide more detailed and actionable analysis? How easy will it be to integrate their data into your environment for your use cases?

Don’t fall for marketing hyperbole about proprietary algorithms, Big Data analysis, or staff linguists penetrating hacker dens and other stories straight out of a spy novel. It all comes down to data, and how useful it is to your security program. Buyer beware, and make sure you put each intelligence provider through its paces before you commit.

Our last point to stress is the importance of short agreements, especially up front. You cannot know how these services will work for you until you actually start using them. Many of these intelligence companies are startups, and might not be around in 3 or 4 years. Once you identify a set of core intelligence feeds, longer deals can be cut, but we recommend against doing that before your TI process matures and your intelligence vendor establishes a track record addressing your needs.

To Platform or Not to Platform

Now that you have chosen intelligence feeds, what will you do with the data? Do you need a stand-alone platform to aggregate all your data? Will you need to stand up yet another system in your environment, or can you leverage something in the cloud? Will you actually use your intelligence providers’ shiny portals? Or do you expect alerts to show up in your existing monitoring platforms, or be sent via email or SMS?

There are many questions to answer as part of operationalizing your TI process. First you need to figure out whether you already have a platform in place. Existing security providers (specifically SIEM and network security vendors) now offer threat intelligence ‘supermarkets’ to enable you to easily buy and integrate data into their monitoring and control environments. Even if your vendors don’t offer a way to easily buy TI, many support standards such as STIX and TAXII to facilitate integration.
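
To give a sense of what that integration looks like at the data level, here is a small Python sketch (ours, using only the standard library) that pulls indicator patterns out of a STIX 2.x bundle you might download from a vendor. The file name is hypothetical, and a production integration would use a proper TAXII client and STIX parser rather than raw JSON handling.

```python
# Sketch: extract indicator patterns from a STIX 2.x bundle (plain JSON).
# The file name is hypothetical; real integrations would pull this over TAXII.
import json

with open("vendor_feed_bundle.json") as f:
    bundle = json.load(f)

indicators = [
    obj for obj in bundle.get("objects", [])
    if obj.get("type") == "indicator"
]

for ind in indicators:
    # STIX indicators carry a detection pattern, e.g.
    # "[ipv4-addr:value = '198.51.100.7']"
    print(ind.get("pattern"), ind.get("valid_until"))
```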

We are focusing on applied threat intelligence, so the decision hinges on how you will use the threat intelligence. If you have a dedicated team to evaluate and leverage TI, have multiple monitoring and/or enforcement points, or want more flexibility in how broadly you use TI, you should probably consider a separate intelligence platform or ‘clearinghouse’ to manage TI feeds.

Selecting the Platform

If you decide to look at stand-alone threat intelligence platforms there are a couple key selection criteria to consider:

  1. Open: The TI platform’s task is to aggregate information, so it must be easy to get information into it. Intelligence feeds are typically just data (often XML), and increasingly distributed in industry-standard formats such as STIX which make integration relatively straightforward. But make sure any platform you select will support the data feeds you need. Make sure you can use the data that’s important to you, and will not be restricted by your platform.
  2. Scalable: You will use a lot of data in your threat intelligence process, so scalability is important. But computational scalability is likely more important – you will be intensively searching and mining aggregated data, so you need robust indexing. Unfortunately scalability is hard to test in a lab, so ensure your proof of concept is a close match to your production environment.
  3. Search: Threat intelligence (like the rest of security) doesn’t lend itself to absolute answers. So make TI the start of your process of figuring out what happened in your environment, and leverage the data for your particular use cases as we described earlier. One clear requirement, for all use cases, is search. So make sure your platform makes it easy to search all your TI data sources.
  4. Urgency Scoring: Applied Threat Intelligence is all about betting on which attackers, attacks, and assets are the most important to worry about, so you will find considerable value in a flexible scoring mechanism. Scoring factors should include assets, intelligence sources, and attacks, so you can calculate an urgency score. It might be as simple as red/yellow/green, depending on the sophistication of your security program; a toy scoring sketch follows this list.
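
Here is the toy scoring sketch mentioned above. The factors, scales, and thresholds are our assumptions to illustrate the idea, not a standard formula; your platform will have its own model.

```python
# Toy urgency score: scales, weighting, and thresholds are assumptions.
def urgency(asset_criticality: int, source_reliability: float,
            attack_severity: int) -> str:
    """asset_criticality and attack_severity on a 1-5 scale,
    source_reliability between 0.0 and 1.0."""
    score = asset_criticality * attack_severity * source_reliability
    if score >= 15:
        return "red"
    if score >= 6:
        return "yellow"
    return "green"


# A critical asset hit by a severe attack reported by a trustworthy feed:
print(urgency(asset_criticality=5, source_reliability=0.9, attack_severity=4))  # red
```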

Determining Relevance in the Heat of Battle

So how can you actually use the threat intelligence you painstakingly collected and aggregated? Relevance to your organization depends on the specifics of the threat, and whether it can be used against you. Focus on real potential exploits – a vulnerability which does not exist in your environment is not a real concern. For example you probably don’t need to worry about financial malware if you don’t hold or have access to credit card data. That doesn’t mean you shouldn’t pay any attention to these attacks – many exploits leverage a variety of interesting tactics, which might become a part of a relevant attack in the future. Relevance encompasses two aspects (a simple filtering sketch follows the list):

  1. Attack surface: Are you vulnerable to the specific attack vector? Weaponized Windows 2000 exploits aren’t relevant if you don’t have any Windows 2000 systems. Once you have patched all instances of a specific vulnerability on your devices you get a respite from worrying about the exploit. Your asset base and internally collected vulnerability information provide this essential context. Note that attack surface may not be restricted to your own assets and environment: service providers, business partners, and even customers represent indirect risks – if one of them is compromised, an attacker might have a direct path to you.
  2. Intelligence reliability: You need to keep re-evaluating each threat intelligence feed to determine its usefulness. A feed which triggers many false positives is less relevant. On the other hand, if a feed usually nails a certain type of attack, you should take those warnings particularly seriously.
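
Here is the filtering sketch mentioned above: a Python illustration (the inventories and indicator fields are hypothetical) of screening inbound indicators against what you actually run and what you have not yet patched, so only attacks you are exposed to raise alerts.

```python
# Sketch: keep only indicators tied to software you actually run and
# vulnerabilities you have not yet patched. All data here is hypothetical.
installed_software = {"windows_server_2019", "apache_httpd_2.4", "openssl_3.0"}
open_vulnerabilities = {"CVE-2024-1234", "CVE-2023-9999"}

incoming_indicators = [
    {"id": "ind-1", "targets": "windows_2000", "cve": "CVE-2015-0001"},
    {"id": "ind-2", "targets": "apache_httpd_2.4", "cve": "CVE-2024-1234"},
]

relevant = [
    ind for ind in incoming_indicators
    if ind["targets"] in installed_software
    and ind["cve"] in open_vulnerabilities
]
print(relevant)  # only ind-2 survives the filter
```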

Constantly Evaluating Intelligence

How can you determine the reliability of a TI source? Threat data ages very quickly and TI sources such as IP reputation can change hourly. Any system you use to aggregate threat intelligence should be able to report on the number of alerts generated from each TI source, without hurting your brain building reports. These reports show value from your TI investment – it is a quick win if you can show how TI identified an attack earlier than you would have detected it otherwise. Additionally, if you use multiple TI vendors, these reports enable you to compare them based on actual results.
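
As an illustration of the kind of report we mean, here is a small Python sketch; the alert log format is hypothetical, and in practice this data would come out of your SIEM or TI platform rather than a hand-built list.

```python
# Sketch: score each feed by how its alerts were resolved, to compare
# sources on actual results. The alert log structure is hypothetical.
from collections import Counter

alert_log = [
    {"source": "feed_a", "outcome": "confirmed"},
    {"source": "feed_a", "outcome": "false_positive"},
    {"source": "feed_b", "outcome": "confirmed"},
    {"source": "feed_a", "outcome": "confirmed"},
]

confirmed = Counter(a["source"] for a in alert_log if a["outcome"] == "confirmed")
total = Counter(a["source"] for a in alert_log)

for source in total:
    hit_rate = confirmed[source] / total[source]
    print(f"{source}: {total[source]} alerts, {hit_rate:.0%} confirmed")
```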

Marketing Success Internally

Over time, as with any security discipline, you will refine your verification/validation/investigation process. Focus on what worked and what didn’t, and tune your process accordingly. It can be bumpy when you start really using TI sources – typically you start by receiving a large number of alerts, and following them down a bunch of dead ends. It might remind you, not so fondly, of the SIEM tuning process. But security is widely regarded as overhead, so you need a Quick Win with any new security technology.

TI will find stuff you didn’t know about and help you get ahead of attacks you haven’t seen yet. But that success story won’t tell itself, so when the process succeeds – likely early on – you will need to publicize it early and often. A good place to start is discovery of an attack in progress. You can show how you successfully detected and remediated the attack thanks to threat intelligence. This illustrates that you will be compromised (which must be constantly reinforced to senior management), so success is a matter of containing damage and preventing data loss. The value of TI in this context is in shortening the window between exploit and detection.

You can also explain how threat intelligence helped you evolve security tactics based on what is happening to other organizations. For instance, if you see what looks like a denial of service (DoS) attack on a set of web servers, but already know from your intelligence efforts that DoS is a frequent decoy to obscure exfiltration activities, you have better context to be more sensitive to exfiltration attempts. Finally, to whatever degree you quantify the time you spend remediating issues and cleaning up compromises, you can show how much you saved by using threat intelligence to refine efforts and prioritize activities.

As we have discussed through this series, threat intelligence can even the battle between attackers and defenders, to a degree. But to accomplish this you must be able to gather relevant TI and leverage it in your processes. We have shown how to use TI both a) at the front end of your security process (in preventative controls) to disrupt attacks, and b) to more effectively monitor and investigate attacks – both in progress and afterwards. We don’t want to portray any of this as ‘easy’, but nothing worthwhile in security is easy. It is about constantly improving your processes to favorably impact your security posture, on an ongoing and continuous basis.

—Mike Rothman