Securosis

Research

Database Denial of Service [New Series]

We have begun to see a shift in Denial of Service (DoS) tactics by attackers, moving up the stack from networks to servers, and from servers to the application layer. Over the last 18 months we have also witnessed a new wave of vulnerabilities and isolated attacks against databases, all related to denial of service. We have seen recent Oracle issues with invalid object pointers, a serious vulnerability in the workload manager, and the TNS listener barfing on malformed packets; a PostgreSQL issue with unrestricted networking access that was rumored to allow file corruption to crash the database; a vulnerability in the IBM DB2 XML feature; and multiple vulnerabilities in MySQL, including a remote ability to crash the database. A vulnerability does not mean that exploitation has occurred, but we hear more and more off-the-record accounts of database attacks. We cannot quantify the risk or likelihood of attack, but this seems like a good time to describe these attacks briefly and offer some mitigation suggestions.

It may come as a surprise, but database denial of service attacks have been common over the last decade. We don’t hear much about them because they are lost among the din of SQL injection (SQLi) attacks, which cause more damage and offer attackers a wider range of options. All things being equal, attackers generally prefer SQLi attacks as more directly useful for their objectives. Database DoS doesn’t make headlines compared to SQLi because injection attacks often take control of the database and can be more damaging. But interruption of service is no longer a trivial matter. Ten years ago it was still common practice to take a database or application off the Internet while an attack was underway. But now web services and the databases tied into them are critical business infrastructure. Take down a database and a company loses money – quite possibly a lot of money. 
As Mike noted in his recent research on Denial of Service attacks, the most common DoS approaches are “flooding the pipes” rather than “exhausting the servers”. Flooding the pipes is accomplished by sending so many network packets that they simply overwhelm the network equipment. This type of volumetric attack is the classic denial of service, most commonly performed as a Distributed Denial of Service (DDoS) because it takes hundreds or thousands of malicious clients to flood a large network. Legitimate network traffic is washed away in the tide of junk, and users cannot reach servers. Exhausting servers is different – these attacks target software running on the server, such as the operating system or web application components, wasting its CPU, memory, or other resources to effectively disable it. These attacks can target either vulnerabilities or features of application stacks to overwhelm servers and prevent legitimate traffic from accessing web pages or completing transactions. The insidious part of this type of attack is that, as you consume more than roughly 80% of hardware or software resources, these platforms become less efficient. The closer they get to maximum utilization, the more they slow down. Push them to the limit and they may simply lock up, waiting for resources to become available. In some cases a reduction in load does not bring servers back – you need to reset or restart them. Databases have their own networking features and offer a full complement of services, so both these models apply. The motivation for attacks is very similar to traditional DoS attacks. Hacktivism is a major trend, and taking down a major commercial web site is a weapon for people who dislike a company but lack legal or financial means to voice their complaints. “Covering attacks” are very common, where criminals flood servers and networks – including security systems – in order to mask an ongoing attack. 
Common scenarios also include shutting down a competitor, criminal racketeers threatening DoS and demanding ransom, financial trading manipulation, and so on. The motivations behind database DoS are essentially the same. The current tactics are a response to a couple of new factors. Network and server defenses are getting better with the next generation of firewall technologies, and it has become nearly impossible to DoS cloud service providers, with their seemingly limitless, redundant, and geographically dispersed resources. Attackers are looking for new ways to keep old crimes profitable. But attackers are not discriminating – they are happy to exploit any piece of hardware or software that lets them accomplish their attacks, including the web applications and databases sitting atop servers. Database denial of service is conceptually no different than traditional DoS attacks at the server or application layer, but there are many more clever ways to create a denial of service attack against a database. Unlike DDoS you don’t need to throw everything including the kitchen sink at a site – often you just need to find a small logic flaw in a database function to push it over. Relational database platforms are some of the most complex application platforms in existence, so there is a lot of room for mischief. Attackers sometimes morph traditional protocol and server-based denial of service attacks to move up the stack. But in most cases they exploit specific database features in novel ways to take down their targets. Current defensive systems are geared to block DoS-based network flooding and server attacks, so attackers are seeking greener fields in the application layer, where their incursions blend better with legitimate customer transactions. With protection resources poured into the lower layers, relatively little is done at the application layer, and virtually nothing to stop database attacks. 
Worse, application layer attacks are much more difficult to detect because most look like legitimate database requests! Our next post will take a look at the different classes of database DoS attacks. I will look at some historic examples and discuss current ones, to help you understand the difficulty of defending databases from DoS.
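The “small logic flaw” asymmetry described above is easy to illustrate. This is not an attack recipe – just a minimal Python sketch (using an in-memory SQLite database, with table names invented for the example) showing how a few bytes of perfectly valid SQL can translate into disproportionate work for the database engine:

```python
# Toy illustration only: a tiny, syntactically valid query whose work grows
# multiplicatively. Uses an in-memory SQLite database; the table names are
# invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (id INTEGER)")
cur.execute("CREATE TABLE b (id INTEGER)")
cur.executemany("INSERT INTO a VALUES (?)", [(i,) for i in range(1000)])
cur.executemany("INSERT INTO b VALUES (?)", [(i,) for i in range(1000)])

# Two 1,000-row tables; one short, unconstrained join forces the engine
# to consider 1,000,000 row combinations.
(rows,) = cur.execute("SELECT COUNT(*) FROM a, b").fetchone()
print(rows)  # 1000000
```

The request is a few dozen characters; the work is a million row combinations. Scale the tables up, or nest the join deeper, and a handful of such requests can pin a server – without a single malformed packet.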


API Gateways: Developer Tools

Our previous post discussed the first step in the development process: getting access to the API gateway through access provisioning. Now that you have access, it’s time to discuss how the gateway supports your code development and deployment processes. An API gateway must accomplish two primary functions: help developers build, test, and deploy applications; and help companies control use of their API. They are part development environment and part operational security tool.

API Catalog

The API catalog is basically a menu of APIs, services, and support services that provides developers front-end integration to access back-office applications, external APIs (for mashups), data, and related services, along with all the supporting tools to build and deploy applications. Catalogs typically include APIs, documentation, coding help, build tools, configuration requirements, testing tools, guidance, and sample code for each supported function. They offer other relevant details such as network controls, access controls, integration options, and orchestration, brokering, and messaging options – all bundled into a management interface for selecting and configuring the services you want. Developer time is expensive, so anything that streamlines this process is a win. Security controls such as identity protocols are notoriously difficult to fully grasp and implement. If your security architects want developers to “do it right”, this is the place to invest time to show them how. Traditionally security tools are bolted onto – or in front of – applications, generating howls of displeasure from developers who want neither the added complexity nor the performance impact. With third-party APIs things are different, because security is part of the core value. API gateways offer features that enable network, interface, and data security as part of the core feature set. 
For example, it is faster and easier to enable built-in SAML or OAuth identity services than to build them from scratch – or worse, to build a password management system. Even better, the features are available at design time, before you assemble the application, so they can be bundled into the development process. Reference implementations are extremely helpful. Consider OAuth: if you look at 10 different companies’ OAuth implementations, you will probably find a dozen different variations. Don’t assume developers will just figure it all out – connect the dots. To have a chance at a secure deployment, developers need concrete guidance for security services – especially for things as abstract as identity protocols. Reference implementations show end-to-end examples of the identity protocol in practice. For a developer trying to “do it right” this is like finding diamonds in the backyard. The reference implementation is even more effective if it is backed up by testing tools that can verify developer implementations. Access management is a principal feature of API gateways. The gateway helps you enforce access controls, building authentication and authorization services into the API set. Gateways typically rely on token-based security services, and support one or more token standards such as SAML and OAuth. All API gateways offer authentication support, and most integrate with other identity sources to support federation. Gateways provide basic role-based authorization support, sometimes with fine-grained authorization to constrain data access by user identity or endpoint device. Beyond identity protocols, some gateways offer services to defend against attacks such as replay attacks and other forms of session hijacking. API gateways provide dynamic filtering of requests, allowing policy-based routing and response to API calls. Developers get tools to parse incoming calls, filter or transform messages, and then route them to appropriate services. 
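To show the shape of token-based access management from the gateway’s side, here is a hedged sketch. Real gateways rely on SAML or OAuth token services; the HMAC scheme, key, and claim layout below are invented purely for illustration, not any particular product’s format:

```python
# Hedged sketch of a gateway-style token check. Real gateways rely on SAML
# or OAuth token services; the HMAC scheme, key, and claim layout here are
# invented purely to show the shape of issue-then-verify.
import hashlib
import hmac
import json

SECRET = b"demo-gateway-key"  # stand-in for the gateway's key material

def issue_token(claims):
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token):
    body, _, sig = token.rpartition(".")  # hex signature contains no dots
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # reject before the request ever reaches the API
    return json.loads(body)

token = issue_token({"sub": "dev-42", "role": "reader"})
print(verify_token(token))        # valid token: claims come back
print(verify_token(token + "x"))  # tampered token: None
```

The point is the placement: the signature check happens at the gateway, so a tampered or forged token is rejected before the back-office application ever sees the call. This is exactly the sort of plumbing developers should get from the catalog rather than hand-rolling.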
This facilitates modification of application functions, debugging, and application of different security or compliance controls in response to user requests. Filters also provide a mechanism for sending requests to different locations, modifying workflow, or even sending requests to different applications. This flexibility is a powerful security capability, particularly for analysis of and protection against suspect clients – access to services and data can be adjusted dynamically. API gateway providers offer a range of pre-deployment tools to validate applications prior to deployment. Sandbox testing and runtime simulators both validate correct API usage, and can also verify that the application developer properly handles input variables and simulated attacks. Some test harnesses are provided with gateways, and others are custom implementations by API service owners. Pre-deployment validation is a good way to ensure all third-party developers meet a minimum security standard, and that no single user becomes the proverbial weak link. If possible, tests should be executed as part of the normal integration process (e.g., Jenkins) so implementation quality can be tested continually.

Deployment Support

The API catalog provides options for building security into your application, but API gateways also offer deployment support. When you push APIs that connect the world to internal systems, you need to account for a myriad of threats at the network, protocol, application, and data layers. Denial of service, parser attacks, code injection, replay attacks, HTTP protocol abuse, and network sniffing are all things to consider. API gateways can optionally provide privacy and security for network sessions through SSL. Most also offer network firewall capabilities such as IP whitelisting, blacklisting, and signature-based detection. 
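A minimal sketch of what policy-based request filtering might look like in practice – the policy values, request shape, and routing rule are all invented for the example, not taken from any particular gateway:

```python
# Hedged sketch of policy-based request filtering; the policy values,
# request shape, and routing rule are invented for the example.
ALLOWED_IPS = {"10.0.0.5", "10.0.0.6"}
MAX_PARAM_LEN = 64

def ip_filter(request):
    # network-level policy: drop calls from unknown clients
    if request["client_ip"] not in ALLOWED_IPS:
        raise ValueError("client IP not whitelisted")
    return request

def input_filter(request):
    # message-level policy: enforce a length limit on every parameter
    for key, value in request["params"].items():
        if len(str(value)) > MAX_PARAM_LEN:
            raise ValueError(f"parameter {key!r} exceeds length policy")
    return request

def route(request):
    # suspect or versioned calls could be sent to different backends here
    return "backend-a" if request["path"].startswith("/v1/") else "backend-b"

FILTERS = [ip_filter, input_filter]

def handle(request):
    for f in FILTERS:
        request = f(request)
    return route(request)

print(handle({"client_ip": "10.0.0.5", "path": "/v1/orders", "params": {"q": "ok"}}))
# prints: backend-a
```

Each filter can reject, transform, or reroute the call, which is what makes the filter chain useful as a security control: policy changes happen at the gateway, not in every application behind it.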
While network security is a must-have for many, it is not really these products’ core security value. The key security features are overall security of the API and message-level filtering. API gateways provide capabilities to detect code injection, cross-site scripting, and various encoding attacks; most also offer off-the-shelf filters for input validation and sanitization.

Logging, Monitoring, and Reporting

As an application platform, API gateways capture activity and generate audit logs. Sitting between developer applications and the API, they are perfectly positioned to capture API usage – useful for throttling, billing, and metering API access, as well as security. Log files are essential for security, operations, and compliance, so these teams all rely on gateway audit trails. Most API gateways provide flexible configuration of which audit events are collected, the record format, and the record destination. Audit capabilities are mostly designed for the gateway owner rather than developers. But the audit trail captures sessions of all


iOS 7 Adds Major Data Security Improvements

Apple posted a page with some short details on the new business features of iOS 7. These security enhancements actually change the game for iOS security and BYOD: Data protection is now enabled by default for all applications. That means apps’ data stores are encrypted with the user passcode. For strongish passphrases (greater than 8 characters is a decent start) this is very strong security and definitely up to enterprise standards if you are on newer hardware (iPhone 4S or later, for sure). You no longer need to build this into your custom enterprise apps (or app wrappers) unless you don’t enforce passcode requirements. Share sheets provide the ability to open files in different applications. A new feature allows you, through MDM I assume, to manage what apps email attachments can open in. This is huge because you get far greater control of the flow on the device. Email is already encrypted with data protection and managed through ActiveSync and/or MDM; now that we can restrict which apps files can open in, we have a complete, secure, and managed data flow path. Per-app VPNs allow you to require an enterprise app, even one you didn’t build yourself, to use a specific VPN to connect without piping all the user’s network traffic through you. To be honest, this is a core feature of most custom (including wrapped) apps, but allowing you to set it based on policy instead of embedding into apps may be useful in a variety of scenarios. In summary, some key aspects of iOS we had to work around with custom apps can now be managed on a system-wide level with policies. The extra security on Mail may obviate the need for some organizations to use container apps because it is manageable and encrypted, and data export can be controlled. Now it all comes down to how well it works in practice. A couple other security bits are worth mentioning: It looks like SSO is an on-device option to pass credentials between apps. 
We need a lot more detail on this one, but I suspect it is meant to tie a string of corporate apps together without requiring users to log in every time. So probably not some sort of traditional SAML support, which is what I first thought. There are also better MDM policies and easier enrollment, designed to work better with your existing MDM tools once they support the features. There are probably more changes, but this is all that’s public now. The tighter control over data flow on the device (from email) is unexpected and should be well received. As a reminder, here is my paper on data security options in iOS 6.


Casting out SQLi

Ericka Chickowski posted an interview with the creators of the open source AntiSQLi library at Dark Reading. She is discussing a very interesting development tool, but its value proposition gets somewhat lost in the creators’ poor terminology. First some background: there is no such thing as an ‘unparameterized’ database query. Every SQL query has at least two parameters: the contents of the SELECT and WHERE clauses. Without parameters in those two clauses the query fails in the parser and generates an error. No parameters, no query. So SQLi is not really a problem of ‘unparameterized’ queries – it is a problem of unvalidated input values to parameters. SQLi is where we shove bad data into parameters – not a lack of parameters! The AntiSQLi library is simple and clever: it works like an app-side stored procedure, and like a stored procedure it forces data types on its input values. It essentially handles the casting operation to force type and length. AntiSQLi weeds out variables that don’t match the prescribed data type, and overly long variables in some cases. Obviously it cannot catch everything, but it does filter out many common and crude SQLi attacks. A better term would have been “un-cast query parameters”. Regardless of the terminology, though, I am happy to see innovation in this area. For years I have been recommending that developers build this functionality into their own reusable security libraries, but AntiSQLi is a quick and easy way to get started, and a nice tool to have in your toolbox.
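To make the cast-and-bind idea concrete, here is a minimal sketch of the approach. This is not AntiSQLi’s actual API – the spec format, helper function, table, and query are invented for illustration:

```python
# Hedged sketch of the cast-and-bind approach; this is not AntiSQLi's
# actual API. The spec format, table, and query are invented.
import sqlite3

def cast_params(spec, raw):
    """spec maps parameter name -> (type, max_len); reject non-conforming input."""
    clean = {}
    for name, (typ, max_len) in spec.items():
        value = typ(raw[name])  # the cast fails loudly on malformed input
        if len(str(value)) > max_len:
            raise ValueError(f"{name} exceeds declared length")
        clean[name] = value
    return clean

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

spec = {"id": (int, 10)}
params = cast_params(spec, {"id": "1"})  # "1" casts cleanly to int
row = conn.execute("SELECT name FROM users WHERE id = ?", (params["id"],)).fetchone()
print(row[0])  # alice

# A crude injection string never reaches the database -- the cast rejects it:
try:
    cast_params(spec, {"id": "1 OR 1=1"})
except ValueError:
    print("rejected")
```

The cast does the work: once the input is forced to an integer of bounded length, there is nothing left for the parser to misinterpret. Combined with bound parameters, this kills the crude injection strings before the query is ever built.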


Incite 6/26/2013: Camp Rules

June is a special time for us. School is over and we take a couple of weeks to chill before the kids head off to camp. Then we head up to the Delaware beach where the Boss and I met many moons ago, and put the kids on the bus to sleepaway camp. This year they are all going for 6 1/2 weeks. Yes, it’s good to be our kids. We spend the rest of the summer living vicariously through the pictures we see on the camp’s website. The title of today’s Incite has a double meaning. Firstly, camp does rule. Just seeing the kids renew friendships with their camp buddies at the bus stop, and how happy they are to be going back to their summer home, says it all. If it wasn’t for all these damn responsibilities I would be the first one on the bus. And what’s not to love about camp? They offer pretty much every activity you can imagine, and the kids get to be pseudo-independent. They learn critical life lessons that are invaluable when they leave the nest. All without their parents scrutinizing their every move. Camp rules! But there are also rules that need to be followed. Like being kind to their bunkmates. Being respectful to their counselors and the camp administrators. Their camp actually has a list of behavioral expectations, which we read with the kids and they must sign. Finally, they need to practice decent hygiene, because we aren’t there to make sure it happens. For the girls it’s not a problem. Three years ago, when XX1 came back from camp, she was hyper-aware of whether she had food on her face after a meal and whether her hair looked good. Evidently there was an expectation in her bunk about hygiene that worked out great. XX2 has always been a little fashionista and takes time (too much, if you ask me) with her appearance, so we know she’ll brush her hair and keep clean. We look forward to seeing what new look XX2 is going with in the pictures we see every couple of days. The Boy is a different story. 
At home he needs to be constantly reminded to put deodorant on, and last summer he didn’t even know we packed a brush for his hair. Seriously. He offered a new definition of ‘mophead’ after a month away. Being proactive, I figured it would be best if I laid out the camp rules very specifically for the Boy. So in the first letter I sent him, I reminded him of what’s important: Here is my only advice: Just have fun. And more fun. And then have some additional fun after that. That’s your only responsibility for the next 6 1/2 weeks. And you should probably change your underwear every couple of days. Also try not to wear your Maryland LAX shorts every day. Every other day is OK… The Boss thought it was pretty funny until she realized I was being serious. Boys will be boys – even 44-year-old boys… –Mike

Photo credit: “Outhouse Rules” originally uploaded by Live Life Happy

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

API Gateways: Access Provisioning; Security Enabling Innovation
Security Analytics with Big Data: Deployment Issues; Integration; New Events and New Approaches; Use Cases; Introduction
Network-based Malware Detection 2.0: Deployment Considerations; The Network’s Place in the Malware Lifecycle; Scaling NBMD; Evolving NBMD
Advanced Attackers: Take No Prisoners

Newly Published Papers

Email-based Threat Intelligence: To Catch a Phish
Network-based Threat Intelligence: Searching for the Smoking Gun
Understanding and Selecting a Key Management Solution
Building an Early Warning System
Implementing and Managing Patch and Configuration Management

Incite 4 U

You, yes you. You are a security jerk. @ternus had a great post about being an Infosec Jerk, which really hits on a core issue hindering organizations’ willingness to take security seriously. 
It comes down to an incentive problem, as most behaviors do. @ternus sums it up perfectly: “Never attribute to incompetence that which can be explained by differing incentive structures.” Developers and ops folks typically have little incentive to address security issues, but they do have incentive to ship code or deploy servers and apps. We security folks don’t add much to the top line, so we need to meet them more than halfway, and the post offers some great tips on how to do that. Also read The Phoenix Project to get a feel for how to make a process work with security built in. Or you can continue to be a jerk. How’s that working out so far? – MR

False confidence: No, it’s not surprising that most companies don’t use big data for security analytics, per the findings of a recent McAfee study. Most security teams don’t know what big data is yet, much less use it for advanced threat and event analysis. But the best part of the study was the confidence of the respondents – over 70% were confident they could identify insider threats and external attacks. Which is ironic, as that is roughly the percentage of breaches detected by people outside the victim organization. Maybe it’s not their security products that give them confidence, but the quality of their customers or the law enforcement agencies who notify them of breaches. But seriously, if we agree that big data can advance security, the reason most customers can’t harness that value is that they are waiting for their vendors to deliver, and the vendors are not quite there yet. – AL

You break it, you own it: Although it is very far from perfect, one of the more effective security controls in the Apple universe is the application vetting process. Instead of running an open marketplace, Apple reviews all iOS and Mac apps that come into their stores. They definitely don’t catch everything, but it is impossible to argue that this process hasn’t reduced the spread of malware – the number


Top 10 Stupid Sales/Press/Analyst Presentation Tricks

If you see any of these in a vendor sales/analyst presentation, run fast.

1. They open with “this is under NDA” or “this is confidential” and you have never signed an NDA.
2. The word “unique”. Especially in the same sentence as “industry leader”. If you are unique, you are, by definition, both the leader and the worst piece of crap out there. You do not want to be Schroedinger’s cat; it never ends well.
3. No screenshots of the product until slide 43, addendum 7, behind a slide that says “stairs out, beware of tiger”.
4. No slides describing how the technology works. Bonus points if they won’t tell you because a) they are in stealth mode, b) it is a trade secret, or c) their investors won’t let them talk about it until the patent is issued (expected August 12, 2046).
5. How they see the industry or world. Just tell us what problem you solve – we decide whether it is more important than the other 274 items on our to-do list. Bonus points if they refuse to skip this section when asked.
6. A slide of company logos you aren’t supposed to put on slides because it violates your contract. Always amusing when the same logo is in every competitor’s slide deck as well.
7. Any reference to Katrina, Pearl Harbor, or 9/11. Use chaff if they append “digital” to any of those words.
8. “We stop the APTs.” (Some grammar fails are worse than others.)
9. The term “insider threat”, unless you sell to prisons or proctologists.
10. Any reference to Edward Snowden, unless you are actually the NSA (or Booze Allen, but for other reasons).

I’m not trying to slam any vendor, and for the most part both the product people and the smart marketing execs I spend most of my time with roll their eyes at all of this as well, but man, it sure is happening a lot lately.


The Black Hole of DLP

I was talking to yet another contact today who reinforced that almost no one is sniffing SSL traffic when they deploy DLP. That means:

No monitoring of most major webmail providers.
No monitoring of many social networks.
No monitoring of Dropbox or other cloud storage services.
No monitoring of connections to any site that requires a login.

Don’t waste your money. If you aren’t going to use DLP to monitor SSL/TLS-encrypted web traffic, you might as well stick to email, endpoint, or other channels. I’m sure no one will siphon off sensitive stuff to Gmail. Nope, never happens. Especially not after you block USB drives.


Automation Awesomeness and Your Friday Summary (June 21, 2013)

I am intensely lazy. If you read anything by Tim Ferriss (the “4-Hour X” guy), you have heard him talk about the Minimum Effective Dose: what is the least you can do to achieve your objective? In some ways that’s how I define my life. Not that I am above hard work. You don’t swim/bike/run for 3-4 hours, climb mountains, hike the back bowls, or participate in intense all-day rescues without a little hard work. Sometimes I even enjoy getting my hands dirty – especially since I started spending most of my time at a desk. In other words, if something interests me, I’m all over it. But if it isn’t fun for me in some way, I will do everything in my power to minimize the time I need to spend on it. I’m on my third robot vacuum (a Neato, which is like a Cylon compared to the mousebot that is iRobot), pay a landscaper, have hired someone to clean my garage, and even confused a handyman by hiring him to install some home automation switches (I like the programming – just not shocking the crap out of myself because I’m too lazy to walk outside and hit the breaker). I relatively recently subscribed to FancyHands so I can email off requests to format papers, call various services that otherwise put you on hold for an hour, or research the nearest Mexican food to my current hotel. So I am really digging all the new automation options with cloud computing and our new API-driven world. This week I have been working on using Chef for security, and figuring out the interplay between Chef and Amazon Web Services or OpenStack to enhance security automation. Most of this is to have some advanced material on hand for our Black Hat cloud security class next month, but the fact that I am putting the work in probably means we will end up with one of those classes where nobody groks command lines. 
The first add-on will be using Chef and OpsWorks to build out the secure demo application stack we put together for the labs with one click, and push patches out to hundreds of systems with a second click (not that we will run hundreds – that might annoy Accounts Payable). If I have enough time I may write a Ruby app that simultaneously connects to AWS and Chef, monitors for any instances not managed by Chef, and instantly quarantines them and identifies the owner. (I have the pseudocode worked out but haven’t programmed Ruby much, so that will take some time.) Those are just two simple examples of integrating security automation. It wouldn’t be hard to extend the tool to automatically run vulnerability scans (randomly or after patch pushes), then use Chef to auto-patch noncompliant systems, and then kick off a report. You could even spin up a pen-testing instance inside the same Security Group, run a scan, send off the results, and shut it down automatically on completion. Heck, even these ideas are just scratching the surface. This kind of automation is powerful. If properly set up, it becomes extremely difficult for admins or developers to run anything that violates security policies. But it is a different way of thinking, and it requires different architectures so important things don’t go down when the Software Defined Security breaks them. Which it will – that’s what we actually want it to do. Anyway, I now need to go learn the absolute minimum amount of Chef and Ruby to hack together my demonstrations, and I’m about two weeks behind schedule. I might need to go outsource some of this to save myself some time…

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich at Macworld on Apple’s security design approach.
Rich at Dark Reading on security design. Noticing a trend?
Mike at Dark Reading on bug bounties (before the big Microsoft announcement – nice timing!).
Talking Head Alert: Adrian on Key Management. 
Favorite Securosis Posts

Adrian Lane: Microsoft Offers Six Figure Bounty for Bugs. Blue Hat Bug Bounties for Big Coin. Nice!
Rich: Network-based Malware Detection 2.0: Deployment Considerations. Great series.

Other Securosis Posts

Scamables.
How China Is Different.
Security Analytics with Big Data: Deployment Issues.
Project Communications.
API Gateways: Access Provisioning.

Favorite Outside Posts

Adrian Lane: Edge Services in the Cloud. Open source tools for building out client services in a massively scalable way. Look at the request lifecycle and you will probably get an idea of how security would be implemented as a series of HTTP filters. You can even ‘canary’ test specific users onto different code, perhaps routing to an intrusion deception model… This is some very cool stuff!
Rich: Dealing with eventual consistency in the AWS EC2 API. As we move into Software Defined Security, these sorts of issues will really annoy the f### out of us.
Rich (2): Had to add this one: I ain’t in Kansas anymore… The real world is tough.
Dave Lewis: Sr. Information Security Analyst. Take Dave’s old job!

Research Reports and Presentations

Email-based Threat Intelligence: To Catch a Phish.
Network-based Threat Intelligence: Searching for the Smoking Gun.
Understanding and Selecting a Key Management Solution.
Building an Early Warning System.
Implementing and Managing Patch and Configuration Management.
Defending Against Denial of Service (DoS) Attacks.
Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
Tokenization vs. Encryption: Options for Compliance.
Pragmatic Key Management for Data Encryption.
The Endpoint Security Management Buyer’s Guide.

Top News and Posts

Harvard Business Review Posts Terrible Advice for CEOs on Information Security.
Yahoo’s Very Bad Idea to Release Email Addresses.
US, Russia to install “cyber-hotline” to prevent accidental cyberwar.
Scores of vulnerable SAP deployments uncovered. 
Zeus Money Mule Recruiting Scam Targets Job Seekers.
Wearing a mask at a riot is now a crime.
Secret Sqrrl: NSA “spin-off” company releases data mining tool.
Pack your bags for possible jail term, judge tells IBM worker over disk row.
NSA leaks hint Microsoft may have lied about Skype security.

Blog Comment of the Week

This week’s best comment goes to Patrick, in response to API Gateways: Access Provisioning.


Full Disk Encryption (FDE) Advice from a Reader

I am doing some work on FDE (if you are using the Securosis Nexus, I just added a small section on it), and during my research one of our readers sent in some great advice. Here are some suggestions from Guillaume Ross @gepeto42:

Things to Check before Deploying FDE

Support

Ensure the support staff that provides support during business days is able to troubleshoot any type of issue or view any type of logs. If the main development of the product is in a different timezone, ensure this will have no impact on support. I have witnessed situations where logs were in binary formats that support staff could not read. They had to be sent to developers on a different continent. The back and forth for a simple issue can quickly turn into weeks when you can only send and receive one message per day. If you are planning a massive deployment, ensure the vendor has customers with similar types of deployments using similar methods of authentication.

Documentation

Look for a vendor who makes documentation available easily. This is no different than for any enterprise software, but due to the nature of encryption, and the impact software with storage-related drivers can have on your endpoint deployments and support, this is critical. (Rich: Make sure the documentation is up to date and accurate. We had another reader report on a critical feature removed from a product but still in the documentation – which led to every laptop being encrypted with the same key. Oops.)

Local and remote recovery

Some solutions offer a local recovery option that allows the user to resolve forgotten password issues without having to call support to obtain a one-time password. Think about what this means for security if it is based on “secret questions/answers”. Test the remote recovery process and ensure support staff have the proper training on recovery. 
Language

If you have to support users in multiple languages and/or multiple language configurations, ensure the solution you are purchasing has a method for detecting which keyboard layout should be used. It can be frustrating for users and support staff to discover that a symbol isn’t in the same place on the default US keyboard as on a Canadian French keyboard. Test this. (Rich: Some tools now offer on-screen keyboards to deal with this. Multiple users have reported it as a major problem.)

Password complexity and expiration

If you sync with an external source such as Active Directory, remember that most solutions offer offline pre-boot authentication only. This means expired passwords, combined with remote access solutions such as webmail or terminal services, can create support issues. Situation: a user goes home and brings their laptop. From home, on their own computer or tablet, they use an application published in Citrix, which prompts them to change their expired Active Directory password. The company laptop still has the old password cached. Consider making passwords expire less often if you can afford it, and consider trading complexity for length, which can also help avoid problems with minor keyboard mapping differences.

Management

Consider the management features offered by each vendor and how they tie into your current endpoint management strategy. Most vendors offer easy ways to configure machines to boot automatically for a certain period or number of boots to help with patch management, but is that enough for you to perform an OS refresh? Does the vendor provide all the information you need to build images, with the proper drivers, to refresh over an OS that has FDE enabled? If you never perform OS refreshes and instead give users new computers with the new OS, this is less of a concern. Otherwise, ask your vendor how you will upgrade encrypted workstations to the next big release of the OS.
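The trade of complexity for length suggested above can be quantified with a quick back-of-the-envelope entropy calculation. This is my own sketch, not from the reader's advice, and the character-set sizes are illustrative assumptions:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Brute-force entropy (in bits) of a password chosen uniformly
    at random: length * log2(charset size)."""
    return length * math.log2(charset_size)

# "Complex" 8-character password drawn from ~94 printable ASCII characters
short_complex = entropy_bits(94, 8)    # ≈ 52 bits
# Long passphrase: 20 characters from lowercase letters plus space (~27)
long_simple = entropy_bits(27, 20)     # ≈ 95 bits
```

The longer, simpler passphrase wins on raw entropy, and it also sidesteps the keyboard-mapping problem: lowercase letters sit in the same positions on US and Canadian French layouts, while symbols often do not.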
Authentication

There are countless ways to handle FDE authentication, and you may well need multiple solutions to meet the security requirements of different types of workstations.

  • TPM: Some vendors support TPMs combined with a second factor (PIN or password) to store keys, and some do not. Determine your authentication strategy. If you decide to use TPMs, be aware that the same computer model, sold in different parts of the world, can ship with different cryptographic components – some computers sold in China do not have a TPM. Apple computers no longer include a TPM, so a hybrid solution may be required if you need cross-platform support.
  • USB storage key: Another way to store the key separately from the hard drive. Users will leave these USB keys in their laptop bags, so ensure your second factor is strong enough. Assume USB storage is easier to copy than a TPM or a smart card.
  • Password sync or just a password: Avoids making users carry a USB stick or smart card – and, with password sync, a second set of credentials just to get up and running. However, it brings synchronization and keyboard mapping issues, and with sync a simple phishing attack on a user’s domain account could allow a stolen laptop to be booted.
  • Smart cards: More computers than ever include smart card readers. As with USB keys and TPMs, this is a neat way to keep the keys separate from the hard drive. Ensure you have a second factor such as a PIN, in case someone loses the whole bundle together.
  • Automatic booting: Most FDE solutions allow automatic booting for patch management purposes. While it is often necessary, enabling it permanently means everything needed to boot the computer is just one press of the power button away.
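The decision process above can be sketched in a few lines. This is a hypothetical helper of my own – the function names and device paths are assumptions, not part of any FDE product's API – probing the usual Linux TPM device nodes so a deployment script knows which authentication strategy is even possible on a given machine:

```python
import os

# Hypothetical helper, not from any FDE product: check the common Linux
# TPM device nodes to see whether TPM-backed pre-boot auth is an option.
def has_tpm(dev_paths=("/dev/tpm0", "/dev/tpmrm0", "/sys/class/tpm/tpm0")) -> bool:
    return any(os.path.exists(p) for p in dev_paths)

def pick_auth_strategy(tpm_present: bool, cross_platform: bool) -> str:
    """Mirror the advice above: TPM + PIN where a TPM is guaranteed,
    otherwise a separate token (USB key or smart card) plus a PIN."""
    if tpm_present and not cross_platform:
        return "tpm+pin"
    return "token+pin"  # USB storage key or smart card as second factor
```

A fleet that includes Macs or China-market hardware would call `pick_auth_strategy(has_tpm(), cross_platform=True)` and land on the token-plus-PIN fallback.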
Miscellaneous bits

Depending on your environment, FDE on desktops can have value. However, do not rush to deploy it on workstations used by multiple users (meeting rooms, training rooms, and other shared machines).


Scamables

A post at PCI Guru got my attention this week, talking about a type of rebate service called Linkables. They essentially provide coupon discounts without physical coupons: you get money off promotional items after you pay, rather than at the register. All you have to do is hand over your credit card. Really.

Linkables are savings offers that can be connected to your credit or debit card to deliver savings to you automatically after you shop. It’s a simple and convenient way to take advantage of advertisers’ online and offline promotions, with no coupons to clip and no paperwork after you shop. Offers can be used online and offline just by using your credit or debit card.

This idea is not really novel. Affinity groups have been providing coupons, cash, and price incentives for… well, forever. And Linkables is likely selling your transaction data, with the added bonus of not having to pay the major card brands or banks for it. Good revenue if you can get it. But for consumer security there is a big difference between someone like Visa embedding this type of third-party promotional application on a smart card – where Visa maintains control of your financial information – and handing your credit card number to a third party. I know we are supposed to be impressed that they have a “Level 1 PCI certification” – the kind of certification that is “good until reached for” – but the reality is that we have no idea how secure the data is. Sure, we hand credit cards to online merchants all the time, but the law provides some consumer protection. Will that be true if a third party like Linkables suffers a breach? There won’t be any protection if they lose your debit card number and your account is plundered. I would much rather hand over my password to a stranger for a candy bar than my credit card for 10 cents off dishwasher detergent, paid some time in the future. I can reset my password, but I cannot reset stupid.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.