Friday Summary: August 23, 2013

With seven trips in the last eight weeks – and it would have been 8 for 8 had I not been sick one week – I have been out of the office for most of the last two months. It almost feels weird blogging again, but there is going to be a lot to write about in the coming weeks, given the huge amount of research underway. Something really hit home the other day as I was finishing up a research project: every day I learn more about computer security, yet every day – on a percentage basis – I know less about computer security. Despite continuous research and learning, the field grows at what seems like an exponential rate. The number of new subject areas, threats, and response techniques grows faster than any person can keep up with. I was convinced that in the 90s you could ‘know’ pretty much everything you needed to know about computer security; that idea is now laughable. Every new thing with electrons running through it creates a new field for security. Hacking pacemakers, power meters, and vehicle computers is no longer surprising, and the profession continues to grow far beyond a single topic into hundreds of disciplines, each with distinct attack and defense perspectives. No one has a hope of being an expert in more than a couple of sub-disciplines. And I think that is awesome! Every year there is new stuff to learn, on both the ‘shock and awe’ attack side and the eternally complex side of defense.

What spawned this train of thought was Black Hat this year, where I saw genuine enthusiasm for security, and in many cases for some very esoteric fields of study. My shuttle bus to the airport was loaded with newbie security geeks talking about how quantum computing was really evolving and going to change security forever. Yeah, whatever; the point was the passion and enthusiasm they brought to Black Hat and BSides. Each conversation I overheard focused on one specific area of interest, but the discussions quickly led into other facets of security the participants might know nothing about – social engineering, encryption, quantum computing, browser hacking, app sec, learning how languages, processors, and subsystems work together… and on and on. Stuff I know nothing about, stuff I will never know about, yet many of the same types of attacks and vulnerabilities apply to each new device. Since most of us here at Securosis are now middle-aged and have kids, it is fun to see how each parent is dealing with the inevitability of their kids growing up with the Internet of Things – listening to Jamie and Rich spin different visions of a future where their kids are surrounded by millions of processors all trying to alter their reality in some way, and how they want to teach their kids to hack as a way to learn, to understand technology, and to take control of their environment. I may know less and less, but the community is growing vigorously, and that was a wonderful thing to witness.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Rich on Threatpost – How I Got Here. I got to do my third favorite thing: talk about myself.
• Dave Mortman on Big Data Security Challenges.
• Mike’s DR column “Prohibition for 0-day Exploits”.
• Mike quoted in CRN about the Proofpoint/Armorize deal.

Favorite Securosis Posts

• Rich: The CISO’s Guide to Advanced Attackers. Mike’s latest paper is great. Especially because I keep having people thank me for writing it when he did all the work. And no, I don’t correct them.
• Adrian Lane: Hygienically Challenged.
After 10 weeks of travel, I am all too familiar with this phenomenon. But after 3 days of fishing and hiking in the Sierras, I was one of those people. Sorry to the passengers on that flight.
• David Mortman: Research Scratchpad: Stateless Security.
• Mike Rothman: Lockheed-Martin Trademarks “Cyber Kill Chain”. “Cyberdouche” Still Available. A post doesn’t have to be long to be on the money, and this one is. I get the need to protect trademarks, but for that right you’ll take head shots. Cyberdouche FTW.

Other Securosis Posts

• “Like” Facebook’s response to Disclosure Fail.
• Research Scratchpad: Stateless Security.
• New Paper: The 2014 Endpoint Security Buyer’s Guide.
• Incite 8/21/2013 – Hygienically Challenged.
• Two Apple Security Tidbits.
• Lockheed-Martin Trademarks “Cyber Kill Chain”. “Cyberdouche” Still Available.
• IBM/Trusteer: Shooting Across the Bow of the EPP Suites.
• New Paper: The CISO’s Guide to Advanced Attackers.

Favorite Outside Posts

• Adrian Lane: Making Sense of Snowden. Look at my comments in the Incite a couple weeks back, and then read this.
• Chris Pepper: Darpa Wants to Save Us From Our Own Dangerous Data.
• Rich: Facebook’s trillion-edge, Hadoop-based and open source graph processing engine.
• David Mortman: Looking inside the (Drop) box.
• Mike Rothman: WRITERS ON WRITING; Easy on the Adverbs, Exclamation Points and Especially Hooptedoodle. Elmore Leonard died this week. This article he wrote for the NYT sums up a lot about writing. Especially this: “If it sounds like writing, I rewrite it.”

Research Reports and Presentations

• The 2014 Endpoint Security Buyer’s Guide.
• The CISO’s Guide to Advanced Attackers.
• Defending Cloud Data with Infrastructure Encryption.
• Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.
• Quick Wins with Website Protection Services.
• Email-based Threat Intelligence: To Catch a Phish.
• Network-based Threat Intelligence: Searching for the Smoking Gun.
• Understanding and Selecting a Key Management Solution.
• Building an Early Warning System.
• Implementing and Managing Patch and Configuration Management.

Top News and Posts

• Hackers for Hire.
• Bradley Manning Sentenced to 35 Years in Prison.
• Declassified Documents Prove NSA Is Tapping the Internet.
• ‘Next Big’ Banking Trojan Spotted In Cybercrime Underground.
• How the US (probably) spied on European allies’ encrypted faxes.
• Researcher finds way to commandeer any Facebook account from his mobile phone.

Blog Comment of the Week

This week’s best comment goes to michael hyatt, in response to Research Scratchpad: Stateless Security.

I think we’re working our way in that direction, though not as explicitly as you define it. But while


API Gateways: Buyer’s Guide

We will close out this series by examining key decision criteria to help you select an API gateway. We offer a set of questions to determine which vendor solutions support your API technically, as well as the features your developers and administrators need. These criteria can be used to check solutions against your design goals and to help you walk through the evaluation process.

Nota bene: use cases first. It is tempting to leap straight to a solution. After all, API development is a major trend, and security teams want to help solve API security problems. API gateways have been designed to let developers jump in quickly and easily. But there is no generic API security model good enough for all APIs. APIs are a glue layer, so the priorities and drivers come from analyzing your API use cases: what components you are gluing together, from what environment (enterprise, B2B, legacy, etc.), and to what environment (mobile, Internet of Things, third-party developers, etc.). This analysis provides crucial weighting for your priorities.

Product Architecture

• Describe the API gateway’s deployment model (software, hardware only, hardware + software, cloud, or something else).
• Describe the scalability model. Does the API gateway scale horizontally or vertically?
• What connectors and adapters to other software and cloud services are included?
• How are new versions and updates handled?
• What key features distinguish your product from competitors?

Access Provisioning and Developer Power Tools

• What credentials and tokens does the API gateway support for developers and API consumers?
• How is access governed?
• What monitoring, management, and metrics features does the gateway offer?
• Does the product offer client-side helper SDKs (iOS, Android, JavaScript, etc.) to simplify API consumer development?
• Describe a typical “day in the life” of a developer, from registering a new API to production operationalization.
• Describe out-of-the-box self-service features for registering new APIs.
• Describe out-of-the-box self-service features for acquiring API keys and tokens.
• Describe out-of-the-box self-service features for testing APIs.
• Describe out-of-the-box self-service features for versioning APIs.
• Describe how your API catalog helps developers understand the available APIs and how to use them.

Development

• What integration is available for source code and configuration management?
• What languages and tools are required to develop wrappers, adapters, and extensions for the product?
• What continuous integration tools (e.g., Jenkins) does your product work with?

Access Control

• How are API consumers authenticated?
• How are API calls from API consumers authorized?
• What level of authorization granularity is checked? Describe where role, group, and attribute level authorization can be enforced.
• What out-of-the-box features does the API gateway have for access key issuance, distribution, and verification?
• What out-of-the-box features does the API gateway have for access key lifecycle management?
• What tools are used to define technical security policy?
• Describe support for delegated authorization.
• What identity server functionality is available in the API gateway (e.g., OAuth Authorization Server, OAuth Resource Server, SAML Identity Provider, SAML Relying Party, XACML PEP, XACML PDP)?
• What identity protocol flows are supported, and what role does the API gateway play in them?

Interoperability

• What identity protocols and versions are supported (OAuth, SAML, etc.)?
• What directories are supported (Active Directory, LDAP, etc.)?
• What application servers are supported (WebSphere, IIS, Tomcat, SAP, etc.)?
• What service and security gateways are supported (DataPower, Intel, Vordel, Layer7, etc.)?
• Which cloud applications are supported?
• Which mobile platforms are supported?

Security

• Describe support for TLS/SSL. Is client-side TLS/SSL (“2-way mutual authentication”) supported? How?
• Describe the API gateway’s support for URL whitelisting.
• What out-of-the-box functionality is in place to deal with injection attacks such as SQL injection?
• How does the product defend against malicious JavaScript?
• How does the gateway defend against URL redirect attacks?
• How does the gateway defend against replay attacks?
• What is the product’s internal security model? Is Role-Based Access Control supported, and where? How is access audited?

Cost Model

• How is the product licensed? Does cost scale with the number of users, the number of servers, or another criterion?
• What is the charge for adapters and extensions?

This checklist offers a starting point for analyzing API gateway options. Review product capabilities to identify the best candidate, keeping in mind that integration is often the most important criterion for successful deployment. It is not as simple as picking the ‘best’ product – you need to find one that fits your architecture and is amenable to development and operation by your team.


Database Denial of Service: Countermeasures

Before I delve into the meat of today’s post, I want to say that the goal of this series is to aid IT security and database admins in protecting relational databases from DoS attacks. During the course of this research I have heard several rumors of database DoS, but have not found anyone willing to go on record or even provide details anonymously. That is too bad – this type of information helps the community, and helps reduce the number of companies affected. Another interesting note: we have been getting questions from network IT and application management teams rather than DBAs. In hindsight this is not so surprising – network security is the first line of defense, and cloud database service providers (e.g., ISPs) don’t have database security specialists. Now let’s take a look at database DoS countermeasures.

There is no single way to stop database DoS attacks. Every feature is a potential avenue for attack, so no single response can defend against everything, short of taking the database off the Internet entirely. The good news is that there are multiple countermeasures at your disposal, both detective and preventative, and most preventative security measures are essentially free – all you need to do is put in the time to patch and configure your databases. But if your databases are high-profile targets, you need to employ both preventative and detective controls to provide reasonable assurance they won’t be brought down. It is highly unlikely that you will ever totally stop database DoS, but the following countermeasures mitigate the vast majority of attacks.

Configuration: Reduce the attack surface of a database by removing what you don’t need – you cannot exploit a feature that isn’t there. This means removing unneeded user accounts, communications protocols, services, and database features. A feature may have no known issues today, but that doesn’t mean none are awaiting discovery. Relational databases are very mature platforms, packed full of features to accommodate various deployment models and uses for many different types of customers. If your company is typical, you will never use half of them. But removal is not easy: it takes work on your part to identify what you don’t need, and to either alter database installation scripts or remove features after the fact. Several database platforms provide the capability to limit resources on a per-user basis (such as queries per minute, or throttling of memory and processors), but in our experience these tools are ineffective. As with the judo example in our earlier post on attacks, attackers use resource throttling against you, to starve out legitimate users. Some firms rely on these options for graceful degradation, but your implementation needs to be very well thought out to keep these limits from impinging on normal operation.
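To make the removal work concrete, here is a minimal sketch of a script that turns deny-lists of unneeded accounts and features into a hardening checklist. All account and feature names are hypothetical placeholders, and the actual statements are vendor-specific – build your own lists by identifying what your applications really use:

```python
# Minimal sketch: generate attack surface reduction statements from deny-lists.
# Account and feature names below are hypothetical placeholders, not
# recommendations for any specific database platform.

UNNEEDED_ACCOUNTS = ["demo_user", "sample_app", "legacy_batch"]
UNNEEDED_FEATURES = ["http_endpoint", "remote_rpc", "java_procs"]

def hardening_statements():
    statements = []
    for account in UNNEEDED_ACCOUNTS:
        # Fewer accounts means fewer login targets (including for
        # lockout-style DoS against legitimate users).
        statements.append(f"DROP USER {account};")
    for feature in UNNEEDED_FEATURES:
        # Most platforms disable features through vendor-specific commands
        # or configuration, not standard SQL - flag them for manual review.
        statements.append(f"-- TODO: disable feature '{feature}' (vendor-specific)")
    return statements

if __name__ == "__main__":
    print("\n".join(hardening_statements()))
```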
Patching: Many DoS attacks exploit bugs in database code – buffer overflows, mishandling of malformed network protocols or requests, memory leaks, and poorly designed multitasking have all been exploited. These are not the types of issues you or your DBA can address without vendor support. A small portion of these attacks can be stopped by database activity monitoring and firewalls, as discussed below, but the only way to completely fix these issues is to apply a vendor patch. And the vendor community, after a decade of public shaming by security researchers, has recently been fairly responsive in providing patches for serious security issues. The bad news is that enterprises patch their databases every 14 months on average, choosing functional stability over security despite quarterly security patch releases. If you want to ensure bugs and defects don’t provide an easy avenue for DoS, patch your databases.

Database Activity Monitoring: One of the most popular database protection tools on the market, Database Activity Monitoring (DAM) alerts on database misuse. These platforms inspect incoming queries to see whether they violate policy. DAM has several methods for detecting bad queries, with examination of query metadata (user, time of day, table, schema, application) the most common. Some DAM platforms offer behavioral monitoring, baselining user behavior to define ‘normal’ and alerting when users deviate. Many vendors offer SQL injection detection by inspecting the contents of the WHERE clause for known attack signatures. Most DAM products are deployed in monitor-only mode, alerting when policy is violated. Some also offer an option to block malicious queries, either through an agent or by signaling a reverse proxy on the network. Database monitoring is a popular choice because it combines a broad set of functions, including configuration analysis and other database security and compliance tools.

Database Firewalls: We may think of SELECT as a simple statement, but some variations are not simple at all. Queries can get quite complex, enabling users to perform all sorts of operations – including malicious actions which can confuse the database into performing undesired work. Every SQL query (SELECT, INSERT, UPDATE, CREATE, etc.) has dozens of options, allowing hundreds of variations. Combined with different variables in the FROM and WHERE clauses, they produce thousands of permutations, and malicious queries can hide in this complexity. Database firewalls sit between the application server and the database, and block malicious queries by understanding both legitimate query structures and which query structures the application is allowed to use. Database firewalls whitelist and blacklist queries for fine-grained filtering of incoming database requests, blocking non-compliant queries. This shrinks the vast set of possible queries to a small handful of allowed ones. Contrast this against the more generic approach of database monitoring: alerting on internal user misuse and detecting SQL injection. DAM is excellent for known attack signatures and suspect behavior patterns, but database firewalls reduce the threat surface by allowing only known query structures to reach the database, leaving a greatly reduced set of possible complex queries or defect exploitations.

Web Application Firewall: We include web application firewalls (WAF) in this list because they block known SQL injection attacks and offer some capability to detect database probing. For the most part they do not address database denial of service, beyond blocking specific queries or access to network ports external users should never see.

Application and Database Abstraction


Friday Summary: Cloud Identity Edition

One of my favorite industry events was last week: the 2013 Cloud Identity Summit. Last year’s was in Vail, Colorado, so I thought this year couldn’t top that. Wrong. This year was at the Meritage in Napa – a nice hotel, a nice Italian restaurant, stunningly helpful staff, and perfect weather made for a great week. And while I was sorely tempted to tour the Napa Valley, I found the sessions too compelling to skip out. Here are a few of the highlights:

• AZA vs. KNOX: As I mentioned earlier this week, while 2012 centered on infrastructure and identity standards (OAuth, OpenID Connect, and SAML) to enable cloud services, 2013 focused on mobile client authentication and Single Sign-On. SSO is still the challenge, but now primarily for mobile devices, and that is not yet fully sorted. This is important because mobile security is itself an identity problem. These technologies give you a glimpse of where we are going after BYOD, MDM, and MAM. Between my KNOX vs. AZA mobile throwdown and Gunnar’s Counterpoint: KNOX vs. AZA throwdown, we covered the high points of the discussion.

• WebDevification: An informal poll – okay, the dozen or so people I asked – felt Eve Maler’s presentation was the best of the week. Her observations on the ‘webdevification’ trend that mashes up third-party APIs, cloud, and mobile really hit the conference’s central themes. API gateways, and authentication tools like OAuth that support this evolution, are turning traditional development paradigms on their ears. More importantly, from a security standpoint, they show that we can build security in without requiring developers to be security experts.

• Slow cloud IAM adoption curve: Like the cloud in general, adoption of IdaaS has been somewhat slow, and moving to IdaaS is conceptually daunting. I liken the change to moving from an Earth-centric to a sun-centric view of the solar system: with IAM we are moving from an on-premise to a cloud-centric view of IT. Ping’s CEO Andre Durand did a nice job outlining the typical client maturity curve of SSO, to SaaS integration, to federation, to IdaaS, but the industry as a whole is still struggling at the halfway point. Why? Complexity and compliance. Complexity, because federated identity has a lot of moving parts, and how we do fine-grained authorization and provisioning is still undecided. More worrisome is moving confidential data outside the enterprise without appropriate security and compliance controls. These controls and reports exist, but enterprises don’t trust them… yet. But Andre made a great point: we had the same reservations about email, yet once we standardized on the SMTP interface, email became a commodity. The result was firms like Hotmail, and now most firms rely on outsourced email services.

• 2FA on mobile: At CIS I tweeted: “Am I still the only one who thinks mobile browser based 2FA is kludgy?” SMS would be my first choice, but it is not available on all devices. HTTPS is a secure protocol available on all mobile platforms, so it seems like a great choice. But my problem is not the protocol – it’s the browser. Don’t design a new security system around one of the most problematic products for security. XSS and CSRF still apply, and building new systems on top of vulnerable ones just enables a whole new class of attacks. Better to find a secure way to pass a challenge to mobile devices – otherwise use thumbprints, eyeball scans, voice, or facial recognition instead.
• FIDO: Due to the difficulty of standardizing authentication on different mobile platforms, the FIDO Alliance (Fast IDentity Online) is developing an open user authentication standard. I hadn’t paid close attention to this effort before the conference, but what they presented was a sensible approach to minimum requirements for authenticating a user on a mobile device. Befitting the conference theme, their idea is to minimize use of passwords, enable easier/better/faster authentication, and help the community link cloud services together. This is one of the few clean and simple identity standards I have seen, so I recommend taking a quick look.

CIS is still a young conference, and still very developer-centric, which I find refreshing. But the amazing aspect is that it’s a family event: of 800 people, about 200 were wives and children of attendees. Each night a hundred-plus kids played right alongside the evening festivities. This is the only ‘community’ trade event I have been to that is actually building a real community. I highly recommend CIS if you are interested in learning about the cutting edge of identity and authorization.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Mike’s DR post on Controlling the Big 7.

Favorite Securosis Posts

• Adrian Lane: The Temptation of the Developer. A scarier “insider threat”.
• David Mortman: Intel Software Guard Extensions (SGX) Is Mighty Interesting.
• Mike Rothman: Counterpoint: KNOX vs. AZA Throwdown. Great research (or anything, really) requires an idea, and then smart folks to poke holes in it to make it better. It was great to see Gunnar offer counterpoints to Adrian’s post. That’s why we hang out with smart guys: they make us smarter.
• Rich: PCI Standards Flow Downhill. Ah, PCI.

Other Securosis Posts

• Google may offer client-side encryption for Google Drive.
• Incite 7/17/2013: 80 años.

Favorite Outside Posts

• David Mortman: How Experts Think.
• Mike Rothman: Dropbox, WordPress Used As Cloud Cover In New APT Attacks. Hiding in plain sight. With cloud services aplenty we will see much more of this – which makes detection that much harder.
• Adrian: Malware Hidden Inside JPG EXIF Headers. There are too many ways to abuse users through browsers.
• Rich: Kali Linux on a Raspberry Pi. Years ago I struggled to get Metasploit running on my wireless router as part of my DEFCON research. I never pulled it off, but this sure would have made life easier.

Research Reports and Presentations

• Quick Wins with Website Protection Services.
• Email-based Threat Intelligence: To Catch a Phish.
• Network-based Threat Intelligence: Searching for the Smoking Gun.
• Understanding and Selecting a Key Management Solution.
• Building an Early Warning System.
• Implementing and Managing Patch and Configuration Management.
• Defending Against Denial of Service (DoS) Attacks.
• Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.


PCI Standards Flow Downhill

Payment gateways and payment processors have to pass PCI requirements just like merchants do. And they don’t like it any more than you do, as evidenced by a recent post by Stephen Ames of Shift4. He is pissed about a new interpretation of PA-DSS, provided to his QSA outside the officially published guidance and standards, which places PA-DSS section 4.2.7 always in scope. From the post:

However, my PA-QSA stated that PA-DSS Requirement 4.2.7 is now always in scope, regardless of whether or not there is a user database within the application. … I’ve searched the PA-DSS for a security requirement that aligns with PCI DSS 11.5 – File Integrity Monitoring – and there are none. I’m certain that most application vendors would not take responsibility for file integrity monitoring at merchant sites. And I’m unable to understand why the SSC is forcing that upon application vendors, when they don’t even have that requirement written into the PA-DSS. I searched the PCI FAQ database and found no reference to a reinterpretation of PA-DSS Requirement 4.2.7 requiring vendors to take responsibility for file integrity monitoring of their PA-DSS applications running in merchant environments. Once again, PA-DSS Requirement 4.2.7 aligns with DSS Requirement 10.2 and user access, not DSS Requirement 11.5.

… and …

“The SSC sends out compliance guidance to the assessor community.” … it now appears the PCI SSC has fallen back into its old ways of keeping participating organizations in the dark.

While file activity monitoring – and database activity monitoring as well – are often used as compensating controls for PCI-DSS section 10.2, they are not prescribed in the standard. But rather than accept an ‘always-on’ requirement – and what policies would be appropriate without a database to monitor? – Mr. Ames is trying to engage the community to devise a rational policy for when to apply monitoring and when not to. Unfortunately Stephen is not going to get a better response than “those assessors are drinking the PCI Kool-Aid” – regardless of whether his arguments make sense. The assessors cannot respond. Several assessors I know have received phone calls from the PCI Council after writing blog posts or comments that interpreted – or worse, ameliorated – PCI scripture. They were reminded that they must always frame the PCI standard in a positive light, or forfeit their ability to remain assessors. So no frank public discussion will take place. This sort of thing has been going on for a long time, with no signs of getting better. The PCI Council publishes the PCI standards, which are insulated from public critique by the mandatory agreements signed by assessors and participating organizations. So even the most knowledgeable parties who advised the council can’t speak out, because that would break their agreements! That’s why, when things like non-guidance guidance are published, there is little subsequent discussion. By design, information only flows in one direction: downhill.


FireStarter: KNOX vs. AZA mobile throwdown

A group of us were talking about key takeaways from the 2013 Cloud Identity Summit, held last week in Napa. CIS 2012 focused on getting rid of passwords, and the conversation centered on the infrastructure and identity standards – OAuth, OpenID Connect, and SAML – that provide tools to authenticate users to cloud services. 2013 was still about minimizing use of passwords, but the focus moved to the client side, where the rubber meets the road with mobile apps. Our discussion highlighted different opinions on the two principal models presented at the conference for solving single sign-on (SSO) on mobile devices. One model, the Authorization Agent (AZA), is an app that handles authentication and authorization services for other apps. The other, KNOX, is a Samsung-specific container that provides SSO to apps inside the container.

It’s heartening to hear developers stress that unless they get the end user experience right, the solution will not be adopted. No disagreement there, but buyers have other issues of equal importance, and I think we are going to see mobile clients embrace these approaches over the next couple of years, so it is worth discussing the issues in an open public forum. So I am throwing out the first pitch in this debate.

Statement

I believe the KNOX “walled garden” mobile app authentication model offers a serious challenge to Authorization Agents (AZA) – not because KNOX is technically superior, but because it provides a marginally better user experience while offering IT better management, stronger security, and a familiar approach to mobile apps and data security. I expect enterprises to be much more comfortable with the KNOX approach, given the way they prefer to manage mobile devices. I am not endorsing a product or a company here – just saying I believe the subtle difference in approach is very important to buyers.

Problem

User authentication on mobile devices must address a variety of goals: a good user experience, not passing user IDs and passwords around, single sign-on, support for flexible security tokens, Two-Factor Authentication (2FA) or equivalent, and data security controls – just to name a few. But the priority is to provide single sign-on for corporate applications on mobile devices. Unfortunately the security model in most mobile operating systems is primarily intended to protect apps from other apps, which makes SSO – which must manage authentication for multiple other apps – a difficult problem. Today you need to supply credentials for every app you use, and some apps require re-authentication whenever you switch between apps. It gets even worse if you use lengthy passwords and a password manager – the process looks something like this: you start the app you need to run, bounce over to the password manager, log into the password manager, grab credentials, bounce back to the original application, and finally supply credentials (hopefully pasting them in so you don’t forget or make an invisible typo). At best it’s a pain in the ass.

Contrasting Approaches

Two approaches were discussed during CIS 2013. I will simplify their descriptions, probably at the expense of precision, so please comment if you believe I have mischaracterized either solution. First let’s look at the AZA workflow for user authentication. The AZA ‘agent’ based solution is essentially an app that acts as a gateway to all other (corporate) apps. It works a bit like a directory listing, available once the user authenticates to the AZA agent. The workflow is roughly:
a. The app validates the user name and password (1).
b. The app presents a list of apps which have been integrated with it.
c. The user selects the desired app, which requests authentication tokens from an authorization server (2).
d. The tokens enable the mobile app to communicate with the cloud service (Box, Evernote, Twitter, etc.). If the service requires two-factor authentication, the user may be presented with a browser-based token (3) to supplement their username and password.
e. The user can now use the app (4).

For this to work, each app needs to be modified slightly to cooperate with the AZA. KNOX is also an agent, but not a peer to other apps – instead it is a container that manages apps. The KNOX (master) app collects credentials much as AZA does, and once the container app is opened it also displays all the apps KNOX knows about. The user-visible difference is that you cannot go directly to a corporate app without first validating access to the container. But the more important difference, for data security, is that the container provides additional protection to its apps and stored data. The container can verify stack integrity, which direct application logins cannot. KNOX also requires apps to be slightly modified to work within the container, but it does not require a different authentication workflow. User authentication for KNOX looks like this – but not on iOS:

Rationale

Both approaches improve on standalone password managers, and each offers SSO, but AZA is slightly awkward because most users instinctively go directly to the desired app – not to the AZA service. This is a minor annoyance from a usability standpoint, but a major management issue – IT wants to control app usage and data. Users will forget, and log directly into productivity apps rather than through the AZA, if they can. To keep this from happening, AZA providers need app vendors to alter their apps to a) check for the presence of an AZA, b) force users through the AZA if present, and c) pass user credentials to the AZA. The more important issue is data security and compliance as drivers for mobile technologies. The vast majority of enterprises use Virtual Desktop Infrastructure (VDI) to manage mobile data and security policy, and the KNOX model mirrors the VDI model: a secure, controlled container, rather than a loosely-coupled federation of apps linked to an authorization agent. A container provides a clear control model which security organizations are comfortable with today. A loose confederation of applications cannot guarantee data security or policy enforcement the way containers can. One final point on buying centers: buyers do not look for the ‘best’


API Gateways: Implementation

APIs go through a software lifecycle, just like any other application. The API owner develops, tests, and manages code as before, but when they publish new versions the API gateway comes into play. The gateway is what implements operational policies for APIs – serving as a proxy to enforce security, throttle applications, log events, and route API requests. Exposing APIs and parameters as the API owner grants access to developers is a security risk in and of itself: injection attacks, semantic attacks, and any other way for an attacker to manipulate API calls are fair game unless you filter requests. Today’s post focuses on implementing security controls through the API gateway, and how the gateway protects the API.

Exposing APIs

What developers get access to is the first step in securing an API. Some API calls may not be suitable for developers – some features and functions are only appropriate for internal developers or specific partners. In other cases some versions of an API call are out of date, or internal features have been deprecated but must be retained for limited backward compatibility. The API gateway determines what a developer gets access to, based on their credentials. The gateway helps developers discover which API calls are available to them – with all the associated documentation, sample scripts, and validation tools – but behind the scenes it also constricts what each developer can see. The gateway exposes new and updated calls to developers, and acts as a proxy layer to reduce the API attack surface. It may expose different API interfaces to different developers, depending on which credentials they provide and the authorization mapping defined by the API owner. Most gateway providers actually help with the entire production lifecycle of deployment, update, deprecation, and deletion – all based on security and access control settings.

URL whitelisting

We define ‘what’ an application developer can access when we provision the API – URL whitelisting defines ‘how’ it can be used. It is called a ‘whitelist’ because anything that matches it is allowed, and non-matching requests are dropped. API gateways filter incoming requests according to the rules you set, validating that requests meet formatting requirements; this catches and stops some mistakes, and prevents unauthorized requests from proceeding. Whitelisting can restrict which capabilities are available to different groups of developers, as well as which features are accessible to external requests, and the gateway also prevents direct access to back-end services. Incoming API calls run through a series of filters which check the general correctness of request headers and API call format. Calls that are too long, have missing parameters, or otherwise clearly fail to meet the specification are filtered out. Most whitelists are implemented as a series of filters, which lets the API owner add checks as needed and tune how API calls are validated. The API owner can add or delete filters as desired; each platform comes with its own pre-defined URL filters, but most customers create and add their own.
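To illustrate the filter-chain approach, here is a minimal sketch of whitelist-style request validation. The paths, limits, and required parameters are hypothetical examples, not any vendor’s actual configuration:

```python
# Minimal sketch of a whitelist filter chain: a request is dropped unless
# every filter matches. Paths, limits, and parameter names are illustrative
# assumptions only.
from urllib.parse import urlparse

MAX_URL_LENGTH = 2048
ALLOWED_PATHS = {"/v1/orders", "/v1/customers"}            # hypothetical API calls
REQUIRED_PARAMS = {"/v1/orders": {"api_key", "order_id"}}  # hypothetical spec

def check_length(url, params):
    return len(url) <= MAX_URL_LENGTH

def check_path(url, params):
    return urlparse(url).path in ALLOWED_PATHS

def check_required_params(url, params):
    required = REQUIRED_PARAMS.get(urlparse(url).path, set())
    return required.issubset(params)

# Filters run in sequence; the API owner can add, remove, or tune them.
FILTERS = [check_length, check_path, check_required_params]

def allow_request(url, params):
    return all(f(url, params) for f in FILTERS)

print(allow_request("/v1/orders", {"api_key": "k", "order_id": "42"}))  # True
print(allow_request("/v1/admin", {}))                                   # False
```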
Parameter parsing (injection attacks: XML, JSON, CSRF)

Attackers target application parameters – a traditional way to bypass access controls and gain unauthorized access to back-end resources – so API gateways also provide capabilities to examine user-defined content. “Parameter parsing” is examination of user-supplied content for specified attack signatures, which may identify attacks or API misuse. Content inspection works much like a blacklist, identifying known malicious API usage. Tests typically include regular expression checks of headers and content for SQL injection and cross-site scripting. Parameters are checked sequentially, one rule at a time. Some platforms provide ways to programmatically extend checking, altering both which checks are performed and how parameters are parsed, depending on the API call. For example, you might check the contents of an XML stream both for structure and to ensure it does not contain binary code. API gateways typically provide packaged policies with content signatures for known malicious parameters, but the API owner determines which policies are deployed.

Our next post will offer a selection guide – with specific comments on deployment models, evaluation checklists, and key technology differentiators.
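As a postscript, here is a minimal sketch of the sequential signature checks described above. The regex patterns are deliberately simplistic illustrations – real gateway signature sets are far more extensive and carefully tuned:

```python
# Minimal sketch of sequential parameter parsing against attack signatures.
# These regexes are simplified illustrations, not a production blacklist.
import re

SIGNATURES = [
    ("sql_injection", re.compile(r"('|--|;)\s*(or|and|union|drop)\b", re.I)),
    ("xss",           re.compile(r"<\s*script\b", re.I)),
]

def scan_parameters(params):
    """Check each parameter value against each signature, one rule at a time."""
    findings = []
    for name, value in params.items():
        for rule_name, pattern in SIGNATURES:
            if pattern.search(value):
                findings.append((name, rule_name))
    return findings

# Example: a WHERE-clause style injection hidden in a query parameter.
suspect = {"order_id": "42' OR 1=1 --", "note": "hello"}
print(scan_parameters(suspect))   # [('order_id', 'sql_injection')]
```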


RSA Acquires Aveksa

EMC has announced the acquisition of Aveksa, one of the burgeoning players in the identity management space. Aveksa will be moved into the RSA security division, and no doubt merged with existing authentication products. From the Aveksa blog:

… business demands and the threat landscape continue to evolve, and organizations now expect even more value from IAM platforms. As a standalone company, Aveksa began this journey by connecting our IAM platform to DLP and SIEM solutions – allowing organizations to connect identity context, access policies, and business processes to these parts of the security infrastructure. This has been successful, and also led us to recognize the massive and untapped potential for IAM as part of a broader security platform – one that includes Adaptive Authentication, GRC, Federation, and Security Analytics.

At first blush it looks like RSA made a good move, identifying its weakest solution areas and acquiring a firm that provides many of the missing pieces it needs to compete. RSA has been trailing in this space, focusing most of its resources on authentication and filling gaps with partnerships rather than building its own products. It has lagged in provisioning, user management, granular role-based access, and – to a lesser extent – governance. Some of RSA’s recent product advancements, such as risk-based access control, directly address customer pain points. But what happens after authentication is the real question, and that is the question this purchase is intended to answer. Customers have been looking for platforms that offer the back-end plumbing needed to link existing business systems together, and the Aveksa acquisition correctly targets the areas RSA needs to bolster. It looks like EMC has addressed a need with a proven solution, and acquired a reasonable customer base for its money. We expect to see more moves like this in the mid-term, as more customers struggle to coalesce authentication, authorization, and identity management – which have been turned on their heads by cloud and mobile computing demands – into more unified product suites.


Proactive WebAppSec

Earlier this week rsnake blogged about the Top 10 Proactive Web Application Security Measures. It is a very good set of recommendations, and a highly recommended read for web application developers and webmasters alike. Two examples:

anti-CSRF cryptographic nonces on all secure functions: We recommend building nonces (one time tokens tied to user sessions) into each form and validating that to ensure that your site can’t be forced to perform actions. This can be a huge pain to retrofit because it means touching a database or shared memory on every hit – not to mention the code that needs to be inserted into each page with a form and subsequent function to validate the nonce.

DAL (data/database access layer): DALs help to prevent SQL injection. Few companies know about them or use them correctly, but by front ending all databases with an abstraction layer many forms of SQL injection simply fail to work because they are not correctly formed. DALs can be expensive and extremely complex to retrofit because every single database call requires modification and interpolation at the DAL layer.

What I appreciate most is that the recommendations are direct, development-centric responses to security issues with web apps. Unfortunately most companies don’t think critically and ask, “How should I solve this web security problem?” The more common approach is to wonder, “What’s the cheapest, fastest way to minimize the issue?” That is not necessarily wrong, but that difference in mindset is why most customers go for bolt-on partial solutions, and it will probably keep people from embracing these sound recommendations. Rsnake stresses that these ideas are best implemented before deployment, but I would argue that agile web application development teams can still retrofit them without too much pain. I will drill into a few of these recommendations in coming posts, as I have been fortunate enough to implement several of these ideas at previous companies and can offer advice. A couple of his recommendations are far outside the norm: I am willing to bet you have never encountered a database abstraction layer deployed for security, and while you have probably heard of immutable logs and security frameworks in source code, you have likely never used them. That’s because you are probably using WAF, DAM, log management, and piecemeal SQLi protection instead. The depth of rsnake’s experience could fill volumes – he is pulling from years of experience and hitting only the highlights – and these recommendations warrant more discussion. The recommendations themselves are really good, and the tools are not that difficult to build – the difficulties are in the management and deployment considerations.
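As a small taste of how approachable the building blocks are, here is a minimal sketch of the anti-CSRF nonce idea quoted above – one-time tokens tied to a user session, embedded in each form and validated on submission. The in-memory session store is purely illustrative; as rsnake notes, real deployments touch a database or shared memory on every hit:

```python
# Minimal anti-CSRF nonce sketch: one-time tokens tied to user sessions.
# The in-memory dict below stands in for the database or shared memory a
# real application would use.
import hmac
import secrets

session_nonces = {}  # session_id -> set of outstanding nonces

def issue_nonce(session_id):
    """Generate a nonce for a form and remember it for this session."""
    nonce = secrets.token_urlsafe(32)
    session_nonces.setdefault(session_id, set()).add(nonce)
    return nonce  # embed as a hidden form field

def validate_nonce(session_id, submitted):
    """Accept each nonce exactly once; reject forged or replayed requests."""
    outstanding = session_nonces.get(session_id, set())
    for nonce in outstanding:
        if hmac.compare_digest(nonce, submitted):  # constant-time compare
            outstanding.remove(nonce)
            return True
    return False

sid = "session-123"
token = issue_nonce(sid)
print(validate_nonce(sid, token))   # True  - first use succeeds
print(validate_nonce(sid, token))   # False - replay is rejected
```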


Database Denial of Service: Attacks

Today’s post discusses database denial of service attacks, so later we can consider how to stop them. From the security researcher’s perspective, I cannot help but be impressed by the diversity of database DoS attacks. Many such attacks are pretty dumb – they seem to be written by people who do not understand SQL, writing horrible queries that are the opposite of efficient. Some exploits are so simple – yet clever – that we are amazed the targeted vulnerabilities were not found in quality assurance tests. But dumb or not, these attacks are effective. For example, you could start a couple different searches on a website, choose a very broad list of values, and hit ‘search’: the backend relational system starts to look at every record in every table, chewing up memory and waiting on slow disk reads. Let’s look more closely at a couple different classes of denial of service attacks.

Abuse of Functions

The abuse of database functions is, by my count of reported DoS-related vulnerabilities, the single most common type of database DoS attack. There have been hundreds, and it seems like no externally accessible feature is safe. This class of attack is a bit like competitive judo: as you shift your weight in one direction, your opponent pushes you in the same direction to make you lose your balance and fall over. A judo expert uses your weight against you, just as an attacker uses database features against you. For example, if you implement restrictions on failed logins, attackers may try bad passwords until they lock all legitimate users out. If you implement services that automatically scale up to handle user requests, attackers can scale the database up until it collapses under its own weight, the bill becomes ruinous, or you hit a billing threshold and service is shut down. There is no single attack vector, but a whole range of ways to misuse database features. This class of attack essentially gets a database function to misbehave – typically when a database command confuses the database, the query parser, or a sub-function enough to lock up or crash. Relational databases are complex gestalts of many interdependent processes, so the loss of a single service can often cause the entire database to grind to a halt. One example is an attacker sending malformed Remote Procedure Calls, incomprehensible to the parser, which cause it to simply stop. Malformed XML and TDS calls have been used the same way, as have SNMP queries. Pretty much every database communication protocol has, at one time or another, been fooled or subverted by requests that are formatted correctly but violate the expectations of the database developers in some way that causes a problem. SQL injection is the best known type of functional abuse: SQL strings are bound into a variable passed to the database, so the database processes a very different query than was expected. SQLi is not typically associated with DoS – it is more often employed as the first step in a database takeover, because most attackers want to control the database without being detected – but it works and is used both ways. Back to judo: every feature you have can be used against you.
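To see how trivially a protective feature becomes a weapon, here is a minimal sketch of the failed-login judo described above. The lockout threshold and try_login() are hypothetical placeholders standing in for a real database or application login call:

```python
# Minimal sketch of "judo" abuse of a failed-login lockout policy: the
# attacker deliberately burns the lockout threshold for every known user,
# denying service to the legitimate users. Names and the threshold are
# illustrative assumptions.
LOCKOUT_THRESHOLD = 5  # assume accounts lock after 5 failed attempts

def try_login(user, password):
    """Placeholder for a real login attempt; always fails here."""
    return False

def lock_everyone_out(known_users):
    for user in known_users:
        for _ in range(LOCKOUT_THRESHOLD):
            try_login(user, "wrong-password")  # each failure counts
        # Account is now locked; the legitimate user is denied service.

lock_everyone_out(["app_admin", "reporting", "etl_batch"])
```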
Complex Queries

Complex queries are an attack class that works by giving the database too much work to do. Attackers find the most resource-intensive process accessible, and kick off a few dozen requests:

• Computed columns and views: Computed columns are virtual columns, typically created from the results of a query and usually stored in memory. A view is a virtual table, whose contents are also derived from a query. If the query selects a large amount of data, the results occupy a large space in memory; and if the column or view is based on a complex query, it requires significant processing power to create. Exposed computed columns and views have been the source of database DoS attacks in the past, with attackers continually refreshing views to slow down the database.

• Nested queries & recursion: Recursion is when a program calls itself; each call recreates its declared parameters or variables in memory. A common attack is to place a recursive call within a cursor FOR loop; after a few thousand iterations the database runs out of cursors or memory and comes to a halt.

• The IN operator: This operator tests whether a supplied variable matches any value within a set. The operation itself is very slow, even when the number of values to compare is small. An attacker can inject the IN operator into a query to compare a large set of values against a variable that never matches. This is also called the snowflake search, because it is like attempting to match two unique snowflakes – but the database keeps searching regardless.

• Cartesian products and joins: The JOIN operation combines rows from two or more tables. A cartesian product is every possible combination of rows from the tables specified in the FROM clause. Queries which calculate cartesian products on a few large tables generate huge result sets – possibly as large as the entire database – and any operation on a cartesian product can overwhelm a database.

• User-defined functions: As with computed columns and views, any user-defined function gives an attacker carte blanche to abuse the database with whatever query they choose.

Attackers attempt to exploit whichever of these complex queries they can access. All these abuses are predicated on the attacker being able to inject SQL into some part of the database, or to abuse legitimate instances of complex operations. A couple of complex queries running in parallel are enough to consume the majority of a database platform’s resources, badly slowing or completely stopping the database.

Bugs and Defects

Attackers exploit defects by targeting a specific weakness or bug in the database. To succeed the attacker needs to know or guess the type of database in use, and must know or learn of a weakness in code or design that can


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.