API Gateways: Key Management

For developers, one of the most visible API gateway operations is key management. But dear reader, this is not your father's key management – the kind laden with X.509, PKI, and baroque foofaraw that security teams had to beg developers to implement. This is 2013 and the keys are OAuth access keys! And developers are asking us for the keys too, so what should we do?

Before we answer that question, for those of you who are not programmers, let's describe these "access keys" in a little detail. OAuth is a method for authorizing clients (end users and client applications) to use the third-party APIs served by the API gateway. It is essentially how developers give access to consumers without consumers needing to share information such as username and password. OAuth relies on a trusted identity service to vouch for the client and pass an authorization token to the API, which in turn gives the client access. OAuth enables four parties (a user or consumer, a client application created by a third-party developer, the owner of the APIs, and an identity service provider such as Google or Facebook) to cooperate to deliver services.

As we have discussed, developers are not much keener on the theoretical underpinnings of identity protocols than the consumers who use their applications. They just want to get their users access to the application so they can move on to more 'meaningful' development tasks – like building the client application itself. This shifts the responsibility for identity and authorization onto security teams, which is a new position for them: managing the process instead of cleaning up afterward. Rather than engaging toward the end of a project to conduct a vulnerability assessment, security teams may select the identity protocols to be used, establish identity requirements, and guide developers through the process of building them into their applications. This is an unusual collaboration between developers and security – in both degree and kind. The role of the security team as leader for a portion of the development process sets them up as a true design and development partner.

Key setup & distribution

Setting up keys can be handled in several different ways, but the process is typically initiated through self-service features of the gateway (we told you it's not your father's PKI). The developer registers their application and client(s). The steps of the OAuth protocol dance vary by implementation, but the core generally includes:

  • Developer account: A master account for the developer, which could span multiple clients and services
  • Client ID: The key that identifies the consumer and grants access
  • Client secret: How the consumer authenticates
  • Client types: Gateways use these to distinguish between different clients, such as iOS and Android
  • Resource: The URLs, redirects, and other services the client is requesting access to

Once this bootstrap process is complete – whatever variation your API gateway uses – the client application developer should have everything they need. Once the client has their access token they can call the APIs and access data with it: each subsequent call to an API protected by the gateway carries the OAuth access token, so the API can make access control decisions. This brings up an important part of OAuth's value proposition: the process of acquiring a token and the process of using a token are kept separate.
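To make that concrete, here is a minimal sketch of a client call once a token is in hand. The endpoint, header convention, and token value are hypothetical placeholders – your gateway's specifics will differ:

    # Minimal sketch: calling a gateway-protected API with an OAuth access token.
    # The endpoint URL and token value are hypothetical placeholders.
    import requests

    ACCESS_TOKEN = "0b79bab50daca910b000d4f1a2b675d604257e42"  # issued by the gateway

    response = requests.get(
        "https://api.example.com/v1/myservice",               # gateway-protected API
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},  # token rides on every call
    )
    response.raise_for_status()
    print(response.json())

Note that nothing in this call depends on how the token was issued – acquisition and use are fully decoupled.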
One implication is that the enterprise security architect must ensure that although these two independent processes – token issuance and token usage – are separate, their policy and governance models are consistent. Users should only be allowed access to the APIs they are authorized for, and should not see other APIs or other users' data. The access rights requested at token issuance must match runtime behavior.

Key verification services

Developers may not be that interested in identity protocols, but they are all interested in whether their code works. Distributed applications are notoriously difficult to debug, so anything fundamental to operations must be tested. Once access keys are issued and ready for use, the API gateway should offer testing tools to ensure there are no surprises at runtime. The API provider should actively help validate client code – it protects their API! There are a number of considerations:

  • Ensure a production-like system is available for testing. Any networked application must deal with a myriad of issues such as ports, routing, and redirects. A token cannot simply be appended to access and refresh requests – each variant of API usage requires its own test cases.
  • Make simple tools available – many APIs include simple cURL scripts to test applications. For example:

        curl https://example.com/API/myservice -H 'Authorization: <your OAuth access token>'

  • The gateway should include several scripts to validate client usage of the API.
  • Provide documentation and guidance for additional testing and debug functionality as needed for the client environment.

Key lifecycle management

OAuth isn't magic security dust, and using it doesn't make an application secure. API developers and consumers need to be clear on safe handling of OAuth tokens across their entire lifecycle. Some rules are straightforward, such as "always use TLS/SSL". But most are context dependent, such as secure storage for tokens and safe handling of redirects. Two operations that generally require special attention in security policy are refresh and revocation. OAuth access tokens provide shorter-lived access but can create long-lived sessions through refresh tokens. The refresh token is effectively a protection against an access token being replayed, so each consumer may hold two different types of token. Security policy makers should align these policies and make use of the separation between shorter-lived access tokens and longer-lived refresh tokens – policy is not as simple as "one and done". In addition to refreshing sessions, access revocation requires consideration. Token revocation may seem minor, but anyone who has lost their mobile device can say with authority that it is nice to be able to log into twitter.com and turn off access to your lost phone so its clients no longer have access to your account.
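To illustrate the refresh/revocation split, here is a minimal sketch of a client trading its long-lived refresh token for a new short-lived access token, following the OAuth 2.0 refresh grant. The token endpoint and credentials are hypothetical:

    # Minimal sketch: exchanging a refresh token for a new access token
    # (OAuth 2.0 refresh_token grant). URL and credentials are hypothetical.
    import requests

    def refresh_access_token(refresh_token, client_id, client_secret):
        resp = requests.post(
            "https://api.example.com/oauth/token",    # gateway token endpoint
            data={
                "grant_type": "refresh_token",
                "refresh_token": refresh_token,
            },
            auth=(client_id, client_secret),          # client authenticates itself
        )
        resp.raise_for_status()
        tokens = resp.json()
        # Revoking the grant server-side kills the refresh token, which is
        # how a lost device's client gets locked out at the next refresh.
        return tokens["access_token"], tokens.get("refresh_token", refresh_token)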


Friday Summary: June 28, 2013—“Summer’s here” edition

Normally by this time of year things slow down, people go on vacation, and we get to relax a bit – but not this year. At least not for me. It has been seven days a week here for a while, playing catch-up with all the freakin' research projects going on. And I have wanted to comment on a ton of news items, but have not had the time. So this week's summary consists of comments on a few headlines I have not otherwise had the chance to comment on. Here we go:

  • All I can think about when I read these stories on NSA spying and Snowden news items: It is criminal for you, the public, to know our secrets. But it's totally okay for us to spy on you. Nothing to worry about. Move along now.
  • Love Square. Great product. Disruptive payment medium. But it has been reported they want to create a marketplace to compete with eBay, Amazon, and – my interpretation, not something they have stated – craigslist. So let me ask you: Are they friggin' nuts?
  • Speaking of crazy, why would anyone claim HP is too late to enter the big data race? Has their tardiness in rolling out big data (or big-data-like) technologies hurt them in the SIEM space? No question. But general big data services is a very new market, and the race for leadership in packaged services has not even begun yet.
  • Was I the only one shocked to learn RSA's call for papers started this week? WTF? Didn't I just get back from that conference? We are still a month away from Black Hat. It is currently 109F here in Phoenix, and all I want to do is find a cold beer and keep out of the heat. This just does not feel like the time to be thinking about presentation outlines. But if you want to present next February, consider this a friendly reminder.
  • For those three of you who have been emailing me about passwords and password managers because of my comments during the Key Management webcast last week: it's okay. We will continue to use passwords here and there. I like password managers, corporate and personal – I use them every day. But passwords will be replaced by tokens and identity certificates for Internet services because a) identity tokens allow us to do much more with identity and authorization than we can with passwords, and b) tokens remove the need to store password hashes on the server. Which is another way of saying passwords can't do what certificates do.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's white paper on 10 Common Database Vulnerabilities.
  • Mike's DR post: The Slippery Slope Of Security Invisibility.
  • Rich's DR post: Security Needs More Designers, Not Architects.
  • Adrian's Dark Reading post: Database Configuration Standards.
  • Adrian's Key Management webcast.
  • Rich's Macworld article on Apple's Security Strategy.
  • It's older, but I just saw Mike's Security Myth-busting video and it's funny.

Favorite Securosis Posts

  • Rich: Adrian on SQLi. He gets a little pedantic, but that's what we love about him.
  • Mike Rothman: Security Analytics with Big Data: Deployment Issues. Adrian did a fantastic job with this series. Read all the posts and learn about the future of SIEM.
  • Adrian Lane: Top 10 Stupid Sales/Press/Analyst Presentation Tricks. We see stupid human tricks every week, and I don't think most companies understand how they or their slide decks are perceived.

Other Securosis Posts

  • Database Denial of Service [New Series].
  • API Gateways: Developer Tools.
  • iOS 7 Adds Major Data Security Improvements.
  • Incite 6/26/2013: Camp Rules.
  • The Black Hole of DLP.
  • Automation Awesomeness and Your Friday Summary (June 21, 2013).
  • Full Disk Encryption (FDE) Advice from a Reader.
  • Scamables.
  • Talking Head Alert: Adrian on Key Management.
  • How China Is Different.
  • Microsoft Offers Six Figure Bounty for Bugs.
  • Project Communications.
  • Network-based Malware Detection 2.0: Deployment Considerations.

Favorite Outside Posts

  • Adrian Lane: Data Leakage In A Google World. People forget that Google is a powerful tool, which often finds data companies did not want exposed. It's a tool to hack with – and yes, a tool to phish with.
  • Chris Pepper: Solaris patching is broken because Oracle is dumb and irresponsible. Feh.
  • Mike Rothman: Wences Casares: Teach Your Children to be Doers. Great post here by a start-up CEO about how to teach your kids to get things done. If only all those "entitlement kids" got a similar message from their parents.
  • Dave Lewis: Opera Software Hit by 'Infrastructure Attack'; Malware Signed with Stolen Cert.
  • Rich: TheStreet on Brian Krebs. I think it's awesome that Brian is doing so well – he writes circles around everyone else on the cybercrime beat. Needless to say, we are fans of the low-overhead direct model. Seems to be working for us, at least.

Research Reports and Presentations

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer's Guide.

Top News and Posts

  • Oracle releases critical security update for Java; Apple follows suit.
  • The DEA Seized Bitcoins In A Silk Road Drug Raid.
  • Turkey seeks to tighten control over Twitter.
  • Why Snowden Asked Visitors in Hong Kong to Refrigerate Their Phones.
  • Snowden distributed encrypted copies of NSA docs around the world.
  • Pentagon's failed flash drive ban policy: A lesson for every CIO.
  • U.S. Surveillance Is Not Aimed at Terrorists.
  • Attackers sign malware using crypto certificate stolen from Opera Software.
  • Software Flaw Threatens LG Android Smartphones.
  • South Korean cyberattacks.
  • Researcher nets $20K for finding serious Facebook flaw.
  • Vast majority of malware attacks spawned from legit sites.
  • More from Google's Safe Browsing disclosures.
  • Google Adds Malware and Phishing Data to Transparency Report.
  • HP Confirms Backdoor In StoreOnce Backup Product Line.

Blog Comment of the Week

This week's best comment goes to Guillaume, in response to iOS 7 Adds Major Data Security Improvements. The share sheet thing is pretty


Database Denial of Service [New Series]

We have begun to see a shift in Denial of Service (DoS) tactics by attackers, moving up the stack from networks to servers, and from servers to the application layer. Over the last 18 months we have also witnessed a new wave of vulnerabilities and isolated attacks against databases, all related to denial of service. We have seen recent issues with Oracle: invalid object pointers, a serious vulnerability in the workload manager, and the TNS listener barfing on malformed packets. Add a PostgreSQL issue with unrestricted networking access that was rumored to allow file corruption to crash the database, the IBM DB2 XML feature, and multiple vulnerabilities in MySQL, including the remote ability to crash the database. A vulnerability does not mean exploitation has occurred, but we hear more off-the-record accounts of database attacks. We cannot quantify the risk or likelihood of attack, but this seems like a good time to describe these attacks briefly and offer some mitigation suggestions.

It may come as a surprise, but database denial of service attacks have been common over the last decade. We don't hear much about them because they are lost among the din of SQL injection (SQLi) attacks, which cause more damage and offer attackers a wider range of options. All things being equal, attackers generally prefer SQLi as more directly useful for their objectives; database DoS doesn't make headlines compared to injection attacks, which often take control of the database and can be more damaging. But interruption of service is no longer a trivial matter. Ten years ago it was still common practice to take a database or application off the Internet while an attack was underway. Now web services, and the databases tied into them, are critical business infrastructure. Take down a database and a company loses money – quite possibly a lot of money.

As Mike noted in his recent research on Denial of Service attacks, the most common DoS approaches are "flooding the pipes" rather than "exhausting the servers". Flooding the pipes is accomplished by sending so many network packets that they simply overwhelm the network equipment. This type of volumetric attack is the classic denial of service, most commonly performed as a Distributed Denial of Service (DDoS) because it takes hundreds or thousands of malicious clients to flood a large network. Legitimate network traffic is washed away in the tide of junk, and users cannot reach servers. Exhausting servers is different: these attacks target software running on the server – the operating system or web application components – to consume all its CPU, memory, or other resources and effectively disable it. They can target either vulnerabilities or features of application stacks to overwhelm servers and prevent legitimate traffic from accessing web pages or completing transactions. The insidious part is that once you consume more than roughly 80% of hardware or software resources, these platforms become less efficient. The closer they get to maximum utilization, the more they slow down. Push them to the limit and they may simply lock up, waiting for resources to become available. In some cases a reduction in load does not bring servers back – you need to reset or restart them. Databases have their own networking features and offer a full complement of services, so both these models apply.
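Mitigations come later in this series, but to ground the "exhausting the servers" model, here is a hedged monitoring sketch: watching a PostgreSQL instance for connection exhaustion, one of the simplest database resources an attacker can burn through. The connection parameters are placeholders:

    # Illustrative sketch: watch PostgreSQL connection usage, since connection
    # exhaustion is one of the simplest database DoS symptoms to spot.
    # Connection parameters are hypothetical placeholders.
    import psycopg2

    conn = psycopg2.connect(host="db.example.com", dbname="postgres",
                            user="monitor", password="secret")
    cur = conn.cursor()

    cur.execute("SHOW max_connections;")
    max_conns = int(cur.fetchone()[0])

    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    in_use = cur.fetchone()[0]

    # Past roughly 80% utilization the platform gets less efficient (see above).
    if in_use > 0.8 * max_conns:
        print("WARNING: %d of %d connections in use" % (in_use, max_conns))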
The motivation for these attacks is very similar to traditional DoS. Hacktivism is a major trend, and taking down a major commercial web site is a weapon for people who dislike a company but lack legal or financial means to voice their complaints. "Covering attacks" are very common, where criminals flood servers and networks – including security systems – to mask an ongoing attack. Other common scenarios include shutting down a competitor, criminal racketeers threatening DoS and demanding ransom, and financial trading manipulation – the list goes on. The motivations behind database DoS are essentially the same.

The current tactics are a response to a couple of new factors. Network and server defenses are getting better with the next generation of firewall technologies, and it has become nearly impossible to DoS cloud service providers, with their seemingly limitless, redundant, and geographically dispersed resources. Attackers are looking for new ways to keep old crimes profitable. And attackers are not discriminatory – they are happy to exploit any piece of hardware or software that lets them accomplish their attacks, including the web applications and databases sitting atop servers.

Database denial of service is conceptually no different than traditional DoS attacks at the server or application layer, but there are many more clever ways to create a denial of service attack against a database. Unlike DDoS you don't need to throw everything including the kitchen sink at a site – often you just need to find a small logic flaw in a database function to push it over. Relational database platforms are some of the most complex application platforms in existence, so there is a lot of room for mischief. Attackers sometimes morph traditional protocol and server based denial of service attacks to move up the stack, but in most cases they exploit specific database features in novel ways to take down their targets. Current defensive systems are geared to block DoS-based network flooding and server attacks, so attackers are seeking greener fields in the application layer, where their incursions blend better with legitimate customer transactions. With protection resources poured into the lower layers, relatively little is done at the application layer, and virtually nothing to stop database attacks. Worse, application layer attacks are much more difficult to detect because most look like legitimate database requests!

Our next post will look at the different classes of database DoS attacks, with some historic examples and current ones, to help you understand the difficulty of defending databases from DoS.


API Gateways: Developer Tools

Our previous post discussed the first step in the development process: getting access to the API gateway through access provisioning. Now that you have access, it's time to discuss how the gateway supports your code development and deployment processes. An API gateway must accomplish two primary functions: help developers build, test, and deploy applications; and help companies control use of their API. They are part development environment and part operational security tool.

API Catalog

The API catalog is basically a menu of APIs, services, and support services that provides developers front-end integration to access back-office applications, external APIs (for mashups), data, and related services, along with all the supporting tools to build and deploy applications. Catalogs typically include APIs, documentation, coding help, build tools, configuration requirements, testing tools, guidance, and sample code for each supported function. They offer other relevant details such as network controls, access controls, integration options, and orchestration, brokering, and messaging options – all bundled into a management interface for selecting and configuring the services you want. Developer time is expensive, so anything that streamlines this process is a win.

Security controls such as identity protocols are notoriously difficult to fully grasp and implement. If your security architects want developers to "do it right", this is the place to invest time showing them how. Traditionally security tools are bolted onto – or in front of – applications, generating howls of displeasure from developers who want neither the added complexity nor the performance impact. With third-party APIs things are different: security is part of the core value. API gateways offer features that enable network, interface, and data security as part of the core feature set. For example, it is faster and easier to enable built-in SAML or OAuth identity services than to build them from scratch – or worse, to build a password management system. Even better, the features are available at design time, before you assemble the application, so they can be bundled into the development process.

Reference implementations are extremely helpful. Consider OAuth: if you look at 10 different companies' OAuth implementations, you will probably find a dozen different implementations. Don't assume developers will just figure it all out – connect the dots. To have a chance at a secure deployment, developers need concrete guidance for security services – especially for things as abstract as identity protocols. Reference implementations show end-to-end examples of the identity protocol in practice. For a developer trying to "do it right" this is like finding diamonds in the backyard. The reference implementation is even more effective if it is backed by testing tools that can verify developer implementations.

Access management is a principal feature of API gateways. The gateway helps you enforce access controls, building authentication and authorization services into the API set. Gateways typically rely on token-based security services, and support one or more token types such as SAML and OAuth. All API gateways offer authentication support, and most integrate with other identity sources to support federation. Gateways provide basic role-based authorization support, sometimes with fine-grained authorization to constrain data access by user identity or endpoint device.
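As a taste of what such a reference implementation might hand developers, here is a minimal, hedged sketch of acquiring an access token with the OAuth 2.0 client credentials grant. The token endpoint and credentials are hypothetical, and gateways vary in which grants they support:

    # Minimal sketch of an OAuth 2.0 client_credentials grant – the kind of
    # end-to-end example a gateway's reference implementation should include.
    # Endpoint and credentials are hypothetical placeholders.
    import requests

    resp = requests.post(
        "https://api.example.com/oauth/token",
        data={"grant_type": "client_credentials"},
        auth=("my-client-id", "my-client-secret"),  # issued at registration
    )
    resp.raise_for_status()
    access_token = resp.json()["access_token"]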
Beyond identity protocols, some gateways offer services to defend against attacks such as replay attacks and other forms of session hijacking. API gateways provide dynamic filtering of requests, allowing policy-based routing of, and response to, API calls. Developers get tools to parse incoming calls, filter or transform messages, and route them to the appropriate services. This facilitates modifying application behavior, debugging application functions, and applying different security or compliance controls in response to user requests. Filters also provide a mechanism for sending requests to different locations, modifying workflow, or even sending requests to different applications. This flexibility is a powerful security capability, particularly for analysis of – and protection against – suspect clients: access to services and data can be adjusted dynamically.

API gateway providers offer a range of pre-deployment tools to validate applications prior to deployment. Sandbox testing and runtime simulators both validate correct API usage, and can also verify that the application developer properly handles input variables and simulated attacks. Some test harnesses are provided with gateways; others are custom implementations by API service owners. Pre-deployment validation is a good way to ensure all third-party developers meet a minimum security standard, so no single user becomes the proverbial weak link. If possible, tests should be executed as part of the normal integration process (e.g., Jenkins) so implementation quality is continually tested – see the sketch below.

Deployment Support

The API catalog provides options for building security into your application, but API gateways also offer deployment support. When you push APIs that connect the world to internal systems, you need to account for a myriad of threats at the network, protocol, application, and data layers. Denial of service, parser attacks, code injection, replay attacks, HTTP protocol abuse, and network sniffing are all things to consider. API gateways can optionally provide privacy and security for network sessions through SSL. Most also offer network firewall capabilities such as IP whitelisting, blacklisting, and signature-based detection. While network security is a must-have for many, it's not really gateways' core security value. The key security features are overall security of the API and message-level filtering. API gateways provide capabilities to detect code injection, cross-site scripting, and various encoding attacks; most also offer off-the-shelf filters for input validation and sanitization.

Logging, Monitoring, and Reporting

As application platforms, API gateways capture activity and generate audit logs. Sitting between developer applications and the API, they are perfectly positioned to capture API usage – useful for throttling, billing, and metering API access, as well as security. Log files are essential for security, operations, and compliance, so all these teams rely on gateway audit trails. Most API gateways provide flexible configuration of which audit events are collected, the record format, and the record destination. Audit capabilities are mostly designed for the gateway owner rather than developers, but the audit trail captures the sessions of all clients.
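Returning to pre-deployment validation, here is a hedged sketch of the kind of sandbox tests that could run in a CI job (pytest-style, runnable under Jenkins or similar). The sandbox URL, token, and expected status codes are hypothetical:

    # Illustrative sandbox tests for a CI job: verify the client uses the API
    # correctly and that the gateway filters bad input. URLs and the token
    # are hypothetical placeholders; run with pytest.
    import requests

    SANDBOX = "https://sandbox.example.com/v1/myservice"
    TOKEN = "test-access-token"  # issued for the testing instance

    def test_valid_token_accepted():
        r = requests.get(SANDBOX, headers={"Authorization": "Bearer " + TOKEN})
        assert r.status_code == 200

    def test_missing_token_rejected():
        r = requests.get(SANDBOX)
        assert r.status_code == 401

    def test_injection_payload_filtered():
        r = requests.get(SANDBOX, params={"q": "' OR 1=1 --"},
                         headers={"Authorization": "Bearer " + TOKEN})
        assert r.status_code in (400, 403)  # gateway input filtering at work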


Casting out SQLi

Ericka Chickowski posted an interview with the creators of the open source library AntiSQLi at Dark Reading. She is discussing a very interesting development tool, but the value proposition gets somewhat lost in the creators' poor terminology. First some background: there is no such thing as an 'unparameterized' database query. Every SQL query has at least two parameters: the contents of the SELECT and WHERE clauses. Without parameters in those two clauses the query fails in the parser and generates an error. No parameters, no query. So SQLi is not really a problem of 'unparameterized' queries – it is a problem of unvalidated input values to parameters. SQLi is where we shove bad data into parameters – not a lack of parameters! The AntiSQLi library is simple and clever: it works like an app-side stored procedure, and like a stored procedure it forces data types on its input values. It essentially handles the casting operation to force type and length. AntiSQLi weeds out variables that don't match the prescribed data type, and in some cases over-long variables. Obviously it cannot catch everything, but it does filter out many common and crude SQLi attacks. A better term would have been "un-cast query parameters". Regardless of the terminology, though, I am happy to see innovation in this area. For years I have been recommending that developers build this functionality into their own reusable security libraries, but AntiSQLi is a quick and easy way to get started, and a nice tool to have in your toolbox.
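AntiSQLi itself targets .NET, but the idea is portable. Here is a minimal Python sketch of the same approach – force type and length on input values before they reach the query, then pass them as bound parameters. The table, limits, and helper name are invented for illustration:

    # Illustrative sketch of the AntiSQLi idea: cast input values to the
    # prescribed type, enforce length, then bind them as query parameters.
    # Table, limits, and helper are invented for the example.
    import sqlite3

    def cast_param(value, to_type, max_len=None):
        value = to_type(value)  # raises ValueError if input doesn't fit the type
        if max_len is not None and len(str(value)) > max_len:
            raise ValueError("parameter too long")
        return value

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

    try:
        user_id = cast_param("42' OR '1'='1", int)  # crude SQLi attempt
        conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
    except ValueError:
        print("rejected: input is not a valid integer")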


Scamables

A post at PCI Guru got my attention this week, talking about a type of rebate service called Linkables. They essentially provide coupon discounts without physical coupons: you get money off your purchases for promotional items after you pay, rather than at the register. All you have to do is hand over your credit card. Really:

Linkables are savings offers that can be connected to your credit or debit card to deliver savings to you automatically after you shop. It's a simple and convenient way to take advantage of advertisers' online and offline promotions, with no coupons to clip and no paperwork after you shop. Offers can be used online and offline just by using your credit or debit card.

This idea is not really novel – affinity groups have been providing coupons, cash, and price incentives for... well, forever. And Linkables is likely selling your transactional data, with the added bonus of not having to pay the major card brands or banks for the information. Good revenue if you can get it. But there is a big difference for consumer security between someone like Visa embedding this type of third-party promotional application on a smart card – where Visa maintains control of your financial information – and handing your credit card to a third party. I know we are supposed to be impressed that they have a "Level 1 PCI certification" – the kind of certification that is "good until reached for" – but the reality is that we have no idea how secure the data is. Sure, we hand over credit cards to online merchants all the time, but the law provides some consumer protection. Will that be true if a third party like Linkables suffers a breach? There won't be any protection if they lose your debit card number and your account is plundered. I would much rather hand over my password to a stranger for a candy bar than my credit card for 10 cents off dishwasher detergent, paid some time in the future. I can reset my password, but I cannot reset stupid.


Talking Head Alert: Adrian on Key Management

Tomorrow, June 20th, bright and early at 8:00am Pacific, I will be talking about key management with the folks at Prime Factors. Actually, Prime Factors was kind enough to sponsor the educational webcast, but I am flying solo on this one – no vendor presentation is on the agenda. I will look at key management a little differently than what we have presented in the past – more operationally than technically. Even if you know all about key management, dial in and let your boss think you're getting continuing education while you space out. So grab a cup of coffee, listen in, and bring any questions you may have. You can register here.


Security Analytics with Big Data: Deployment Issues

This is the last post in our Security Analytics with Big Data series. We will end with a discussion of deployment issues and concerns for any big data deployment, with a focus on issues specific to leveraging SIEM. Please remember to post comments or ask questions and I will answer in the comments.

Install any big data cluster, or any SIEM solution that leverages big data, and you will notice that the documentation focuses on how to get up and running quickly and all the wonderful things you can do with the platform. The issues you really want to consider are left unsaid. You have to go digging for problems – but better to find them now than after you deploy. There are several important items, but the single biggest challenge today is finding talent to help program and manage big data.

Talent, or Lack Thereof

One of the principal benefits of big data clusters is the ability to apply different programmatic interfaces, or to select different query and data management paradigms. This is how we are able to do complex analytics, and how we get better analyses from the cluster. The problem is that you cannot use it if you cannot code it. The people who manage your SIEM are probably not developers. If you have a Security Operations Center (SOC), odds are many of its staff have some scripting and programming experience, but probably not with big data. Today's programmatic interfaces mean you need programmers – and possibly data architects – who understand how to mine the data.

There is another aspect. When we talk to big data project architects, like SOC personnel trying to identify attacks in event data, they don't always know what they are looking for. They find valuable information hidden in the data, but this isn't simply the magic of querying a big data cluster – the value comes from talented personnel, including statisticians, writing queries and analyzing the results. After a few dozen – or few hundred – rounds of query and review, they start finding interesting things. People don't use SIEM this way. They want to quickly set a policy and have it enforced. They want alerts on malicious activity with minimal work.

Those of you not using SIEM, who are building a security analytics cluster from scratch, should not even start the project without an architect to help with system design. Working from your project goals, the architect will help you with platform selection and basic system design. Building the system will take some doing as well: you need someone to help manage the cluster and programmers to build the application logic and data queries. And you will need someone versed in attacker behaviors to know what to look for and help the programmers stitch things together. There are only a finite number of qualified people out there today who can perform these roles. As we like to say in development, the quality of the code is directly linked to the quality of the developer. Bad developer, crappy code. Fortunately many big data scientists, architects, and programmers are well educated, but most of them are new to both big data and security. That brilliant intern out of Berkeley is going to make mistakes, so expect some bumps along the way. This is one area where you should consider leveraging the experience of your SIEM vendor and third parties to see your project through.

Policy Development

Big data policy development is hard in the short term because, as mentioned above, you cannot code your own policies without a programmer – and possibly a data architect and a statistician.
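To make that concrete, here is a small, hedged sketch of the kind of hand-written analysis a programmer ends up doing today: a toy aggregation over login events that flags sources with outlier failure counts. The event fields, log shape, and three-sigma cutoff are invented for illustration:

    # Toy sketch of a hand-written security analytics query: count failed
    # logins per source host and flag statistical outliers. Field names and
    # the 3-sigma cutoff are invented for illustration.
    from collections import Counter

    events = [
        {"src": "10.0.0.5", "action": "login", "result": "fail"},
        {"src": "10.0.0.5", "action": "login", "result": "fail"},
        {"src": "10.0.0.9", "action": "login", "result": "ok"},
        # ... in practice, billions of events pulled from the cluster
    ]

    fails = Counter(e["src"] for e in events
                    if e["action"] == "login" and e["result"] == "fail")

    mean = sum(fails.values()) / float(len(fails))
    variance = sum((n - mean) ** 2 for n in fails.values()) / float(len(fails))
    cutoff = mean + 3 * variance ** 0.5

    print([src for src, n in fails.items() if n > cutoff])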
SIEM vendors will eventually strap abstraction interfaces on top to simplify big data query development, but we are not there yet. Because of this, you will be more dependent on your SIEM vendor and third-party service providers than before. And your SIEM vendor has yet to build out all the capabilities you want from their big data infrastructure. They will get there, but we are still early in the big data lifecycle. In many cases the 'advancements' in SIEM will be to deliver previously advertised capabilities which now actually work as advertised. In other cases they will offer considerably deeper analysis because the queries run against more data. Most vendors have been working in this problem space for a decade and understand the classic technical limitations, and they finally have tools to address those issues – so they are addressing their thorniest issues first. They can buttress existing near-real-time queries with better behavioral profiles, and provide slightly better risk analysis by looking at more data of more types.

One more facet of this difficulty merits public discussion. During a radical shift in data management systems, it is foolish to assume that a new (big data or other) platform will use the same queries, or produce exactly the same results. Vet new and revised queries on the new platforms to verify they yield correct information. As we transition to new data management frameworks and query interfaces, the way we access and locate data changes. That is important because, even if we stick to a SQL-like query language and run equivalent queries, we may not get exactly the same results. Whether better, worse, or the same, you need to assess the quality of the new results.

Data Sharing and Privacy

We have talked about the different integration models. Some customers we spoke with want to leverage existing (non-security) information in their security analytics. Some are looking at creating partial copies of data stored in more traditional data mining systems, on the assumption that low-cost commodity storage makes the duplication cost trivial. Others are looking to derive data from their existing clusters and import that information into Hadoop or their SIEM system. There is no 'right' way to approach this – decide based on what you want to accomplish, whether existing infrastructure provides benefits big data cannot, and any network bandwidth issues with moving information between these systems. If you


API Gateways: Access Provisioning

What do we want? API access! When do we want it? Now! It's time to change your entire mindset. We're talking about API security, but not for traditional APIs. API gateways are a response to the "open API" movement, and create a very different development environment. As we mentioned in our introduction, API gateways are an enabling technology – but likely not in the way you think.

Companies want to expose their services to a wide audience, but rather than design and build consumer offerings in-house, they often provide API access to their services to contractors – and in many cases to the general programming community. For companies like Twitter, Facebook, and YouTube (Google), the trend is to allow third-party developers to extend and integrate these platforms to provide novel user experiences. It's a win/win: the company gets to leverage innovations from the third-party community, users get better apps, and developers get paid (the average force.com developer makes $392k/year for their work). It can be an almost free way to leverage the best ideas in the development world – you just have to accept the risk of random people groping around your services and data.

Leveraging the community for innovation and pro bono development raises new security problems: how can you control your API while actively making it available outside your company? Not that many years ago companies wrestled with serving up data to consumers outside the firewall – letting outsiders write code to run on top of proprietary systems is downright scary. To provision developer access, and to control what they can use and how, you need some form of API management framework. API gateways are that framework. From the developer's perspective they function like a traditional development environment: they bundle a number of features under one umbrella, providing the basic tools developers need and making API integration as simple as possible. For the API provider, rich and accessible services attract developers. The flip side is managing developers who don't work for your company, giving up control over endpoints and user experience, and controlling access and features through tokens and keys. 80% of your API gateway effort will focus on what developers need to leverage your service, but the most difficult 20% will be managing the developer experience while exposing services – which demands attention to ease of use and hiding complexity from developers.

API gateways are for extending features to developers, so most of our examples are from the development perspective. Our outline follows the path developers take to use your APIs. We will start from ground zero, as developers register themselves to use your service, considering how you will provision developer access. Per the outline in our introduction, we will then move into development tools, key management, and other critical areas of API security. On our journey we will straddle two realms: buyers and builders. For builders, we will show examples of features you need to build into your platform. For those of you looking to acquire an API gateway, consider this a mirror image of your critical platform criteria – and of where you will need services to get your deployment over the finish line.

Provisioning

As simple as it may seem, provisioning for API gateways is a balancing act. On one hand, companies want simple, streamlined access for developers to build functionality.
On the other hand, they want to ensure all of this complies with security policies. How can you ensure security while providing developers with full access? What process will ensure the right mix of policy checkpoints without hampering developers? Therein lies the rub. Let's look at a developer's first step: getting access to the development environment.

Developer access provisioning

Perhaps you have heard that developers can be a tad mercurial? Development is about building and enabling, so security controls which restrict usage or limit functions are seen as an impediment and a source of friction. Keeping developers on board with security policy is a challenge, especially when any number of them don't even work for your company. Development tools are typically selected for ease of use, so streamlining access to tools and simplifying access to API functionality is critical. API gateways proxy communications to applications – they act as traffic cops, directing application requests according to policy. That middle ground is a vital place for security to focus, for three reasons:

  • It is a boundary between internal and external, making it an ideal place for policy enforcement.
  • It is a logical place to monitor inbound and outbound access.
  • It is where developers get everything they need to create applications.

What do developers need to get started coding? They need to be vetted and granted credentials to the API. These credentials come as tokens, and possibly certificates. API gateways should provide what developers need to find and bind to your API and begin coding. First, developers generally need to register with the gateway to initiate the key issuance process and get credentials to your API. This may take a few minutes for a simple automated process, or much longer for requests which require manual review. Once accepted, developers receive credentials – often only to a development and testing instance, with production access to follow. How this process works, and how simple it is to implement, are important factors when selecting an API gateway: how well can your candidates be tuned to your organizational needs? When building an API gateway, be realistic about what developers will tolerate in terms of delay and complexity – grand processes with many steps tend to stop developers in their tracks. The API documentation is another major factor in simplifying developers' lives. The favorite words of many developers are "for example", typically followed by a code snippet and a usage explanation. The goal is to get developers up and running quickly, so look for code samples, reference implementations, and test clients when you evaluate API gateways. A wide variety of languages are in play, so over time you will likely build up your own miniature library of samples.
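Speaking of "for example": here is a hypothetical sketch of what the self-service registration step might look like from the developer's side. The endpoint, fields, and response shape are all invented – every gateway has its own registration flow:

    # Hypothetical sketch of self-service developer registration against an
    # API gateway. The endpoint, fields, and response shape are invented.
    import requests

    resp = requests.post(
        "https://api.example.com/developer/register",
        json={
            "app_name": "my-mobile-client",
            "client_type": "ios",                    # gateways distinguish client types
            "redirect_uri": "myapp://oauth/callback",
        },
    )
    resp.raise_for_status()
    creds = resp.json()

    # Typically scoped to a development/testing instance at first, with
    # production access granted after review.
    client_id = creds["client_id"]
    client_secret = creds["client_secret"]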


Friday Summary: June 14, 2013

Are you aware of a theft of big data? I will ask in a slightly different way: Do you know of any instance where a commercial big data cluster was exposed to an attacker who mined the cluster for fun or profit? Hackers are unlikely to copy a big data set – why bother moving terabytes when they can use your cluster to store and process your data in place? I am unaware of any occurrences, public or private. And no, LexisNexis and ChoicePoint, where the attackers had valid user credentials, don't count. Please comment if you know of an example.

I ask because I have been reading about how vendors are combatting billions of dollars of theft in the big data space, but I am unaware of any such big data cyberthefts. In fact I have not heard of one dollar being stolen. Unless you count the NSA's collection of vast amounts of personal data as thieving, but I hope we can agree that is different in several ways. So my question stands: Who was attacked? Where did the thefts occur? I don't want to deprecate security around big data clusters just because we have not yet seen an attack – we do need cluster security, and I am certain we will eventually see attacks. But hyperbole won't help anyone. Executive management teams have heard this FUD before. In the early days before CISOs, security cried "Vulnerabilities will eat your grandmother!" one too many times, and management turned their collective backs. This round of FUD will not help IT teams get budget or implement security in and around big data clusters.

Another question: Are you aware of any security analytics tools, policies, algorithms, or MapReduce queries that can detect a big data breach? I doubt it. Seriously doubt it. The application of big data and data mining to security is focused on fraud detection and bettering SIEM threat detection capabilities. As of this writing no SIEM tool protects big data. No one has written a MapReduce query to find "the bad guys" illegally using a big data cluster. Today that capability does not exist. We have only the most basic monitoring features to detect misuse of big data clusters, from the Database Activity Monitoring vendors – and they are so limited they are barely worth mentioning. Of course I expect all this to change. We will see attacks on big data, we will see more security tools focused on protecting it, and we will use analytics to detect misuse there as well as everywhere else. When that will change, I cannot say. After the first few big data breaches, perhaps?

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's DR post: Why Database Assessment.
  • Adrian's white paper on What Every DBA Should Know.

Favorite Securosis Posts

  • Adrian Lane: Getting to Know Your Adversary.
  • Mike Rothman: Security Analytics with Big Data: Integration. All Adrian needs is to mention either BYOD or APT in this blog series to hit the security marketing hyperbole trifecta! Kidding aside, he is doing a good job structuring the discussion of how to leverage big data to solve security problems.

Other Securosis Posts

  • We are all guilty of something.
  • Talking Head Alert: Mike on Phishing Webcast.
  • Incite 6/12/2013: The Wall of Worry.
  • The Securosis Nexus Beta 2 Begins!
  • Network-based Malware Detection 2.0: The Network's Place in the Malware Lifecycle.
  • Security Analytics with Big Data: Integration.
  • DDoS: It's FUD-eriffic!
  • Quick thoughts on the iOS and OS X security updates.
  • Groupthink Kills Your Security Layers.
  • A truism of security information sharing.
  • Getting to Know Your Adversary.
  • Friday Summary: June 7, 2013.

Favorite Outside Posts

  • Rich: Gartner Reveals Top 10 IT Security Myths. Not sure this is the top 10, but it is a good list. Item 3 lacks nuance, however.
  • Adrian Lane: Upcoming revelations speculations. Robert Graham has been on a roll lately. This 'revelations' post is a fun read, throwing scenarios out there and seeing what's plausible, furthering the Snowden Leaks story line. The Skype speculation is unsettling – it is both entirely plausible and simultaneously sounds totally insane to normal people: two common elements of many declassified cold war stories.
  • Mike Rothman: Sacke Notes: Cofficers – A New Breed in a New Economy. I will probably use this as an Incite topic, but it's a pretty good view into my working lifestyle. In fact I am in my coffee shop office now, putting this link in. How perfect is that?

Research Reports and Presentations

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer's Guide.

Top News and Posts

  • The Secret War. Profile of the man running US 'cyber-war' efforts.
  • Microsoft Disrupts Citadel botnet.
  • Facebook Unveils Presto. Speedy replacement for Hive.
  • Cyber Security and the Second Amendment.
  • Banker's Nap Costs Millions.
  • Lawsuit filed over NSA phone spying program.
  • Microsoft Security Bulletin Summary for June 2013.
  • Democratic Senator Defends Phone Spying, And Says It's Been Going On For 7 Years.
  • Expert Finds XSS Flaws on Intel, HP, Sony, Fujifilm and Other Websites.

Blog Comment of the Week

This week's best comment goes to -ds, in response to A truism of security information sharing:

Maybe information sharing will be easier now that we know the NSA have it all already.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.