Friday Summary: Cloud Identity Edition

One of my favorite industry events took place last week: the 2013 Cloud Identity Summit. Last year's was in Vail, Colorado, so I thought this year couldn't top that. Wrong. This year was at the Meritage in Napa – nice hotel, nice Italian restaurant, stunningly helpful staff, and perfect weather made for a great week. And while I was sorely tempted to tour the Napa Valley, I found the sessions too compelling to skip out. Here are a few of the highlights:

AZA vs. KNOX: As I mentioned earlier this week, 2012 centered on infrastructure and identity standards (OAuth, OpenID Connect, and SAML) to enable cloud services, while 2013 focused on mobile client authentication and Single Sign-On. SSO is still the challenge, but now primarily for mobile devices, and that is not yet fully sorted. This matters because mobile security is itself an identity problem. These technologies give you a glimpse of where we are going after BYOD, MDM, and MAM. Between my KNOX vs. AZA mobile throwdown and Gunnar's Counterpoint: KNOX vs. AZA throwdown, we covered the high points of the discussion.

WebDevification: An informal poll – okay, the dozen or so people I asked – felt Eve Maler's presentation was the best of the week. Her observations on the 'webdevification' trend that mashes up third-party APIs, cloud, and mobile hit the conference's central themes. API gateways and authentication tools like OAuth support that evolution, and they are turning traditional development paradigms on their ears. More importantly from a security standpoint, they show that we can build security in without requiring developers to be security experts.

Slow cloud IAM adoption curve: Like the cloud in general, adoption of IDaaS has been somewhat slow. Moving to IDaaS is conceptually daunting – I liken the change to moving from an Earth-centric to a Sun-centric view of the solar system: with IAM we are moving from an on-premises view of IT to a cloud-centric one. Ping's CEO Andre Durand did a nice job outlining the typical client maturity curve – SSO, to SaaS integration, to federation, to IDaaS – but the industry as a whole is still struggling at the halfway point. Why? Complexity and compliance. Complexity because federated identity has a lot of moving parts, and how we do fine-grained authorization and provisioning is still undecided. More worrisome is moving confidential data outside the enterprise without appropriate security and compliance controls. Those controls and reports exist, but enterprises don't trust them… yet. Andre made a great point, though: we had the same reservations about email, but once the SMTP interface was standardized email became a commodity. The result was firms like Hotmail, and today most firms rely on outsourced email services.

2FA on mobile: At CIS I tweeted, "Am I still the only one who thinks mobile browser based 2FA is kludgy?" SMS would be my first choice, but it is not available on all devices. HTTPS is available on every mobile platform, so on paper it is a great choice. But my problem is not the protocol – it's the browser. Don't design a new security system around one of the most problematic products in security: XSS and CSRF still apply, and building new systems on top of vulnerable ones just enables a whole new class of attacks. Better to find a secure way to pass a challenge to the mobile device – or use thumbprints, eyeball scans, voice, or facial recognition instead.
FIDO: Because standardizing authentication across different mobile platforms is so difficult, the FIDO (Fast IDentity Online) Alliance is developing an open user authentication standard. I hadn't paid close attention to this effort before the conference, but what they presented was a sensible approach to minimum requirements for authenticating a user on a mobile device. Befitting the conference theme, their idea is to minimize use of passwords, enable easier/better/faster authentication, and help the community link cloud services together. This is one of the few clean and simple identity standards I have seen, so I recommend taking a quick look.

CIS is still a young conference, and still very developer-centric, which I find refreshing. But the amazing aspect is that it's a family event: of 800 people, about 200 were wives and children of attendees. Each night a hundred-plus kids played right alongside the evening festivities. This is the only 'community' trade event I have been to that is actually building a real community. I highly recommend CIS if you are interested in learning about the cutting edge of identity and authorization.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Mike's DR post on Controlling the Big 7.

Favorite Securosis Posts

• Adrian Lane: The Temptation of the Developer. A scarier "insider threat".
• David Mortman: Intel Software Guard Extensions (SGX) Is Mighty Interesting.
• Mike Rothman: Counterpoint: KNOX vs. AZA Throwdown. Great research (or anything, really) requires an idea, and then smart folks to poke holes in it to make it better. It was great to see Gunnar make strong counterpoints to Adrian's post, which was also great. That's why we hang out with smart guys: they make us smarter.
• Rich: PCI Standards Flow Downhill. Ah, PCI.

Other Securosis Posts

• Google may offer client-side encryption for Google Drive.
• Incite 7/17/2013: 80 años.

Favorite Outside Posts

• David Mortman: How Experts Think.
• Mike Rothman: Dropbox, WordPress Used As Cloud Cover In New APT Attacks. Hiding in plain sight. With cloud services aplenty we will see much more of this – which makes detection that much harder.
• Adrian: Malware Hidden Inside JPG EXIF Headers. There are too many ways to abuse users through browsers.
• Rich: Kali Linux on a Raspberry Pi. Years ago I struggled to get Metasploit running on my wireless router as part of my DEFCON research. I never pulled it off, but this sure would have made life easier.

Research Reports and Presentations

• Quick Wins with Website Protection Services.
• Email-based Threat Intelligence: To Catch a Phish.
• Network-based Threat Intelligence: Searching for the Smoking Gun.
• Understanding and Selecting a Key Management Solution.
• Building an Early Warning System.
• Implementing and Managing Patch and Configuration Management.
• Defending Against Denial of Service (DoS) Attacks.
• Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.


Endpoint Security Buyer’s Guide: Anti-Malware, Protecting Endpoints from Attacks

After covering the challenges of protecting those pesky endpoints in the introductory post of the Endpoint Security Buyer's Guide, it is time to turn our attention to the anchor feature of any endpoint security offering: anti-malware. Anti-malware technologies have been much maligned. In light of the ongoing (and frequently successful) attacks on devices 'protected' by anti-malware tools, we need some perspective – not only on where anti-malware has been, but on where the technology is going, and how that impacts endpoint security buying decisions.

History Lesson: Reacting No Bueno

Historically, anti-malware technologies used virus signatures to detect known bad files – a blacklist. It's ancient history at this point, but as new malware samples accelerated to tens of thousands per day, this model broke. Vendors could neither keep pace with the number of files to analyze nor update their hundreds of millions of deployed AV agents with gigabytes of signatures every couple of minutes. So anti-malware vendors adopted new technologies to address the limitations of the blacklist, including heuristics to identify attack behavior on endpoints, and various reputation services to identify malicious IP addresses and malware characteristics.

But the technology is still inherently reactive. Anti-malware vendors cannot protect against an attack until they have seen and analyzed it – either the specific file, or recognizable and identifiable tactics or indicators to watch for. They need to profile the attack and push updated rules down to each protected endpoint. "Big data" signature repositories in the cloud, cataloging known files both safe and malicious, have helped address the problem of distributing billions of file hashes to every AV agent: if an agent sees a file it doesn't recognize, it asks the cloud for a verdict. But that is still a short-term workaround for the fundamental issue with blacklists. Against modern, randomly mutating polymorphic malware, expecting to reliably match identifiable patterns is unrealistic – no matter how big a signature repository you build in the cloud. Blacklists can block simple attacks using common techniques, but they are ineffective against advanced malware attacks from sophisticated adversaries. Anti-malware technology needs to evolve, and it cannot rely purely on file hashes. We described the early stages of this evolution in Evolving Endpoint Malware Detection, so we will just summarize here.

Better Heuristics

You cannot depend on reliably matching what a file looks like – you need to pay much more attention to what it does. This is the concept behind the heuristics anti-malware offerings have been built on in recent years. The problem with early heuristic offerings was insufficient context to know whether an executable was taking a legitimate action. Malicious actions were defined generically for a device, generally based on operating system characteristics, so false positives (blocking legitimate actions) and false negatives (failing to block attacks) were both common: lose/lose. Heuristics have since evolved to factor in authorized application behavior. This advancement has dramatically improved heuristic accuracy, because rules are built and maintained for each application. Okay, not every application, but at least the 7 applications targeted most often by attackers: browsers, Java, Adobe Reader, Word, Excel, PowerPoint, and Outlook. These applications have been profiled to identify authorized behavior.
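To make that concrete, here is a minimal sketch of application-aware behavioral rules. This is a toy illustration of the concept only – the application names, behavior labels, and rule structure are hypothetical, not any vendor's actual engine:

```python
# Toy illustration of application-aware behavioral whitelisting.
# Each frequently targeted application has a profile of authorized
# behaviors; anything not on the profile is blocked. The names and
# behavior labels here are hypothetical.

ALLOWED = {
    "browser":      {"network_connect", "render_html", "write_cache"},
    "adobe_reader": {"render_pdf", "print_document"},
    "java":         {"network_connect", "execute_bytecode"},
}

def verdict(app: str, behavior: str) -> str:
    profile = ALLOWED.get(app)
    if profile is None:
        # No per-app profile: fall back to generic OS-level heuristics.
        return "unknown - apply generic heuristics"
    return "allow" if behavior in profile else "block"

print(verdict("adobe_reader", "render_pdf"))          # allow
print(verdict("adobe_reader", "capture_keystrokes"))  # block
```

Real engines obviously model far richer behavior, but the deny-by-default structure is the point.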
And anything unauthorized is blocked. Sound familiar? Yes, this is a type of whitelisting: only authorized activities are allowed. By understanding all the legitimate functions within a constrained universe of frequently targeted applications, a significant chunk of attack surface can be eliminated. To use a simple example, there is no good reason for anything to capture keystrokes while a user fills out a form on a banking website, and it is decidedly fishy to take a screen grab of a form with PII on it. These activities would have been missed before – both screen grabs and reading keyboard input are legitimate functions in other contexts – but application context enables us to recognize and stop them. That doesn't mean attackers won't continue targeting operating system vulnerabilities, other applications (outside the big 7), or employees with social engineering. But this approach has made a big difference in the efficacy of anti-malware technology.

Better Isolation

The next area of endpoint innovation is the sandbox. We have talked about sandboxing malware within a Network-based Malware Detection device, which also lets you focus on what a file does before it executes on a vulnerable system. But isolation zones for testing potentially malicious code are appearing on endpoints as well. The idea is to spin up a walled garden for a limited set of applications (the big 7, for example) that shields the rest of the device from those applications. Many of us security-aware folks have been running these risky applications in virtual machines on our endpoints for years, but that approach only suited the technically savvy and never saw broad usage within enterprises. To find market success, isolation products must maintain a consistent user experience. It is still early for isolation technologies, but the approach – even down to virtualizing different processes within the OS – shows promise, and is definitely one to keep an eye on. Of course sandboxes are not a panacea. If the isolation technology uses any base operating system services (network stacks, printer drivers, etc.), the device remains vulnerable to attacks on those services – even when applications run in an isolated environment. So isolation technology does not remove the need to manage hygiene (patching and configuration) on the device, as we will discuss in the next post.

Total Lockdown

Finally, there is the total lockdown option: define an authorized set of applications/executables that can run on the device, and block everything else. This Application Whitelisting (AWL) approach has been around for more than a decade but remains a niche use case for endpoint protection. It is unlikely ever to go mainstream, because it breaks the end-user experience – and end users don't like that much. If an application an employee wants to run isn't authorized, they are out of business – unless either IT manages a very quick


New Paper: Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment

Detecting malware feels like a losing battle. Between advanced attacks, innovative attackers, and well-funded state-sponsored and organized crime adversaries, organizations need every advantage they can get to stop the onslaught. We first identified and documented Network-based Malware Detection (NBMD) devices as a promising technology back in early 2012, and they have made a difference in detecting malware at the perimeter. Nothing is perfect, of course, but every little bit helps. Nothing stays static in the security world either, so NBMD technology has evolved along with the attacks it needs to detect. So we updated our research to account for changes in scalability, accuracy, and deployment:

The market for detecting advanced malware on the network has seen rapid change in the 18 months since we wrote the first paper. Compounding the changes in attack tactics and control effectiveness, competition for network-based malware protection has intensified dramatically, and every network security vendor either has introduced a network-based malware detection capability or soon will. This creates a confusing situation for security practitioners, who mostly need to keep malware out of their networks and are uninterested in vendor sniping and trash talking. Accelerating change and increasing confusion usually indicate it is time to wade in again and revisit our findings, to make sure you understand the current decision criteria – in this case for detecting malware on your network.

So this paper updates our original research to make sure you have the latest insight for your buying decisions. The landing page is in our Research Library, and you can also download Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment (PDF) directly. We would like to thank Palo Alto Networks for licensing the content in this paper. Obviously we wouldn't be able to do the research we do, or offer it to you without cost, without companies supporting our work.


PCI Standards Flow Downhill

Payment gateways and payment processors have to pass PCI requirements just like merchants do, and they don't like it any more than you do – as evidenced by a recent post by Stephen Ames of Shift4. He is pissed about a new interpretation of PA-DSS, provided to his QSA outside the officially published guidance and standards, which places PA-DSS Requirement 4.2.7 always in scope. From the post:

However, my PA-QSA stated that PA-DSS Requirement 4.2.7 is now always in scope, regardless of whether or not there is a user database within the application. … I've searched the PA-DSS for a security requirement that aligns with PCI DSS 11.5 – File Integrity Monitoring – and there are none. I'm certain that most application vendors would not take responsibility for file integrity monitoring at merchant sites. And I'm unable to understand why the SSC is forcing that upon application vendors, when they don't even have that requirement written into the PA-DSS. I searched the PCI FAQ database and found no reference to a reinterpretation of PA-DSS Requirement 4.2.7 requiring vendors to take responsibility for file integrity monitoring of their PA-DSS applications running in merchant environments. Once again, PA-DSS Requirement 4.2.7 aligns with DSS Requirement 10.2 and user access, not DSS Requirement 11.5. … "The SSC sends out compliance guidance to the assessor community." … it now appears the PCI SSC has fallen back into its old ways of keeping participating organizations in the dark.

While file activity monitoring – and database activity monitoring, for that matter – are often used as compensating controls for PCI DSS Requirement 10.2, they are not prescribed by the standard. Rather than accept an 'always-on' requirement – and what policies would even be appropriate without a database to monitor? – Mr. Ames is trying to engage the community to devise a rational policy for when to apply monitoring and when not to. But Stephen is not going to get a better response than "those assessors are drinking the PCI Kool-Aid," no matter how much sense his arguments make. Assessors cannot speak frankly: several I know have received phone calls from the PCI Council after writing blog posts or comments that interpreted – or worse, ameliorated – PCI scripture. They were reminded that they must always frame the PCI standard in a positive light or forfeit their ability to remain assessors. So no frank public discussion will take place. This sort of thing has been going on for a long time, with no sign of getting better. The Council publishes the PCI standards, which are insulated from public critique by the mandatory agreements signed by assessors and participating organizations – so even the most knowledgeable parties, the very people who advised the Council, cannot speak out without breaking their agreements. That's why, when things like non-guidance guidance are published, there is little subsequent discussion. By design, information flows in only one direction: downhill.
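As a point of reference for what is being argued over: file integrity monitoring in the PCI DSS 11.5 sense boils down to baselining cryptographic hashes of critical files, then re-scanning on a schedule and alerting on drift. A minimal sketch – the monitored paths and baseline storage here are hypothetical deployment details:

```python
import hashlib
import json
from pathlib import Path

# File integrity monitoring boiled down: baseline SHA-256 hashes of
# critical files, then re-scan on a schedule and flag differences.
# The monitored paths below are placeholders for a real deployment.

MONITORED = [Path("/opt/payment-app"), Path("/etc/payment-app")]
BASELINE = Path("fim-baseline.json")

def snapshot() -> dict:
    """Hash every regular file under the monitored paths."""
    return {
        str(f): hashlib.sha256(f.read_bytes()).hexdigest()
        for root in MONITORED
        for f in root.rglob("*")
        if f.is_file()
    }

def changes(baseline: dict, current: dict) -> set:
    """Files added, removed, or modified since the baseline."""
    return {p for p in baseline.keys() | current.keys()
            if baseline.get(p) != current.get(p)}

if not BASELINE.exists():
    BASELINE.write_text(json.dumps(snapshot()))   # first run: set baseline
else:
    for path in sorted(changes(json.loads(BASELINE.read_text()), snapshot())):
        print(f"ALERT: integrity change detected in {path}")
```

Commercial FIM adds tamper-resistant baseline storage, change whitelists, and reporting – which hints at why application vendors balk at owning this inside merchant environments.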


Google may offer client-side encryption for Google Drive

From Declan McCullagh at CNet:

Google has begun experimenting with encrypting Google Drive files, a privacy-protective move that could curb attempts by the U.S. and other governments to gain access to users' stored files. Two sources told CNET that the Mountain View, Calif.-based company is actively testing encryption to armor files on its cloud-based file storage and synchronization service. One source who is familiar with the project said a small percentage of Google Drive files is currently encrypted.

This is a tough technical problem for usability, but very positive if Google rolls it out to consumers. I am uncomfortable with Google's privacy policies, but their security team is top-notch, and when ad tracking isn't in the equation they do some excellent work. Chrome will already encrypt all your sync data – the only downside is that you need to be logged into Google, so ad tracking is enabled while you browse.
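For those unfamiliar with the model, client-side encryption means files are encrypted before they leave the device, so the provider only ever stores ciphertext. Here is a minimal sketch of the general idea using the Python cryptography library – an illustration of the concept, not Google's actual design:

```python
from cryptography.fernet import Fernet

# Client-side encryption sketch: the key is generated and kept on
# the client, so the storage provider only ever sees ciphertext.

key = Fernet.generate_key()        # stays local; never uploaded
f = Fernet(key)

plaintext = b"quarterly numbers, draft 3"
ciphertext = f.encrypt(plaintext)
# upload_to_cloud(ciphertext)      # hypothetical sync call

# Later, on any device that holds the key:
assert f.decrypt(ciphertext) == plaintext
```

The hard part is not the crypto – it is key management: lose the key and the data is gone; let the provider hold the key and you lose most of the privacy benefit. That is the usability problem noted above.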


Incite 7/17/2013: 80 años

If you want a feel for how long 80 years is, here are a few facts. In 1933 Herbert Hoover was President until March, when FDR took office. The Great Depression was well underway in the US and spreading around the world. Hitler first rose to power in Germany. And Prohibition was repealed in the US. I'll certainly drink to that. Some famous folks were born in 1933 as well – luminaries such as Joan Collins, Larry King, and Yoko Ono. Have you seen Larry or Yoko lately? Yeah, 80 seems pretty old.

Unless it's not. My father-in-law turned 80 this year. In fact his birthday was yesterday, and he looks a hell of a lot better than most 80-year-olds. He joked at his birthday party over the weekend that 80 is the new 60. For him it probably is. He has been both lucky and very healthy. We all think his longevity can be attributed to his outlook on life. He has what we jokingly call "the Happy Gene." In the 20 years I have been with the Boss I have seen him mad twice. Twice. It's actually kind of annoying – I probably got mad twice already today. But the man is unflappable. He's a stockbroker, and has been for 35 years, after 20 years in retail. Stocks go up, he's cool. Stocks go down, he's cool. Clients yell at him, he's cool. He just doesn't get bent out of shape about anything. He does get fired up about politics, especially when I intentionally bait him, because we see things from opposite sides. He gets excited about baseball, and has been known to scream at the TV during Redskins games. But once the game is done or the discussion is over, he's done. He doesn't hold onto anger or perceived slights or much of anything. He just smiles and moves on. It is something I aspire to.

The Boss said a few words at his party and summed it up perfectly. She had an entire speech mapped out, but when I heard her first sentence I told her to stop. It's very hard to sum up a lifestyle and philosophy in a sentence, but she did it, and anything else would have obscured the beauty of her observation. Worry less, enjoy life more. That's it. That's exactly what he does, and it has worked great for 80 years. It seems so simple, yet it's so hard to do. So. Hard. To. Do. But for those, like my father-in-law, who can master worrying less… a wonderful life awaits. Even when it's not so wonderful.

Happy Birthday, Sandy. I can only hope to celebrate many more.

–Mike

Photo credit: "Dad's 80th Birthday Surprise" originally uploaded by Ron and Sandy with Kids

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

• The Endpoint Security Buyer's Guide: Introduction
• Continuous Security Monitoring: Classification; Defining CSM; Why. Continuous. Security. Monitoring?
• Database Denial of Service: Attacks; Introduction
• API Gateways: Implementation; Key Management; Developer Tools
• Security Analytics with Big Data: Deployment Issues; Integration; New Events and New Approaches; Use Cases; Introduction

Newly Published Papers

• Quick Wins with Website Protection Services
• Email-based Threat Intelligence: To Catch a Phish
• Network-based Threat Intelligence: Searching for the Smoking Gun
• Understanding and Selecting a Key Management Solution
• Building an Early Warning System

Incite 4 U

Social responsibility: Before I get too far, I need to acknowledge that this is definitely a bit of he said/she said.
Now that that is out there, what we know is that Microsoft released a patch for a bug that security researcher Tavis Ormandy (who works for Google, but I think that's incidental here) discovered and posted to the Full Disclosure list. Microsoft stated last week that the bug has been actively used in targeted attacks since it was disclosed. Tavis was clear that he didn't notify Microsoft before posting the details publicly. Here's what I think: we all have a social responsibility. While MS may have treated Tavis poorly in the past – justified or not – his actions put the rest of us at risk. It's about users, not Microsoft or the researcher. If Tavis had known the bug was being used in the wild, I would support full disclosure. If the vendor doesn't respond or tries to cover it up while users are at risk, disclose publicly and quickly. But at least give them a chance, which requires thinking about the impact on everyone else first. To be balanced, vendors have a responsibility to respond in a timely fashion, even when it isn't convenient. But to release a bug with no evidence that anyone else is using it? That doesn't seem responsible. – RM

Identity theft fullz up: Interesting research hit this week from Dell SecureWorks, via a Dark Reading article, about complete packets of stolen information ("fullz"), including healthcare information, appearing in underground marketplaces for around $1,200. With a full kit an attacker has everything needed for identity theft, including counterfeit documents such as driver's licenses and health insurance cards. Also interesting: credit cards with CVV can be had for $1-2 each, although a prestige card (such as an AmEx Black) can cost hundreds. This is Mr. Market at work – prices for commodities go down, but valuable information still commands a premium. It appears online game accounts are a fraud growth market, because less stringent fraud detection makes it easier to turn virtual items into real money. – MR

Easier than making coffee: Gunnar Peterson's keynote at CIS 2013 was full of valuable witticisms (the entire presentation is on his blog), but it made a particularly profound point about code security: it needs to be so easy that a seventeen-year-old can do it time and time again without fail. Gunnar drew a parallel with Five Guys' recipe for burger success against behemoth competitors:


The Temptation of the Developer

Threat modeling involves figuring out ways the system can be gamed and your [fill in the blank] can be compromised. Great modelers can take anything and come up with new ways to question the integrity of the system. When it comes to 0-day attacks, many focus on increasingly sophisticated fuzzers and other techniques for finding holes in code, like the tactics described in the Confessions of a Cyber Warrior interview. But Jeremiah takes a step back to find yet another logic flaw in our assumptions about our adversaries, in a succinct and eye-opening tweet: as 0-days go for 6 to 7 figures, imagine the temptation for rogue developers to surreptitiously implant bugs in the software supply chain. I can see it now: the application security marketing engine will ramp up around the rogue developer threat – just another form of the insider attack bogeyman – starting in 3, 2, 1… But the threat is real, and the question is whether awareness of this kind of adversary would change how you do application security. I'm sure my partners (and many of you) have opinions about what should be done differently.


Intel Software Guard Extensions (SGX) Is Mighty Interesting

I am a bit in over my head here, but take a look at the first two presentations from the Workshop on Hardware and Architectural Support for Security and Privacy. Intel is preparing to introduce a new capability in its processors to support secure encrypted memory spaces on commodity CPUs. The objective is to provide applications with a secure 'enclave' (their term): a protected memory and execution space. It's called Intel Software Guard Extensions (SGX), and it could be significant – especially for battling malware and for cloud computing. Think secure key management in the cloud, with hardware-enforced sandboxes on endpoints. Developers will need to code their software to use the feature, so this isn't an overnight fix. However…

It seems like a powerful tool for battling malware on endpoints, especially if operating system manufacturers leverage the capability in Windows and OS X to further improve their sandboxes. And imagine a version of Java or Flash that's fully isolated. It could offer material improvements to hypervisor security, for example by eliminating memory parsing attacks. And encrypted memory should mean volatile memory (RAM) is protected even from cloud administrators trying to peek at encryption keys. HSM vendors should also keep an eye on this, because it might offer security comparable to hardware-based key managers (though probably not for key generation and a few other important pieces, for those who need them). Think of virtual HSMs and key managers that run within the cloud, without the worry of keys being compromised in memory. I freely admit some of this is over my head, but if I am reading it right, the long-term potential to improve security is impressive.
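To illustrate the property that makes this exciting – code outside the enclave can request operations that use a secret, but can never read the secret, even with full access to memory – here is a purely conceptual analogy in Python. It bears no relation to the actual SGX programming model, which enforces this in hardware:

```python
import hashlib
import hmac
import os

# Conceptual analogy only: an 'enclave' exposes operations that use
# a secret without ever exposing the secret itself. SGX enforces
# this in hardware with encrypted memory; a Python object does not.

class ToyEnclave:
    def __init__(self):
        self._key = os.urandom(32)   # analogue of a key sealed in the enclave

    def sign(self, message: bytes) -> bytes:
        """Callers receive the result of a key operation, never the key."""
        return hmac.new(self._key, message, hashlib.sha256).digest()

enclave = ToyEnclave()
tag = enclave.sign(b"request to cloud API")
print(tag.hex())
# In the SGX model, even an administrator dumping RAM would see only
# encrypted enclave pages - the analogue of _key staying unreachable.
```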


FireStarter: KNOX vs. AZA mobile throwdown

A group of us were talking about key takeaways from the 2013 Cloud Identity Summit last week in Napa. CIS 2012 focused on getting rid of passwords, but the conversation centered on infrastructure and identity standards such as OAuth, OpenID Connect, and SAML, which provide tools to authenticate users to cloud services. 2013 was still about minimizing use of passwords, but the focus moved to the client side, where the rubber meets the road: mobile apps. Our discussion highlighted different opinions about the two principal models presented at the conference for solving single sign-on (SSO) on mobile devices. The first model, the Authorization Agent (AZA), is an app that handles authentication and authorization services for other apps. The second, KNOX, is a Samsung-specific container that provides SSO to the apps inside it. It's heartening to hear developers stress that unless they get the end user experience right, the solution will not be adopted. No disagreement there, but buyers have other issues of equal importance, and I think we are going to see mobile clients embrace these approaches over the next couple of years, so the issues are worth discussing in an open public forum. So I am throwing out the first pitch in this debate.

Statement

I believe the KNOX "walled garden" mobile app authentication model offers a serious challenge to Authorization Agents (AZA) – not because KNOX is technically superior, but because it provides a marginally better user experience while offering IT better management, stronger security, and a familiar approach to mobile apps and data security. I expect enterprises to be much more comfortable with the KNOX approach, given how they prefer to manage mobile devices. I am not endorsing a product or a company here – just saying I believe this subtle difference in approach is very important to buyers.

Problem

User authentication on mobile devices must address a variety of goals: a good user experience, not passing user IDs and passwords around, single sign-on, support for flexible security tokens, Two-Factor Authentication (2FA) or equivalent, and data security controls – just to name a few. But the priority is to provide single sign-on for corporate applications on mobile devices. Unfortunately the security model in most mobile operating systems is primarily intended to protect apps from other apps, so SSO (which must manage authentication for multiple other apps) is a difficult problem. Today you need to supply credentials for every app you use, and some apps require re-authentication whenever you switch between apps. It gets even worse if you use lengthy passwords and a password manager – then the process looks something like this: start the app you need, bounce over to the password manager, log into the password manager, grab credentials, bounce back to the original application, and finally supply credentials (hopefully pasting them in so you don't forget or make an invisible typo). At best it's a pain in the ass.

Contrasting Approaches

Two approaches were discussed during CIS 2013. I will simplify their descriptions, probably at the expense of precision, so please comment if you believe I have mischaracterized either solution. First, let's look at the AZA workflow for user authentication. The AZA 'agent' is essentially an app that acts as a gateway to all other (corporate) apps. It works a bit like a directory listing, available once the user authenticates to the AZA agent. The workflow is roughly:
a. The app validates the user name and password (1).
b. The app presents a list of apps which have been integrated with it.
c. The user selects the desired app, which requests authentication tokens from an authorization server (2).
d. The tokens enable the mobile application to communicate with the cloud service (Box, Evernote, Twitter, etc.). If the service requires two-factor authentication, the user may be provided with a browser-based token (3) to supplement their username and password.
e. The user can now use the app (4).

For this to work, each app must be modified slightly to cooperate with the AZA.

KNOX is also an agent, but not a peer to other apps – instead it is a container that manages apps. The KNOX (master) app collects credentials much like an AZA, and once the container app is opened it also displays all the apps KNOX knows about. The user-visible difference is that you cannot go directly to a corporate app without first validating access to the container. But the more important difference for data security is that the container provides additional protection to its apps and stored data: the container can verify stack integrity, where direct application logins cannot. KNOX also requires apps to be slightly modified to work within the container, but it does not require a different authentication workflow. User authentication for KNOX looks much the same – but not on iOS.

Rationale

Both approaches improve on standalone password managers, and each offers SSO, but AZA is slightly awkward because most users instinctively go directly to the desired app, not to the AZA service. That is a minor annoyance from a usability standpoint but a major management issue – IT wants to control app usage and data, and users will forget and log directly into productivity apps rather than going through the AZA if they can. To keep that from happening, AZA providers need app vendors to alter their apps to (a) check for the presence of an AZA, (b) force users through the AZA if present, and (c) pass user credentials to the AZA. The more important issue is data security and compliance as drivers for mobile technologies. The vast majority of enterprises use Virtual Desktop Infrastructure (VDI) to manage mobile data and security policy, and the KNOX model mirrors the VDI model: a secure, controlled container rather than a loosely coupled federation of apps linked to an authorization agent. A container provides a clear control model that security organizations are comfortable with today. A loose confederation of applications cannot guarantee data security or policy enforcement the way a container can. One final point on buying centers: buyers do not look for the 'best'


Counterpoint: KNOX vs. AZA throwdown

Adrian makes a number of excellent points. Enterprises need better usability and management for mobile devices, but co-mingling those goals complicates solutions. Adrian contrasted two approaches, AZA and KNOX, which I also want to discuss. Let me start by saying I think we are in the first or second inning for mobile. I do not expect today's architectural choices to stick for 10+ years; I think we will see substantial evolution, root and branch, for a while. Here is a good example of a mobile project: The Wall Street Journal just published its 1,000th edition on iPad. It is a great example of a mobile app – it works in both offline and online modes, is easy to navigate, and is packed with information (okay, just ignore the editorial page) – a great success. The way they started the project is instructive:

Three and a half years ago, The Wall Street Journal locked six people in a windowless room and threw down a secret challenge: Build us an iPad app. You have six weeks. And so we did. We started with a blank slate – no one had ever seen a tablet news app before.

This is not uncommon for mobile projects. A few takeaways: we are learning our lessons as we go; there is an architectural vision, but it evolves quickly and adapts; and did I mention we are learning as we go? Evolution today is less about enterprise-level grand architecture (we already have those – they are called iOS and Android, themselves evolving while we scramble to keep up) and more about incremental improvement. Looking at AZA vs. KNOX from ground level, I see two attractive projects for the enterprise, with AZA more focused on the here and now. KNOX seems to be shooting for DoD today, and the enterprise down the road.

This all reminds me of how Intel does R&D, rolling out platforms in a tick/tock pattern: ticks are whole new platforms and tocks are incremental improvements. To me AZA looks like a classic tock: it cleans up some things for developers, improves the capabilities of existing systems, and connects some dots. KNOX is a tick: a new ballgame, new management, and a new way to write apps. That doesn't mean KNOX cannot succeed, but would the WSJ start a new project by learning a new soup-to-nuts architecture just to handle security requirements (remember: you need to launch in six weeks)? I know we security people wish they would, but how likely is that in the near term, really? The positive way to look at this choice is that, for a change, we have two interesting options.

I may be overly pessimistic. It is certainly possible that soup-to-nuts security models – encompassing hardware, MAC, apps, and platforms – will rule from here on out. There is no doubt plenty of room for improvement. But the phrase I keep hearing on mobile software projects is MVP: Minimum Viable Product. KNOX doesn't fit that approach – at least not for most projects today. I can see why Samsung wants to build a platform – they do not want to be just another commoditized Android hardware manufacturer, undifferentiated from HTC or Googorola. But there is more to it than tech platforms: what do customers want? There is at least one very good potential customer for KNOX, with DoD-type requirements. But will KNOX scale to banks? To healthcare, manufacturing, and ecommerce? That is an open question, and app developers in those sectors will determine the winner(s).


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.