Friday Summary: January 28, 2011

At Cal, even though my major was software, I had to take several electronics courses. When I got to college I had programming experience, but not the first clue about electronics. Resistors, LEDs, logic gates, Karnaugh maps, and EPROMs were well outside my understanding. But within the first few weeks of classes they had us building digital alarm clocks and television remote controls from scratch. The first iterations were all resistors on breadboards, then we moved to chips and EEPROMs… which certainly made the breadboards neater. Things got much more complex a couple semesters in, when we had to design and implement CPUs – and the design not only had to work, but it actually had to meet design specifications for low power, low chip count, and high clock rates. Regardless, I loved the hardware classes, and I gave serious consideration to changing my major from software to hardware. But that pretty well died when I left college.

Over the last couple months I have been picking up some basic projects for fun. Little stuff like replacing light bulbs with LEDs in an old stereo receiver, putting automated light switches into some of the wall plates, and making my own interconnect cables. A new multimeter and soldering iron, and I was off to the races. Pretty simple stuff, but then I wanted to do something a little more complex. I had a couple ideas but wanted to see if other people had already done something similar. As with most projects, I consulted The Google, and that's when I stumbled on the world of Arduino. This little device keeps coming up on chat boards for all the projects I was looking at. As I started doing my research I found the Arduino documentary, which resulted in one of those "Oh, holy $#^!" moments. As long as I have been around software and participated in open source software projects, I had never considered the possibility of open source hardware. About 1/3 of the way into the documentary, they talk about physically creating objects from open source plans, using Arduino as the controller, and creating complex electronic control systems by assembling simple circuits other people have posted on the net. There are all sorts of how-tos on digital audio converters and, since Arduino offers the basic infrastructure to communicate with the computer through a USB port, it provides a common controller interface.

Technically I have been aware of Arduino for a couple years now, as I see them at DEFCON, but I never really thought about owning one. My impression was that it was a toy for instructional purposes. That assessment is way off the mark. I mean, screwdrivers and hammers are incredibly simple tools, but essential when working on your home improvement/car/whatever. This thing is a simple-to-use but very powerful tool for interfacing computers and other logic controllers with just about any electronic device. I am sure those of you who have been playing with these for a few years are saying "Well, duh!", so I acknowledge I am late to the party. But if you are not aware of this little device, it's a cool tool with hundreds of easy examples for learning about electronics. So I just placed my order for a starter set, and am now looking for plans to build my own DAC for my iMac. I am hopeful it will sound better than the standard ones you can buy. Playing with malicious USB drives sounds interesting as well. And don't forget our Cloud Security Alliance training February 13th in San Francisco!
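If you are curious what that common controller interface looks like from the computer side, here is a minimal sketch (assuming the third-party pyserial package; the port name is a placeholder and will differ on your machine) that just reads whatever an Arduino prints over its USB serial link:

```python
# Minimal sketch: read lines an Arduino prints via Serial.println() over USB.
# Assumes the pyserial package (pip install pyserial); the device path is a
# placeholder -- on a Mac it usually looks something like /dev/tty.usbmodemXXXX.
import serial

PORT = "/dev/tty.usbmodem1411"   # hypothetical port name
BAUD = 9600                      # must match Serial.begin(9600) in the Arduino sketch

with serial.Serial(PORT, BAUD, timeout=2) as arduino:
    for _ in range(10):
        line = arduino.readline().decode(errors="replace").strip()
        if line:
            print("arduino says:", line)
```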
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mike Rothman: Firewalls are Evolving.
  • Adrian's DB2 Security Overview white paper.
  • Nice mention by Schwartz Communications.

Favorite Securosis Posts

  • Mike Rothman: The Greenfield Project. I know it's lame to vote for yourself. But this is a great thought experiment.
  • Rich: Microsoft, Oracle, or Other. Not really about security, but Adrian does a great job explaining the current database market drivers.
  • Adrian Lane & David Mortman: Intel's Red Herring.

Other Securosis Posts

  • React Faster and Better: Organizing for Response.
  • Register for Our Cloud Security Training Class at RSA.
  • Incite 1/25/2011: The Real-Time Peanut Gallery.
  • Rich at Macworld.
  • Friday Summary: January 21, 2010.

Favorite Outside Posts

  • Mike Rothman: He Who is Not Busy Being Born is Busy Dying. What Gunnar said. Yes, we do security, but we need to get smarter about the business. Period.
  • Rich: The New School on the Ponemon data breach study. While Larry's methodology has improved significantly, I think the cost-per-record-lost metric is one of the most misleading in our industry. There is no way it will accurately reflect your own losses with such wide variation between organizations.
  • Adrian Lane: Russell eviscerates the Ponemon study.
  • Pepper: Android Trojan details. Multiple very clever and very naughty bits combine to 'hear' and exfiltrate spoken or punched-in credit card data.
  • David Mortman: Seven Dirty Words of Cloud Security.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.

Top News and Posts

  • Apple Taps Former Navy Information Warrior David Rice for Global Director of Security.
  • Five men arrested on a charge of launching pro-WikiLeaks DDoS attacks.
  • Facebook hack apparently an API bug. Accounts were not hijacked.
  • Exclusive: Q&A with hacker "srblche srblchez".
  • Android Trojan Collects Credit Card Details.
  • "White Space" tracking database. Not security news, but an interesting look at some of the behind-the-scenes details on reuse of TV spectrum and Google's thirst for data.
  • Opera Security Flaw Fixed.
  • Goatse Security Site Hacked.
  • DHS to End Color-Coded 'Threat Level' Advisories. I know many of you are crying in a corner, asking how you can conduct yourselves without the big colorful fear-o-meter.


Microsoft, Oracle, or Other

I ran across Robin Harris's analysis of the Hyder transaction database research project, and his subsequent analysis of how Microsoft could threaten Oracle in the data center, on his ZDNet blog. Mr. Harris raises the issue of disruption in the database market, a topic I have covered in my Dark Reading posts, but he also points out how he thinks this could erode Oracle's position in the data center. I think looking at Hyder and similar databases as disruptive is spot on, but I think the effects Mr. Harris outlines are off the mark. They both miss the current trends I am witnessing and seem to be couched in the traditional enterprise data center mindset. To sketch out what I mean, I first offer a little background.

From my perspective, during the Internet boom of the late '90s, Oracle grew at a phenomenal rate because every new development project or web site selected Oracle. Oracle did a really smart thing in making training widely available, so every DBA I knew had some Oracle knowledge. You could actually find people to architect and manage Oracle, unlike DB2, Sybase, and Informix (SQL Server was considered a 'toy' at the time). What's more, the ODBC/JDBC connectors actually worked. This combination made development teams comfortable with choosing Oracle, and the Oracle RDBMS seemed ubiquitous as small firms grew out of nothing. Mid-sized firms chose databases based upon DBA analysis of requirements, and they tended to skew the results to the platforms they knew.

But this time it's different. This latest generation of developers, especially web app developers, are not looking for transactional consistency. They don't want to be constrained by the back end, and most don't want to be burdened by learning about a platform that does not enhance the user experience or usability of their applications. Further, basic application behavior is changing in the wake of fast, cheap, and elastic cloud services. Developers conceptualize services based upon the ability to leverage these resources. Strapping a clunky relational contraption on the back of their cheap/fast/simple/agile services is incongruous. It's clear to me that there is growth in databases, but the choice is non-relational databases or NoSQL variants. Hyder could fill the bill, but only if it were a real live service, and only if transactional consistency were a requirement. Ease of use, cheap storage, throughput, and elasticity are the principal requirements.

The question is not whether Oracle will lose market share to Microsoft because of Hyder – nobody is going to rip out an entrenched Oracle RDBMS, as migration costs and instability far outweigh Hyder's perceived benefits. The issue is that developers of new applications are losing interest in relational databases. The choice is not 'Hyder vs. Oracle', it's 'can I do everything with flat files/NoSQL, or do I need a supporting instance of MySQL/Postgres/Derby for transactional consistency'? The architectural discussion for non-enterprise applications has fundamentally shifted. I am not saying relational databases are dead. Far from it. I am saying that they are not the first – or even second – choice for web application developers, especially those looking to run on cloud services. With the current app development surge, relational technologies are an afterthought. And that's important, as this is where a lot of the growth is happening. I have not gone into what this means for database security, as that is the subject for future posts.
But I will say that monitoring, auditing and assessment all change, as does the application of encryption and masking technologies.
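To make that choice concrete – flat files/NoSQL versus a small supporting relational instance – here is a minimal, illustrative sketch in Python. The data is made up, and SQLite stands in for the MySQL/Postgres/Derby class of options; it is not a recommendation of either approach:

```python
# Two ways a small web app might persist a user profile.
import json
import sqlite3

profile = {"user": "jdoe", "plan": "free", "prefs": {"theme": "dark"}}

# Option 1: schema-less "flat file / NoSQL" style -- just serialize the document.
with open("profiles.jsonl", "a") as f:
    f.write(json.dumps(profile) + "\n")

# Option 2: a supporting relational instance when transactional consistency matters
# (SQLite here purely for illustration).
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS profiles (user TEXT PRIMARY KEY, plan TEXT)")
with conn:  # wraps the insert in a transaction; rolls back on error
    conn.execute(
        "INSERT OR REPLACE INTO profiles (user, plan) VALUES (?, ?)",
        (profile["user"], profile["plan"]),
    )
conn.close()
```

The first option is exactly the cheap/fast/simple path new web apps gravitate toward; the second only earns its keep when you genuinely need the transaction.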


Friday Summary: January 14, 2011

Apparently I got out of New York just in time. The entire eastern seaboard got "Snowmageddon II, the Blanketing" a few hours after I left. Despite a four-legged return flight, I did actually make it back to Phoenix. And Phoenix was just about the only place in the US where it was not snowing, as I heard there was snow in 48 states simultaneously.

I was in NYC for the National Retail Federation's 100th anniversary show. It was my first. I was happy to be invited, as my wife and her family have been in retail for decades, and I was eager to speak at a retail show. And this was the retail show. I have listened to my family about retail security for 20 years, and it used to be that their only security challenge was shrinkage. Now they face just about every security problem imaginable, as they leverage technology in every facet of operations. Supply chain, RFID, POS, BI systems, CRM, inventory management, and web interfaces are all at risk. On the panel were Robert McMillion of RSA and Peter Engert of Rooms to Go. We were worried about filling an hour and a half slot, and doubly anxious about whether anyone would show up to talk about security on a Sunday morning. But the turnout was excellent, with a little over 150 people, and we ended up running long. Peter provided a pragmatic view of security challenges in retail, and Robert provided a survey of security technologies retail merchants should consider. It was no surprise that most of the questions from the audience were on tokenization and removal of credit cards. I get the feeling that every merchant who can get rid of credit cards – those who have tied the credit card numbers to their database primary keys – will explore tokenization.

Oddly enough, I ended up talking with tons of people at the hotel and its bar, more than I did at the conference itself. People were happy to be there. I guess they were there for the entire week of the show, and very chatty. Lots of marketing people interested in talking about security, which surprised me. And they had heard about tokenization and wanted to know more. My prodding questions about POS and card swipe readers – basically: when will you upgrade them so they are actually secure? – fell on deaf ears. Win some, lose some, but I think it's healthy that data security is a topic of interest in the retail space.

One last note: as you can probably tell, the number of blog entries is down this week. That's because we are working on the Cloud Security Alliance Training Course. And fitting both the stuff you need to know and the stuff you need to pass the certification test into one day is quite a challenge. Like all things Securosis, we are applying our transparent research model to this effort as well! So we ask that you please provide feedback or ask questions about any content that does not make sense. I advise against asking for answers to the certification test – Rich will give you some. The wrong ones, but you'll get them. Regardless, we'll post the outlines over the next few days. Check it out!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's DR post on Vodafone's breach.
  • Rich quoted in the Wall Street Journal.
  • Adrian at the National Retail Federation Show, telling the audience they suck at security. Did I say that?
  • Mike, talkin' to Shimmy about Dell, brand damage, and the Security Bloggers meet-up.

Favorite Securosis Posts

  • Rich: The Data Breach Triangle. We didn't push out a lot of content this week so I'm highlighting an older post. In line with Gunnar's post on where we spend, I find it interesting that the vast majority of our security spending focuses on ingress… which in many ways is the toughest problem to solve.
  • Mike Rothman: What do you want to see in the first CSA Training Course? Yes, we have a murderers' row of trainers. And you should go. But first tell us what needs to be in the training…
  • David Mortman: What Do You Want to See in the First Cloud Security Alliance Training Course?
  • Gunnar Peterson: What Do You Want to See in the First Cloud Security Alliance Training Course? Sensing a theme here?
  • Adrian Lane: Mobile Device Security: 5 Tactics to Protect Those Buggers.

Other Securosis Posts

  • Funding Security and Playing God.
  • Incite 1/12/2011: Trapped.

Favorite Outside Posts

  • Rich: Gunnar's back of the envelope. Okay, I almost didn't pick this one because I wish he had written it for us. But although the numbers aren't perfect, it's hard to argue with the conclusion.
  • Mike Rothman: Top 10 Things Your Log Management Vendor Won't Tell You. Clearly there is a difference between what you hear from a vendor and what they mean. This explains it (sort of)…
  • David Mortman: Incomplete Thought: Why Security Doesn't Scale…Yet. Damn you @Beaker! I had a section on this very need in the upcoming CSA training. And, of course, you said it far better…
  • Adrian Lane: Can't decide between this simple explanation of the different types of cloud databases, and this pragmatic look at cloud threats.
  • Gunnar Peterson: Application Security Conundrum by Jeremiah Grossman, with honorable mention to The Virtues of Monitoring.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • China CERT: We Missed Report On SCADA Hole.
  • SAP buying SECUDE.
  • TSA Worker Gets 2 Years for Planting


Funding Security and Playing God

I was reading shrdlu's post on Connecting the risk dots over on the Layer 8 blog. I thought the point of contention was how to measure cost savings. Going back and reading the comments, that's not it at all:

"we can still show favorable cost reduction by spotting problems and fixing early." You have to PROVE it's a problem first … This is why "fixing it now vs fixing it sooner" is a flawed argument. The premise is that you MUST fix, and that's what executives aren't buying. We have to make the logic work better.

She's right. Executives are not buying in, but that's because they don't want to. They don't want to comply with SOX or pay their taxes either, but they do it anyway. If your executives don't want to pay for security testing, use a judo move and tell them you agree; but the next time the company builds software, do it without QA. Tell your management team that they have to PROVE there is a problem first. Seriously.

I call this the "quality architect conundrum". It's so named because a certain CEO (who shall remain nameless) raised this same argument every time I tried to hire an architect who made more than minimum wage. My argument was "This person is better, and we are going to get better code, a better product, and happier customers. So he is worth the additional salary." He would say "Prove it." Uh, yeah. You can't win this argument, so don't head down that path.

Follow my reasoning for a moment. For this scenario I play God. And as God, I know that the two architectural candidates for software design are both capable of completing the project I need done. But I also know that during the course of the development process, Architect A will make two mistakes, and Architect B will make eight. They are both going to make mistakes, but how many and how badly will vary. Some mistakes will be fixed in design, some will be spotted and addressed during coding, and some will be found during QA. One will probably be with us forever because we did not see the limitation early enough, and we'll be stuck. So as God I know which architect would get the job done with fewer problems, resulting in less work and less time wasted. But then again, I'm God. You're not. You can't prove one choice will cause fewer problems before they occur.

What we discover, being God or otherwise, is that from design through the release cycles a) there will be bugs, and b) there will be security issues. Sorry, it's not optional. If you have to prove that there is a problem before you can fund security, you are already toast. You build it in as a requirement. Do we really need to prove Deming was right again? It has been demonstrated many times, with quantifiable metrics, that finding issues earlier in the product development cycle reduces overall costs to an organization. I have demonstrated, within my own development teams, that fixing a bug found by a customer is an order of magnitude more expensive than finding and fixing it in house. While I have seen diminishing returns on some types of security testing investments, and some investments work out better than others, I found no discernible difference in the cost of security bugs vs. those having to do with quality or reliability. Failing deliberately, in order to justify action later, is still failure.
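As a back-of-the-envelope illustration of the Deming point, here is a toy calculation. The cost multipliers are hypothetical – chosen only to reflect the order-of-magnitude jump from in-house to customer-found bugs mentioned above – and the distribution of where each architect's mistakes get caught is invented for the example:

```python
# Hypothetical relative cost to fix one defect, by the phase where it is found.
COST = {"design": 1, "coding": 3, "qa": 10, "customer": 100}

def total_cost(found_in):
    """Total cost of a set of mistakes, given the phase where each one is caught."""
    return sum(COST[phase] for phase in found_in)

# Architect A makes two mistakes, both caught early; Architect B makes eight,
# several of which survive into QA and one of which reaches a customer.
a = total_cost(["design", "coding"])
b = total_cost(["design", "design", "coding", "coding", "qa", "qa", "qa", "customer"])
print(f"Architect A: {a} cost units, Architect B: {b} cost units")  # 4 vs. 138
```

The exact numbers do not matter; the point is that you cannot know the split in advance, but you can know that catching whatever mistakes occur earlier is reliably cheaper.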


BSIMM meets Joe the Programmer

I always read Gary McGraw's research on BSIMM. He posts plenty of very interesting data there, and we generally have so little good intelligence on secure code development that these reports are refreshing. His most recent post with Sammy Migues, on Driving Efficiency and Effectiveness in Software Security, raises some interesting questions, especially around the use of pen testing. The questions of where and how to best deploy resources are questions every development team has, and I enjoyed his entire analysis of the results of different methods of resource allocation. Still, I have trouble relating to a lot of Gary's research, as the BSIMM study focused on firms with resources far in excess of anything I have ever seen.

I come from a different world. Yeah, I have programmed at large corporations, but the teams were small and isolated from one another. With the exception of Oracle, budgets for tools and training were just a step above non-existent. Smaller firms I worked for did not send people to training – HR hired someone with the skills we needed and let someone else go. Brutal, but true. So while I love the data Gary provides, it's so foreign that I have trouble dissecting the findings and putting them to practical use. That's my way of saying it does not help me in my day job.

There is a disconnect: I don't get asked questions about what percentage of the IT budget goes to software security initiatives. That's both because the organizations I speak with have software development as a separate department from IT, and because the expenditure for security-related testing, tools, development manpower, training, and management software is embedded deeply enough within the development process that it's not easy to differentiate generic development stuff from security. I can't frame the question of efficiency the same way Gary and Sammy do. Nobody asks what their governance policy should be. They ask: What tools should I use to track development processes? Within those tools, what metrics are available and meaningful? The entire discussion is a granular, pragmatic set of questions around collecting basic data points. The programmers I speak with don't bundle SDL touchpoints this way, and they don't qualify as balanced. They ask "of design review, code review, pen testing, assessment, and fuzzing – which two do I need most?" 800 developer buckets? 60, heck even 30, BSIMM activities? Not even close. Even applying a capability maturity model to code development is on the fringe. Mainly that's because the firms/groups I worked in were too small to leverage a model like BSIMM – they would have collapsed under the weight of the process itself. I talk to fewer large firms on a semi-regular basis, and plenty of small programming teams, and using BSIMM never comes up. Now that I am on the other side of the fence as an analyst, and I speak with a wider variety of firms, BSIMM is an IT mindset I don't encounter with software development teams.

So I want to pose this question to the developers out there: Is BSIMM helpful? Has BSIMM altered the way you build secure code? Do you find the maturity model process or the metrics helpful in your situation? Are you able to pull out data relevant to your processes, or are the base assumptions too far out of line with your situation? If you answered 'Yes' to any of these questions, were you part of the study? I think the questions being asked are spot on – but they are framed in a context that is inaccessible or irrelevant for the majority of developers.

Mr. Cranky Faces Reality

There are some mornings I should not be allowed to look at the Internet. Those days when I think someone peed in my cornflakes. The mornings when every single media release, blog post, and news item looks like total BS. I think maybe they are just struggling for news during the holiday season, or maybe I am just unusually snarky. I don't know. Today was one of those days.

I was combing through my feed reader and ran across Brian Prince's article, Database Security Reminder: Don't Let Your Guard Down. The gist is that if you move your database into the cloud you could be hacked, especially if you don't patch the database. Uh, come again? Brian's point is that if you don't have a firewall to protect against port scanning, you help hackers locate databases. And if you set Oracle to allow unlimited password attempts, your accounts can be brute-forced. And if you expose an unpatched version of Oracle to the Internet, vulnerabilities can be exploited.

Now I am annoyed. Was this supposed to be news because the database was running on Amazon's EC2, and that's cloud, so it must be newsworthy? Was this a subtle way of telling us that the database vulnerability assessment and activity monitoring vendors are still important and relevant in the cloudy world? Was there a message in there about the quality of Amazon's firewall, such that databases can be located by port scans? Or perhaps a veiled criticism that Amazon's outbound monitoring failed to detect suspicious activity? I figure most companies by now have gotten the memo that databases get hacked. And they know you need to correctly configure and patch them prior to deployment. So how is this different from a database within your own IT data center, and why is this reminder newsworthy?

Turns out it is. I continue to read more and more news, and see database hack after database hack after database hack. And that is right on the heels of the Gawker/Lifehacker/Gizmodo screwup. I have lost count of the other hospitals, universities, and Silverpop customers in the last month who are victims of database breaches. Okay, I concede Brian has a point. Maybe a reminder to get the basics right is worthy of a holiday post, because there are plenty of companies still messing this up. I was thinking this was pure hyperbole, telling us stuff we already know. Apparently I was wrong. I am calm now, though still depressed. Thanks for sharing, Brian. I think I'll go back to bed.
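If you want to sanity-check Brian's first point – no firewall means attackers can find your database with a simple port scan – a minimal sketch like this will tell you whether the listener even answers from the outside. The hostname is a placeholder, and 1521 and 3306 are simply the default Oracle listener and MySQL ports:

```python
# Minimal reachability check: can the database listener be reached from here?
# Run it from OUTSIDE your environment; an open listener is an invitation to brute force.
import socket

HOST = "ec2-198-51-100-10.example.com"   # hypothetical cloud instance
PORTS = {1521: "Oracle listener", 3306: "MySQL"}

for port, name in PORTS.items():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    reachable = (s.connect_ex((HOST, port)) == 0)   # 0 means the TCP connect succeeded
    s.close()
    print(f"{name} ({port}): {'REACHABLE - firewall it' if reachable else 'filtered/closed'}")
```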


Friday Summary: December 24, 2010

It's the holiday season and I should be taking some time off and relaxing, watching some movies and seeing friends. Sounds good. If only I had that 'relax' gene sequence, I would probably be off having a good time rather than worrying about security on Giftmas eve. But here I am, reading George Hulme's Threatpost article, 2011: What's Your IT Security Plan?. I got to thinking about this. Should I wait to do security work for 2011? I mean, at your employer is one thing – who cares about those systems when there is eggnog and pumpkin pie? I'm talkin' about your stuff! One point I make in the talks I give on software security is: don't prioritize security out in favor of features when building code. And in this case, if I put off security in favor of fun, security won't get done in 2011. So I went through the process of evaluating home computer and network security over the last couple days. I did the following:

  • Reassess Router Security: Logged into my router for the first time in like two years to verify security settings. Basically all of the security settings – most importantly encryption – were correct. I did find one small mistake: I forgot to require the management connection to be forced over HTTPS, but as I had not been logged in for years, I am pretty sure that was not a big deal. I did however confirm the firmware was written by Methuselah – and while he was a pretty solid coder, he hasn't fixed any bugs in years. It was good to do a sanity check and take a fresh look.
  • Migration to 1Password: I have no idea why I waited so long to do this. 1Password rocks! I now have every password I use secured in this and synchronized across all my computers and mobile devices. And the passwords are far better than even the longest passphrases I can remember. Love the new interface. Added bonus on the home machine: I can leave the UI open all the time, then autofill all web passwords to save time. If you have not migrated to this tool, do it.
  • Deploy Network Monitoring: We see tons of stuff hit the company firewall. I used to think UTM and network monitoring were overkill. Not so much any more. Still in the evaluation and budgetary phase, but I think I know what I want and should deploy by year's end. I want to see what hits, and what comes through. Yes, I am going to have to actually review the logs, but Rich wrote a nice desktop widget a couple years ago which I think I can repurpose to view log activity with my morning coffee. It will be just like working IT again!
  • Clean Install: With the purchase of a new machine last week I did not use the Apple migration assistant. As awesome and convenient as that Mac feature is, I did a fresh install. Then I re-installed all my applications and merged the files I needed manually. Took me 8 hours. This was a just-in-case security measure, to ensure I don't bring any hidden malware/trojans/loggers along for the ride. The added benefit was that all the software I do not have set to update itself automatically got revved. And many applications were well past their prime.
  • Rotate Critical Passwords: I don't believe that key rotation for encryption makes you any safer if you do key management right, but passwords are a different story. There are a handful of passwords that I cannot afford to have cracked. It's been a year, so I swapped them out.
  • Mobile Public Internet: Mike mentioned this in one of his Friday Favorites, but this is one of the best posts I have seen all year for general utility: Shearing Firesheep with the Cloud. What does this mean? Forget Firesheep for a minute. General man-in-the-middle attacks are still a huge problem when you leave the comfy confines of your home with that laptop. What this post describes is a simple way to protect yourself using public Internet connections. Use the cloud to construct an encrypted tunnel for you to use wherever you go. And it's fast. So as long as you set it up and remember to use it, you can be pretty darn safe using public WiFi to get email and other services. (A rough sketch of the idea follows this list.)
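Here is a minimal sketch of that last idea (not the exact recipe from the linked post): stand up a small cloud host you control, run a SOCKS proxy over SSH to it, and point the browser at localhost so everything rides the encrypted tunnel across the coffee shop WiFi. The host name and port below are placeholders:

```python
# Minimal sketch: launch an SSH dynamic (SOCKS) proxy to a cloud host you control.
# -D 1080: local SOCKS proxy, -N: no remote command, -f: drop to the background.
# Afterwards, set the browser/system SOCKS proxy to localhost:1080.
import subprocess

CLOUD_HOST = "me@tunnel.example.com"   # hypothetical instance you spun up
SOCKS_PORT = "1080"

subprocess.run(["ssh", "-D", SOCKS_PORT, "-N", "-f", CLOUD_HOST], check=True)
print(f"SOCKS proxy up on localhost:{SOCKS_PORT} - route your traffic through it")
```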
That's six things I did over the course of the week. Of course you won't read this anywhere else, because it's six things, and no other security information source will give you six things. Five, or seven, but never six. Some sort of mythical marketing feng-shui numbers that can't be altered without making some deity angry. Or maybe it was that you get cramps? I forget. There is probably a wiki page somewhere that describes why that happens.

This is the last Friday Summary of the year, so I wanted to say, from Rich Mogull, Mike Rothman, Chris Pepper, David Mortman, Gunnar Peterson, Dave Lewis, Melissa (aka Geekgrrl), and myself: thanks for reading the blog! We enjoy the comments and the give-and-take as much as you do. It makes our job fun and, well, occasionally humiliating.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich was quoted so many times on the WikiLeaks DDoS that he DDoSed all media outlets with the sheer volume of his quotes. They had to shut him down. The rest of us were too far gone as slackerly curmudgeons (or was that curmudgeonly slackers?) to speak to anyone.

Favorite Securosis Posts

  • We all loved Dealtime 2010: Remembering the Departed as the best post of the week. Except for Mike, who was unhappy we would not let him graph the specific hype cycles.

Other Securosis Posts

  • Incite 12/22/2010: Resolution.
  • 2011 Research Agenda: Quantum Cloudiness, Supervillain Shields, and No-BS Risk.
  • React Faster and Better: New Data for New Attacks, Part 1.
  • NSA Assumes Security Is Compromised.
  • 2011 Research Agenda: the


NSA Assumes Security Is Compromised

I saw an interesting news item: the NSA has changed their mindset and approach to data security. Their new(?) posture is that Security Has Always Been Compromised. Debora Plunkett of the NSA's Information Assurance Directorate stated:

There's no such thing as 'secure' any more. The most sophisticated adversaries are going to go unnoticed on our networks. We have to build our systems on the assumption that adversaries will get in. We have to, again, assume that all the components of our system are not safe, and make sure we're adjusting accordingly.

I started thinking about how I would handle this problem, and it became mind-boggling. I assume compartmentalization and recovery are the strategy, but the details are of course the issue. Just the thought of going through the planning and reorganization of a data processing facility the size of what the NSA (must) have in place sent chills down my spine. What a horrifically involved process that must be! Just the network and security technology deployment would be huge; the disaster recovery planning and compartmentalization – especially what to do in the face of incomplete forensic evidence – would be even more complex.

How would you handle it? Better forensics? How would you scope the damage? How do you handle source code control systems if they are compromised? Are you confident you could identify altered code? How much does network segmentation buy you if you are not sure of the extent of a breach? To my mind this is what Mike has been covering with his 'Vaults' concept of segmentation, part of the Incident Response Fundamentals. But the sheer scope and complexity cast those recommendations in a whole new light. I applaud the NSA for the effort: it's the right approach. The implementation, given the scale and importance of the organization, must be downright scary.


Quantum Unicorns

Apparently we are supposed to fear the supercomputer of the future. According to Computerworld, the clock is ticking on encryption. Yes, you guessed it, the mythical "quantum computer" technology is back in the news again, casting its shadow over encryption. It will make breaking encryption much, much easier.

"There has been tremendous progress in quantum computer technology during the last few years," says Michele Mosca, deputy director of the Institute for Quantum Computing at the University of Waterloo in Waterloo, Ontario, Canada. "It's a game changer."

And when they perfect it, the Playstation 37 will rock! Unfortunately it's powered by leprechaun's gold and unicorn scat, so research efforts have been slowed by scarcity of resources. Seriously, I have been hearing this argument since I got into security 15 years ago. Back then we were hearing about how 3-DES was doomed when quantum technology appeared. It was, but that had more to do with Moore's Law and infant encryption technologies than anything else. I think everybody gets that if we have computers a million times faster than what we have today (and Flash finally runs reasonably fast), we'll be able to break existing encryption technology. But how much of the data you encrypt today will still have value in 20 years? Or more likely in 40 years? I am still willing to bet we'll see 100" foldable carbon nanotube televisions, or pools of algae performing simple arithmetic, before quantum cryptography. And by that time, maybe all government laptops will have full disk encryption.
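For a rough sense of why raw speed alone is a slow road against modern key lengths, here is some back-of-the-envelope arithmetic. The trial rate is an arbitrary, generous assumption, and this ignores algorithmic shortcuts entirely – it is only meant to show what a raw speedup buys against brute force:

```python
# Back-of-the-envelope brute-force arithmetic (illustrative numbers only).
import math

speedup = 1_000_000   # "a million times faster than what we have today"
print(f"A {speedup:,}x faster computer removes ~{math.log2(speedup):.0f} bits of key strength")

trials_per_second = 1e18          # wildly generous assumed throughput
seconds_per_year = 60 * 60 * 24 * 365
for bits in (112, 128):           # roughly 3DES-class and AES-128 key sizes
    years = (2 ** bits) / trials_per_second / seconds_per_year
    print(f"{bits}-bit keyspace at {trials_per_second:.0e} trials/sec: ~{years:.1e} years")
```

Which is part of why the interesting question is the one above: how much of what you encrypt today still needs to be secret in 20 or 40 years?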


Research Agenda 2011: the Open Research Version

It's time to post my research agenda for 2011. My long-winded Securosis compatriot has chosen a thematic approach to discussing coverage areas, and while it's an excellent – and elegant – idea, I am getting lost amongst all of the elements presented. So unlike Mike, I won't be presenting my coverage areas so artistically. Instead I will stick to the technology variants I hear customers asking about, as well as the trends I see within different sub-segments of the security industry. For the areas of security I cover, I know what customers ask us about, and I see a few evolving trends. Most have to do with the cloud – surprise! – and how to take advantage of cheap, plentiful resources without getting totally hosed in the process. We are a totally transparent research firm, so I will throw out some ideas and ask what you think is most important. We try to balance what customers think is important, what we think is important, and what vendors think is important. It's easy when the three overlap, but that is seldom the case. So I will carve out what I think we should cover, and ask you for your ideas and feedback.

Cloud trends

Logging in the Cloud: Cheap, fast, and easy usually wins, so cheap cloud resources coupled with basic logging services seem like a key proposition for security and operations. We talked a lot about SIEM this year, as there was lots of angst among SIEM customers looking to squeeze more value from their deployments while reducing costs. This year I see more firms moving operations to the cloud and needing to cut through the fog to determine what the frack is going on. Or what to store. Or how it should be secured.

Web Application Security: Understanding and Selecting a Web Application Security Program is the most popular research paper we have ever produced, and downloads remain very high two years after launch. Our intention is to either refresh that paper and relaunch – as the content is even more applicable today than it was then – or drill down into specific technologies such as dynamic web application testing (black box & grey box) and WAF for in-house services and SaaS.

Content Security: This umbrella covers email security, anti-spam, DLP (Lite), secure web gateways, global intelligence, and anti-virus. And yes, virus and spam are still a problem. And yes, the DLP features bundled with content security are ready for prime time. We have written a lot about content security, and when we did we were witnessing the evolution of SaaS and cloud-based content security offerings. Now these are proven services. We plan to do a thorough job, producing Understanding and Selecting a Cloud Content Security Solution.

Consolidation and maturing market trends

Quick Wins with Tokenization: Tokenization is one of the few technologies with serious potential to cut costs and simplify security. While adoption rates are still low, we get tons of inquiries. Our previous work in tokenization has outlined the available technology variants. We are looking at application of the technology and quick wins for adoption. PCI is the principal application, and the use case is fairly simple despite multiple tokenization options, but the long-term implications for health care data are equally compelling and slightly more complicated. We believe that the mid-market is moving towards SaaS-based solutions, and enterprise customers to in-house software. Edge tokenization, tokenization adoption rates, PCI scope reduction, and fraud detection are all open topics. We are open to suggestions on how to focus this paper.
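To make the "quick win" concrete, here is a minimal, illustrative vault-style tokenizer. The names and structure are hypothetical – not any particular vendor's design – but it shows the core idea: the real card number lives once in a protected vault, and only a random surrogate flows through the application databases that used to hold PANs:

```python
# Illustrative vault-style tokenization: a random surrogate replaces the PAN.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}   # token -> PAN; in practice an encrypted, access-controlled store

    def tokenize(self, pan: str) -> str:
        token = f"tok-{secrets.token_hex(8)}-{pan[-4:]}"   # keep last four for receipts/lookups
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]   # tightly restricted; most systems never call this

vault = TokenVault()
surrogate = vault.tokenize("4111111111111111")
print(surrogate)   # this is what lands in order and CRM tables, shrinking PCI scope
```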
Assessment: Much as we have seen a more holistic vision of where database security is headed, assessment vendors have evolved as well. We expect vendors to pitch different stories in order to differentiate themselves, but in this case each vendor genuinely has a different model for how assessment fits within the greater application security context. Internally, we have discussed a couple paper ideas on understanding the technologies, as well as a market update for the space as a whole. It has been apparent for some time that the assessment market is going in slightly different directions – I see four separate visions! Which best matches enterprise customer requirements? Where is the assessment market headed? It's totally confusing to customers trying to compare vendors and make sense of what would seem like a stable and mature segment.

Emerging trends

Building Security In: The single topic I believe benefits the most people is security in code development. Gunnar and I write a lot about how to build security into product development processes and have lots to say on the subject. "Quick Wins for Rugged", "Agile Process Adjustments for Secure Code Development", "Security Metrics in Code Development that Matter", "Truth, Lies and Fiction with Application Security", and last but not least, "Risk Management in Software Development" all merit research.

Continuous Controls Monitoring: We are often asked questions by customers interested in compliance monitoring, and this one is near the top of the list. Security and compliance controls are scattered throughout the organization, and the challenge is putting them under a single management umbrella.

ADMP: We have discussed several ideas for updating the original Database Activity Monitoring paper, as well as the evolution of DAM from a product to a feature. Yes, I called it evolution. A couple years ago Rich blogged about where he felt the database security and WAF markets needed to go. He called this Application & Database Monitoring & Protection. Several companies have realized all or part of this vision and are starting to "take it to the next level". But visions for how to leverage the technology are changing. Once again, several vendors offer different views of how the technology should be used.

Virtualization of Internet Domains: There is a great deal of discussion of needing a new Internet for security reasons. And there are many services – SCADA and ATMs come to mind – that should never have been put on the Internet. And there are


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.