By Mike Rothman
Speaking as a “master of the obvious,” it’s worth mentioning the importance of having a correct mindset heading into the new year. Odds are you’ve just gotten back from the holiday and that sinking “beaten down” feeling is setting in. Wow, that didn’t take long.
So I figured I’d do a quick reminder of the universal truisms that we know and love, but which still make us crazy. Let’s just cover a few:
There is no 100% security
I know, I know – you already know that. But the point here is that your management forgets. So it’s always a good thing to remind them as early and often as you can. Even worse, there are folks (we’ll get to them later) who tell your senior people (usually over a round of golf or a bourbon in some mahogany-laden club) that it is possible to secure your stuff.
You must fight propaganda with fact. You must point out data breaches, not to be Chicken Little, but to manage expectations. It can (and does) happen to everyone. Make sure the senior folks know that.
Compliance is a means to an end
There is a lot of angst right now (especially from one of my favorite people, Josh Corman) about the reality that compliance drives most of what we do. Deal with it, Josh. Deal with it, everyone. It is what it is. You aren’t going to change it, so you’d better figure out how to prosper in this kind of reality.
What to do? Use compliance to your advantage. Any new (or updated) regulation comes with some level of budget flexibility. Use that money to buy stuff you really need. So what if you need to spend some time writing reports with your new widget to keep the auditor happy? Without compliance, you wouldn’t have your new toy.
Don’t forget the fundamentals
Listen, most of us have serious security kung fu. Your management probably tasks folks like you with fixing hard problems and deflecting attackers from a lot of soft tissue, and leaves the perimeter and endpoints to the snot-nosed kid with his shiny new Norwich paper. That’s OK, but only if you periodically make sure things function correctly.
Maybe that means running Core against your stuff every month. Maybe it means revisiting that change control process to make sure that open port (which that developer just had to have) doesn’t allow the masses into your shorts.
If you are nailed by an innovative attack, shame on them. Hopefully your incident response plan holds up. If you are nailed by some stupid configuration or fundamental mistake, shame on you.
Widgets will not make you secure
Keep in mind the driving force for any vendor is to sell you something. The best security practitioners I know drive their projects – they don’t let vendors drive them. They have a plan and they get products and/or services to execute on that plan.
That doesn’t mean reps won’t try to convince you their widget needs to be part of your plan. Believe me, I’ve spent many a day in sales training helping reps to learn how to drive the sales process. I’ve developed hundreds of presentations designed to create a catalyst for a buyer to write a check. The best reps try to help you, as long as that involves making the payment on their 735i.
And even worse, as a reformed marketing guy, I’m here to say a lot of vendors will resort to bravado in order to convince you of something you know not to be true. Like that a product will make you secure. Sometimes you see something so objectionable to the security person in you, it makes you sick.
Let’s take the end of this post from LogLogic as an example. For some context, their post mostly evaluates the recent Verizon DBIR supplement.
What does LogLogic predict for 2010? Regardless of whether, all, some, or none, of Verizon’s predictions come true, networks will still be left vulnerable, applications will be un-patched, user error will causes breaches in protocol, and criminals will successfully knock down walls.
But not on a LogLogic protected infrastructure.
We can prevent, capture and prove compliance for whatever 2010 throws at your systems.
LogLogic customers are predicting a stress free, safe 2010.
Wow. Best case, this is irresponsible marketing. Worst case, this is clearly someone who doesn’t understand how this business works. I won’t judge (too much) because I don’t know the author, but still. This is the kind of stuff that makes me question who is running the store over there.
Repeat after me: A widget will not make me secure. Neither will two widgets or a partridge in a pear tree.
So welcome to 2010. Seems a lot like 2009 and pretty much every other year of the last decade. Get your head screwed on correctly. The bad guys attack. The auditors audit. And your management squeezes your budget.
Posted at Thursday 7th January 2010 5:44 pm
By Mike Rothman
It’s been quite a week, and it’s only Wednesday. The announcement of Securosis “Plus” went extremely well, and I’m settling into my new digs. Seems like the last two days just flew by. As I was settling in to catch some zzzz’s last night, I felt content. I put in a good day’s work, made some progress, and was excited for what the next day had to bring. Dare I say it? I felt happy. (I’m sure I’ve jinxed myself for another 7 years.)
It reminds me of a lyric from Shinedown that really resonated:
There’s a hard life for every silver spoon
There’s a touch of grey for every shade of blue
That’s the way that I see life
If there was nothing wrong,
Then there’d be nothing right
-Shinedown, What a Shame
It’s about contrast. If I didn’t have less than stellar job experiences (and I’ve had plenty of those), clearly I couldn’t appreciate what I’m doing now. It’s also a big reason why folks who have it pretty good sometimes lose that perspective: they don’t have much bad to contrast against. Keep that in mind, and if you need a reminder of how lucky you are, head down to the food bank for a few hours.
The most surprising thing to me (in a positive way) about joining the team is the impact of having someone else look at your work, challenge it, and suggest ways to make it better. Yesterday I sent the team a post that will hit FUDSEC on Friday. The first draft was OK, but once Rich, Adrian, Mort, and Chris Pepper got their hands on it and suggested some tuning, the post got markedly better. Then I got it.
Just to reinforce the notion, the quote in today’s InformationWeek Daily newsletter hit home as well:
If you want to go quickly, go alone.
If you want to go far, go together.
True dat. Have a great day.
Incite 4 U
This week Mike takes the bulk of the Incite, but did get some contributions from Adrian. Over the coming weeks, as we get the underlying systems in place, you’ll be getting Incite from more of the team. We’ll put our initials next to each snippet we write, so you know who to send nasty email to.
Monetizing Koobface: I’m fascinated by how the bad guys monetize their malware, so this story on Dark Reading highlighting some research from Trend Micro was interesting. The current scheme du jour is fake anti-virus. It must be working since over the holiday I got a call from my FiL (Father in Law) about how he got these pop-ups about needing anti-virus. Thankfully he didn’t click anything and had already made plans to get the machine re-imaged. - MR
Identity + Network = MUST: Gartner’s Neil MacDonald has a post entitled Identity-Awareness Should be a Feature, not a Product, where he’s making the point that as things virtualize and hybrid computing models prevail, it’s not an option to tie security policies to physical attributes. So pretty much all security products will need to tie into Active Directory, RADIUS and LDAP. Yes, I know most already do, but a while back IP to ID was novel. Now, not so much. - MR
Puffery Indeed: I had a personal ban on blogging about the Cloud in 2009, as there were a lot of people doing a lot of talking but saying very little. This NetworkWorld post on “Tone-deaf Unisys official on why cloud computing rocks; Or what shouldn’t get lost in all the puffery over cloud technology” is the embodiment of the puffery. The point of the post – as near as I can tell – was to say companies need to “embrace cloud computing” and “security concerns are the leading cause of enterprise and individual users’ hesitancy in adopting cloud computing”. Duh! The problem is that the two pieces of information are based on unsubstantiated vendor press releases and double-wrapped in FUD. Richard Marcello of Unisys manages to portray cloud technologies as a form of outsourcing US jobs, and Paul Krill says these are a mid-term competitive requirement for businesses. Uh, probably not on either account. Still, giving them the benefit of the doubt, I checked the ‘survey’ that is supposed to corroborate hesitancy of Cloud adoption, but what you get is an unrelated 2007 survey on Internet trust. A subsequent ‘survey’ link goes to a Unisys press release for c-RIM products. WTF? I understand ‘Cloud’ is the hot topic to write about, but unless your goal is to totally confound readers while mentioning a vendor a bunch of times, just stop it with the random topic association. - AL
Speeds and Feeds Baby: Just more of an observation, because I’ve been only tangentially covering network security over the past few years. It seems speeds and feeds still matter – at least from the standpoint of beating your chest in press releases. Fortinet is the latest guilty party, talking up IPv6 throughput. Big whoop. It kills me that “mine is bigger than yours” is still used as a marketing differentiator. I’m probably tilting at windmills here a bit, since these filler releases keep the wire services afloat, so it’s not all bad. - MR
Time for the Software Security Group: It’s amazing how we can get access to lots of data and still ignore it. Gary McGraw, one of the deans of software security, has a good summary of his ongoing BSIMM (Building Security In) research on the InformIT blog. He covers who should do software security, how big your group should be, and also how many software security folks there are out there (not enough). In 2010, band-aids (WAFs, etc.) will still prevail, but if you don’t start thinking of how to structurally address the issue, which means a PROGRAM and a group responsible to execute on that program, things are never going to improve. - MR
Saving Private MySQL: Charles Babcock’s post on “MySQL’s Former Owner Can’t ‘Save’ It After Selling It” was thought provoking. It seems a “no-brainer” that, since Oracle owns MySQL, they should be allowed to do what they please with the code. But factoring in potential anti-competitive aspects of killing MySQL makes it a deeper decision. Charles makes the point that it is somewhat disingenuous to sell an open source product that is viewed as community property, and the seeming hypocrisy of the seller now complaining about the fate of the product. I have maintained that there is no reason for Oracle to kill MySQL off as it can drive upsell opportunities for the Oracle database if properly managed. Realistically speaking, fiefdoms within Oracle will fight for their turf, so all possibilities must be considered. I believe MySQL is too valuable to let wither and die. The piece is worth a read! - AL
Attacking People: Rich just posted a good piece on Macworld about the typical scams Mac users see. Yes, they are the same as what non-Mac users see - phishing, identity theft, auction fraud, etc. I remarked on Twitter that it’s been the same for 10,000 years: folks stealing from folks. CSOAndy makes that point on his blog as well, but talking about the Twitter DNS attack before the holidays. No, DNSSEC would not have stopped this attack because it was an attack on people. Their DNS service got owned, therefore they did. So all the technology in the world is great, but people are still our weakest link, by far. - MR
Beware the FUD: We live in a 24/7 world and that means the media is always looking for something to drive page views. Bill Brenner at CSO mentions 3 examples of stories that got a lot of airtime, but probably shouldn’t have because they were mostly crap. Like the Black Screen of Death, which wasn’t really a problem. PrevX lets the story run for a couple of days and then calls a “my bad.” Guess I don’t blame them, since it was generating plenty of press. Though not sure how admitting you were wrong impacts the credibility bank. He also calls out some Chicken Little behavior from Paul Kurtz and his cyber-katrina scenario. I can just see 30,000 folks stuck in the Superdome without the ability to Tweet. Keep in mind, this is a bed of our own making. We like hyper-connectivity, but there is always a downside. - MR
Posted at Wednesday 6th January 2010 3:09 pm
By Mike Rothman
EMC/RSA announced the acquisition of Archer Technologies for an undisclosed price. The move adds an IT GRC tool to EMC/RSA’s existing technologies for configuration management (Ionix) and SIEM/Log Management (EnVision).
Though EMC/RSA’s overall security strategy remains a mystery, they claim to be driving towards packaging technologies to solve specific customer use cases – such as security operations, compliance, and cloud security. This kind of packaging makes a lot of sense, since customers don’t wake up and say “I want to buy widget X today” – instead they focus on solving specific problems. The rubber meets the road based on how the vendor has ‘defined’ the use case to suit what its product does.
Archer as an IT GRC platform fills in the highest level of the visualization by mapping IT data to business processes. The rationale for EMC/RSA is clear. Buying Archer allows existing RSA security and compliance tools, as well as some other EMC tools, to pump data into Archer via its SmartSuite set of interfaces. This data maps to business processes enumerated within Archer (through a ton of professional services) to visualize process and report on metrics for those processes. This addresses one of the key issues security managers (and technology companies) grapple with: showing relevance. It’s hard to take security data and make it relevant to business leaders. A tool like Archer, properly implemented and maintained, can do that.
The rationale for Archer doing the deal now is not as clear. By all outward indications, the company had increasing momentum. They brought on Bain Capital as an investor in late 2008, and always claimed profitability. So this wasn’t a sale under duress. The Archer folks paid lip service to investing more in sales and marketing, and obviously leveraging the EMC/RSA sales force to accelerate growth. The vendor ranking exercises done by the big research firms also drove this outcome, as Archer faced an uphill battle competing against bigger players in IT GRC (like Oracle) for a position in the leader area. And we all know you need to be in the leader area to sell to large enterprises.
Ultimately it was likely a deal Archer couldn’t refuse, and that means a higher multiple (as opposed to lower). The deal size was not mentioned, though 451 Group estimates the deal was north of $100 million (about 3x bookings) – which seems too low.
IT GRC remains a large enterprise technology, with success requiring a significant amount of integration within the customer environment. This deal doesn’t change that because success of GRC depends more on the customer getting their processes in place than the technology itself working. Being affiliated with EMC/RSA doesn’t help the customer get their own politics and internal processes in line to leverage a process visualization platform.
Archer customers see little value in the deal, and perhaps some negative value since they now have to deal with EMC/RSA and inevitably the bigger organization will slow innovation. But Archer customers aren’t going anywhere, since their organizations have already bet the ranch and put in the resources to presumably make the tool work.
More benefit accrues to companies looking at Archer, since any corporate viability concerns are now off the table. Users should expect better integration between the RSA security tools, the EMC process automation tools, and Archer – especially since the companies have been working together for years, and there is already a middleware/abstraction layer in the works to facilitate integration. In concept anyway, since EMC/RSA don’t really have a sterling track record of clean and timely technology integration.
As with every big company acquisition, issues emerge around organizational headwinds and channel conflict. Archer was bought by the RSA division, which focuses on security and sells to the technology user. But by definition Archer’s value lies in visualizing not just technology, but other business processes as well. The success of this deal will hinge on whether Archer can “break out” of the RSA silo and go to market as part of EMC’s larger bag of tricks.
Interestingly enough, back in May ConfigureSoft was bought by the Ionix group, which focuses on automating IT operations and seemed like a more logical fit with Archer. As a reminder of the hazards of organizational headwinds, just think back to ISS ending up within the IBM Global Services group. We’ll be keeping an eye on this.
Issues also inevitably surface around channel conflict, especially relative to professional services. Archer is a services-heavy platform (more like a toolkit) that requires a significant amount of integration for any chance of success. To date, the Big 4 integrators have driven a lot of Archer deployments, but historically EMC likes to take the revenue for themselves over time. How well the EMC field team understands and can discuss GRC’s value will also determine ongoing success.
IT GRC is not really a market – it’s the highest layer in a company’s IT management stack and only really applicable to the largest enterprises. Archer was one of the leading vendors and EMC/RSA needed to own real estate in that sector sooner or later. This deal does not have a lot of impact on customers, as this is not going to miraculously result in IT GRC breaking out as a market category. The constraint isn’t technology – it’s internal politics and process.
We also can’t shake the nagging feeling that shifting large amounts of resources away from security and into compliance documentation may not be a good idea. Customers need to ensure that any investment in a tool like Archer (and the large services costs to use it) will really save money and effort within the first 3 years of the project, and is not being done to the exclusion of security blocking and tackling. The truth is it’s all too easy for projects like this to under-deliver or potentially explode – adding complexity instead of reducing it – no matter how good the tool.
Posted at Tuesday 5th January 2010 6:28 pm
We decided to slow this series down for the holidays, as we are at a point where participation from the user community is very important. With the new year we are kicking back into high gear, and encourage comments and critiques of the processes we are describing. Picking up where we left off, we are at the Discovery phase in the database security process, a critical part of scoping the overall work.
Personally, discovery and assessment is my favorite step in the database security process. This step always yielded surprises for my team: enterprise databases we did not know about; small personal databases, perhaps even embedded in applications; production data sets on test servers; tables with sensitive data copied into unsecured table-spaces; and cases where replication was turned on without our knowledge. This is over and above databases that were completely misconfigured – usually by a new DBA who did not know any better, but sometimes with security intentionally disabled to make administration easier. I have had clean scans on a Monday, only to find dozens of critical issues on Friday. And that’s really what we want to determine in the Discovery phase.
Before we can act on the plan we developed in the previous section (Planning, Part 1 and Part 2), we must determine the state of the database environment. After all, you have to know what’s wrong before you can fix it. What databases are in your environment, what purposes do they serve, what data do they host, and how are they set up? The first step in this phase is to find the databases.
In this stage we determine the location of database servers in the organization.
- Plan: How are you going to scan the environment? Which parts of the process are automated vs. manual? Make sure you have clear guidelines, and refine the scope to the portions of IT and the database types of interest. Also note that the person who created the plan may not be the person who runs the scan. Make sure the data needed for subsequent steps (database name, port number, database type, etc.) is communicated, or this entire process will need to be run again.
- Setup: Acquire and install tools to automate the process, or map out your manual process. Configure for your environment, specifying acceptable network address and port ranges. Network segmentation will affect deployment, and databases have multiple connection options, so plan accordingly. If you are keeping scan results in a database, create the structures and configure it.
- Enumerate: Run your scan, or schedule repeat scanning. Capture the results and filter out unwanted information to keep the data in scope for the project. Record the results as a baseline for future trend reports. In practice you will run this step more than once, as you discover databases you did not know existed, determine the credentials you were provided are insufficient, or find that subsequent steps require more information than you collected. Schedule scans to repeat at periodic intervals. If you are using a manual process, this step consists of contacting business units to identify assets and manually assessing each system.
- Document: Format data, generate reports, and distribute. Use results to seed data discovery and assessment tasks.
Database discovery can be performed manually or automated. Segmented networks, regional offices, virtual servers, multi-homed hosts, remapped standard ports, and embedded databases are all common impediments you need to consider. If you choose to automate, you will most likely use a tool that examines network addresses and interrogates network ports, which may not find every database instance but should capture the major installations. If you rely on network monitoring to discover databases, you will miss some. Regardless of your choice, you may not find everything on the first sweep, so consider scanning more than once. In a manual process you will need to work with business units to identify databases, and perform some manual testing to find any unreported ones. Understand what data you need to produce in this part of the process, as your database discovery results will feed data discovery and assessment.
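For the automated path, a first-pass sweep can be as simple as probing default listening ports across your address ranges. The sketch below is a minimal illustration in Python; the port-to-platform map is a common-knowledge subset, and it will miss remapped ports, named instances, and embedded databases – which is exactly why repeat sweeps and manual follow-up matter.

```python
import socket

# Default listening ports for common database platforms. Remapped ports,
# named instances, and embedded databases will NOT be found this way.
DB_PORTS = {
    1433: "Microsoft SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "IBM DB2",
}

def discover_databases(hosts, timeout=0.5):
    """Probe each host for open default database ports; return findings."""
    findings = []
    for host in hosts:
        for port, platform in DB_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                # connect_ex returns 0 when the port accepted a connection
                if s.connect_ex((host, port)) == 0:
                    findings.append({"host": host, "port": port,
                                     "platform": platform})
    return findings
```

The output feeds directly into the Document step: host, port, and suspected database type for each hit, which subsequent credentialed assessment requires.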
Identify Applications, Owners, and Data:
Now we take the identified databases and identify application dependencies, database owners, and the kinds of data stored.
- Plan: Develop a plan to identify the application dependencies, data owners, and data types/classifications for the databases enumerated in the previous stage. Determine manual vs. automated tasks. If you have particular requirements, specify and itemize required data and assign tasks to qualified personnel. Define data types that require protection. Determine data collection methods (monitoring, assessment, log files, content analysis, etc.) to locate sensitive information.
- Setup: Databases, data objects, data, and applications have ownership permissions that govern their access and use. For data discovery create regular expression templates, locations, or naming conventions for discovery scans. Test tools on known data types to verify operation.
- Identify Applications: For applications, catalog connection methods and service accounts where appropriate.
- Identify Database Owner(s): Determine who owns each database installation. Owners provide the credentials and accounts needed for dedicated scans, so obtain credentials from them.
- Discover Data: Data discovery returns location, schema, data type, and other metadata. Adjusting discovery rules requires re-scanning.
- Document: Generate reports for operations and data security.
In essence this series of tasks is multiple discovery processes. Discovering the applications that attach to a database and how it is used, and what is stored within the database, are two separate efforts. Both can be performed by a credentialed investigation of the platform and system, or by observing network traffic. The former provides complete results at the expense of requiring credentials to access the database system, while passive network scanning is easier but provides incomplete results.
If you have existing security policies or compliance requirements, data discovery is a little easier because you know what you are looking for. If this is the first time you are scanning databases for applications and data, you may not have precise goals, but the results still aid other activities. Manual discovery requires you to define the data types you are interested in detecting, so planning and rule development consume significant time in this phase.
Identification of applications and data provides information necessary to determine security and regulatory requirements. This task defines not only the scope of the scanning in the next task, but also subsequent monitoring and reporting efforts in different phases of this project.
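As an illustration of the regular expression templates mentioned above, here is a minimal sketch of pattern-based data discovery over sampled column values. The patterns and the majority-vote threshold are illustrative assumptions; production rules need tuning (Luhn checks for card numbers, column-name hints, locale-specific formats) to cut false positives.

```python
import re

# Hypothetical first-pass patterns; real deployments tune these heavily.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_sample(table, column, values):
    """Return the data types whose pattern matches a majority of sampled values."""
    hits = []
    for label, pattern in PATTERNS.items():
        matches = sum(1 for v in values if pattern.search(str(v)))
        if values and matches / len(values) > 0.5:
            hits.append({"table": table, "column": column, "type": label})
    return hits
```

Each hit records where sensitive data lives, which is precisely the metadata the Document step hands off to assessment and monitoring.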
Assess Vulnerabilities & Configurations:
Database Assessment is the analysis of database configuration, patch status, and security settings. It is performed by examining the database system both internally and externally – in relation to known threats, industry best practices, and IT operations guidelines. It is important to note that with assessment, there is a large divide between having requirements and the tools or queries that gather the information. The setup portion of this task, particularly in script development, takes far more time than the scans themselves.
- Define Scans: In a nutshell, this is where you define what you wish to accomplish. Compile a list of databases that need to be scanned, and determine the requirements for each database type. Investigate best practices and review security and compliance needs, covering both internal and external requirements. Assign scans to maintain proper separation of duties.
- Setup: Determine how you want to accomplish your goals. Which functions will be automated, and which manual? Are the scans credentialed or passive? Download updated policies from tool and database vendors, and create custom policies where needed. Create scripts to collect the information, determine priorities, and suggest remediation steps for policy violations.
- Scan: Scans are an ongoing effort, and most scanning tools provide scheduling capabilities. Collect results and store.
- Distribute Results: Scan results will spotlight critical issues, variations from policy, and general recommendations. Filter unwanted data according to audience, then generate reports. Reporting includes feeding automated trouble ticket and workflow systems.
Database discovery, data discovery, and database security analysis are conceptually simple. Find the databases, determine what they are used for, and figure out if they are secure. In practice they are much harder than they sound. If you run a small IT organization you probably know where your one or two database machines are located, and should have the resources to find sensitive data.
When it comes to security policies, databases are so complex and the threats evolve so rapidly that definition and setup tasks will comprise the bulk of work for this entire phase. Good documentation, and a method for tracking threats in relation to policies and remediation information, are critical for managing assessments.
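A rough sketch of what the policy definition and scan steps boil down to: compare collected settings against expected values, then prioritize and annotate the exceptions. The setting names, severities, and remediation text below are made up for illustration, not drawn from any real benchmark or vendor policy.

```python
# Illustrative policy rules; real policies come from vendor benchmarks,
# best-practice guides, and your own compliance requirements.
POLICY = [
    {"setting": "remote_login", "expected": "off", "severity": "critical",
     "remediation": "Disable remote login for administrative accounts."},
    {"setting": "audit_trail", "expected": "on", "severity": "high",
     "remediation": "Enable the audit trail to capture privileged activity."},
    {"setting": "default_password_changed", "expected": "yes", "severity": "critical",
     "remediation": "Change vendor default passwords."},
]

def assess(db_name, settings):
    """Compare collected settings against policy; return prioritized violations."""
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    violations = [
        {"database": db_name, **rule}
        for rule in POLICY
        if settings.get(rule["setting"]) != rule["expected"]
    ]
    return sorted(violations, key=lambda v: order[v["severity"]])
```

The prioritized, remediation-annotated output is what the Distribute Results step filters per audience and feeds into ticketing systems.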
Authorization and Access:
In this stage we determine how access control systems are configured, then collect permissions and password settings.
- Define Access Requirements: Obtain the list of databases whose access controls you need to assess. Discover how access controls are performed – which functions happen at the host or domain level, and how those services are linked to database permissions. Determine which password checks will be employed.
- Setup: For automated scans, acquire, install, and configure the tools, and obtain the host/database permissions needed to perform both manual and automated scans. Collect documented roles, groups, and service requirements for users of the database, for use in later analysis. Generate report templates for the stakeholders who will act on scan results. If password penetration testing or dictionary attacks for weak passwords will be used, select a dictionary.
- Scan: Scan database users, showing group and role memberships, then scan groups, roles, and service account membership for each database. Collect domain and host user account information and settings.
- Analyze & Report: Administrative roles need to be reviewed for separation of duties, both between administrative functions and between DBAs and IT administrators. Service accounts used by applications must be reviewed. User accounts need to be reviewed for group memberships and roles. Groups and roles must be reviewed to verify permissions are appropriate for business functions.
Database authorization and access control is the front line of defense for data privacy and integrity, as well as providing control over database functions. It’s also the most time-intensive of these tasks to check, as the process is multifaceted – needing to account not only for the settings inside the database, but also how those functions are supported by external host and domain level identity management services. This exercise is typically split between users of the database and administrators, as each has very different security considerations. Password testing can be time-consuming and, depending upon the method employed, may require additional database resources to avoid impacting production servers.
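For the password checks mentioned above, a dictionary test conceptually reduces to precomputing hashes of dictionary words and comparing them to stored values. The sketch below assumes unsalted SHA-256 purely for illustration; real database platforms use salted, vendor-specific hash formats, and production testing should run against a copy to avoid loading production servers.

```python
import hashlib

def find_weak_accounts(accounts, dictionary):
    """Flag accounts whose stored password hash matches a dictionary entry.

    Unsalted SHA-256 is an illustrative stand-in; actual platforms use
    salted, vendor-specific formats that require per-account computation.
    """
    # Precompute once, then each account lookup is a constant-time check.
    precomputed = {hashlib.sha256(word.encode()).hexdigest(): word
                   for word in dictionary}
    return [(user, precomputed[digest])
            for user, digest in accounts.items() if digest in precomputed]
```

Note the trade-off: precomputation makes large dictionaries cheap against unsalted hashes, which is exactly why salting matters – it forces the attacker (or auditor) to recompute per account.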
Posted at Tuesday 5th January 2010 3:30 pm
By Adrian Lane
I am no fan of “security through obscurity”. Peer review and open discourse on security have proven essential in the development of network protocols and cryptographic algorithms. Regardless, that does not mean I choose to disclose everything. I may disclose protocols and approach, but certain details I choose to withhold.
Case in point: if I were Twitter, and wanted to reduce account hijacking by ridding myself of weak passwords which can be easily guessed, I would not disclose my list of weak passwords to the user community. As noted by TechCrunch:
If you’re on Twitter, that means you registered an account with a password that isn’t terribly easy to guess. As you may know, Twitter prevents people from doing just that by indicating that certain passwords such as ‘password’ (cough cough) and ‘123456’ are too obvious to be picked. It just so happens that Twitter has hard-coded all banned passwords on the sign-up page. All you need to do to retrieve the full list of unwelcome passwords is take a look at the source code of that page. Do a simple search for ‘twttr.BANNED_PASSWORDS’ and voila, there they are, all 370 of them.
The common attack vector is to perform a dictionary attack on known accounts. A good dictionary is an important factor for success. It is much easier to create a good dictionary if you know for certain many common passwords will not be present. Making the list easy to discover makes it much easier for someone to tune their dictionary. I applaud Twitter for trying to improve passwords and thereby making them tougher to guess, but targeted attacks just got better as well. Because here’s a list of 370 passwords I don’t have to test.
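To make the point concrete, here is a sketch of how an attacker benefits: every banned password can be dropped from the guess list, because no account can possibly use it. The BANNED set below is a tiny illustrative stand-in for the actual 370-entry list exposed via twttr.BANNED_PASSWORDS in the sign-up page source.

```python
# Abbreviated stand-in for the 370 banned passwords in the page source.
BANNED = {"password", "123456", "qwerty", "letmein", "abc123"}

def prune_dictionary(candidate_guesses, banned):
    """Drop guesses the sign-up form would have rejected. No account can
    have them, so testing them is wasted effort for the attacker."""
    return [guess for guess in candidate_guesses if guess not in banned]
```

The pruned dictionary is strictly more efficient per guess attempted, which is precisely the tuning advantage the published list hands over.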
Posted at Monday 4th January 2010 11:59 pm
(11) Comments •
By Mike Rothman
What are you announcing?
Today, we are announcing that Mike Rothman is joining Securosis as Analyst/President (Rich remains Analyst/CEO). This is a full merger of Securosis and Security Incite.
Why is this a good move for Securosis?
Not to sound trite, but bringing on Mike is a no-brainer. This immediately and significantly broadens Securosis’ coverage and positions us to grow materially in ways we couldn’t do without another great analyst. There are very few people out there with Mike’s experience as an independent analyst and entrepreneur. Mike proved he could thrive as a one-man operation (his jump to eIQ wasn’t a financial necessity), completely shares our values, and brings an incredible range of experience to the table.
Those who read our blog and free research reports gain additional content in areas we simply couldn’t cover. Mike will be leading our network and endpoint security coverage, as well as bringing over the Pragmatic CSO (sorry, you still have to pay for it) and the Daily Incite (which we’re restructuring a bit, as you’ll see later in this FAQ). Given Rich and Adrian’s coverage overlap, adding Mike nearly doubles our coverage… with our contributors (David Mortman, Dave Meier, and Chris Pepper) rounding us out even more. Mike is also a “high producer”, which means we’ll deliver even more free content to the community.
Our existing clients now gain access to an additional analyst, and Mike’s clients now gain access to all of the Securosis resources and people. Aside from covering different technical areas, Mike brings “in the trenches” strategy, marketing, and business analysis experience that neither Rich nor Adrian have, as they specialize more on the tech side.
In terms of the company, this also allows us to finally execute on the vision we first started building 18 months ago (Securosis has been around longer, but that’s when we came up with our long-term vision). As we’ll discuss in a second, we have some big plans for new products, and we honestly couldn’t achieve our goals without someone of Mike’s experience.
Why is this a good move for Security Incite and Mike Rothman?
Mike digs a lot deeper into his perspectives in a POPE (People, Opportunity, Product, Exit) analysis, but basically there was a limitation in the impact Mike could have and what he could do as a solo practitioner. Finding kindred spirits in Rich and Adrian enables us to build the next great IT research firm. This, in turn, is a prime opportunity to build products targeting a grossly underserved market (mid-market security and IT professionals), while continuing to give back to the community by publishing free research.
This allows Mike to get back to his roots as a network security analyst and enables Securosis to provide full and broad coverage of all security and compliance topics, which benefits both end user and vendor clients. But mostly it’s as Rich said: a great opportunity to work with great guys and build something great.
What is the research philosophy of Securosis? Will that change now that Mike Rothman is part of the team?
Securosis’ core operating philosophy is Totally Transparent Research. That says it all. Bringing Mike to the team doesn’t change a thing. In fact, he wouldn’t have it any other way. As Mike has produced (as a META analyst) and bought (as a vendor) “mostly opaque” research from the big research shops, he certainly understands the limitations of that approach and knows there is a better way.
Who is your target customer?
Securosis will target mid-market security and IT professionals. These folks have perhaps the worst job in IT. They have most of the same problems as larger enterprises, but far fewer resources and less funding. Helping these folks ensure and accelerate the success of their projects is our core objective for new information products and syndicated research offerings in 2010.
Will all the research remain free and available on the Securosis blog?
Yes, all of the Securosis primary research will continue to be published on the blog. Our research may be packaged up and available in document form from our sponsors, but the core research will always appear first on the blog. This is a critical leg of the Totally Transparent Research model. Our community picks apart our research and makes it better. That makes the end result more actionable and more effective.
What kind of information products are you going to produce?
We’re not ready to announce our product strategy quite yet, but suffice it to say we’ll have a family of products designed to accelerate security and compliance project success. The entry price will be modest and participating in a web-based community will be a key part of the customer experience.
What about the existing retainer clients of Securosis? How will they be supported?
Securosis will continue to support existing retainer customers. We’ve rolled out a new set of retainer packages for clients interested in an ongoing relationship. All our analysts participate in supporting our retainer clients.
What’s going to happen to the Daily Incite?
The Daily Incite is becoming the Securosis Incite and will continue to provide hard-hitting and hopefully entertaining commentary on the happenings in the security industry. Now we have 6 contributors to add their own “Incite” to the mix.
We are also supplementing the Incite with other structured weekly blog posts including the “Securosis FireStarter,” which will spur discussion and challenge the status quo. We’ll continue producing the Securosis Weekly Summary to keep everyone up to date on what we’ve been up to each week.
What about the Pragmatic CSO?
The Pragmatic CSO is alive and well. You can still buy the book on the website and that isn’t changing. You may have noticed many of the research models Securosis has rolled out over the past year are “Pragmatic” in both name and nature. That’s not an accident. Taking a pragmatic approach is central to our philosophy of security and the Pragmatic CSO is the centerpiece of that endeavor.
So you can expect lots more Pragmatism from Securosis over the coming years.
Posted at Monday 4th January 2010 3:01 pm
(2) Comments •
By Adrian Lane
Technology start-ups are unique organisms that affect employees very differently than other types of companies. Tech start-ups are about bringing new ideas to market. They are about change, and often founded on an alternative perspective of how to conduct business. They are more likely to leverage new technologies, hire unique people, and try different approaches to marketing, sales, and solving business problems. People who work at start-ups put more of themselves into their jobs, work a little harder, and are more impassioned about achievement and success. The entire frenetic experience is accelerated to the point where you compress years into months, providing an intimate level of participation not available at larger firms – the experience is addictive.
When technology start-ups don’t succeed (the most common case), they take a lot out of their people. Failures result in layoffs or shutdown, and go from decision to unfortunate conclusion overnight. The technology and products employees have been pouring themselves into typically vanish. That’s when you start thinking about what went right and what went wrong, what worked and what didn’t. You think about what you would do differently next time. That process ultimately ends with some pent-up ideas and frustrations which – if you let them eat at you – eventually drive you back into the technology start-up arena. It took me 12 years and 5 start-ups to figure out that I was on a merry-go-round without end, unless I made the choice to step off and be comfortable with my decision. It took significant personal change to accept that no matter how good the vision, judgement, execution, and assembled team were, success was far from guaranteed.
Where am I going with this? As you have probably read by now, 18 months ago Rich Mogull, Mike Rothman, and I planned a new IT research firm. Within a few weeks we got the bad news: Mike was going to join a small security technology company and get back on the merry-go-round. From talking with Mike, I knew he had to join them, for all the reasons I mentioned above. I could see it in his face, and in the same position I would have done exactly the same thing. Sure, Securosis is a technology start-up as well, but it’s different. While I was hopeful Mike would be back in 24 months, I could not know for certain.
If you are a follower of the Securosis blog, you have witnessed the new site launch in early 2009 and seen our project work evolve dramatically. Much of this was part of the original vision. We kept most of our original plans, jettisoned a few, streamlined others, and moved forward. We found some of our ideas just did not work that well, and others required more resources. We have worked continuously to sharpen our vision of who we are and why we are different, but we have a ways to go.
I can say both Rich and I are ecstatic to have Mike formally join the team. It’s not change in my mind, but rather empowerment. Mike brings skills neither of us possesses, and a renewed determination that will help us execute on our initial vision. We will be able to tackle larger projects, cover more technologies, and offer more services. Plus I am looking forward to working with Mike on a daily basis!
This is a pretty big day for us here, and I thought it appropriate to share some of the thoughts, planning, and emotions behind this announcement.
Posted at Monday 4th January 2010 3:00 pm
(0) Comments •
By Rich
I’m incredibly excited to finally announce that as of today, Mike Rothman is joining Securosis. This is a full merger of Security Incite and Securosis, and something I’ve been looking forward to for years.
Back when I started the Securosis blog over 3 years ago I was still an analyst at Gartner and was interested in participating more with the open security community. A year later I decided to leave Gartner and the blog became my employer. I wasn’t certain exactly what I wanted to do, and was restricted a bit due to my non-compete, but I quickly learned that I was able to support myself and my family as an independent voice. Mike was running Incite at the time, and seeing him succeed helped calm some of my fears about jumping out of a stable, enjoyable job. Mike also gave me some crucial advice that was incredibly helpful as I set myself up.
One of my main goals in leaving Gartner was to gain the freedom to both participate more with, and give back to, the security community. Gartner was great, but the nature of its business model prevents analysts from giving away their content to non-clients, and restricts some of their participation in the greater community. It’s also hard to perform certain kinds of primary research, especially longer-term projects. Since I had a non-compete, I sort of needed to give everything away for free anyway.
Things were running well, but I was also limited in how much I could cover or produce on my own. I may have published more written words than any other security analyst out there (between papers and blog posts), but it was still a self-limiting situation. Then about 18 months ago Adrian joined and turned my solo operation into an actual analyst firm. At the same time Mike and I realized we shared a common vision for where we’d like to take the research and analysis game, and started setting up to combine operations. We even had a nifty company name and were working on the nitty-gritty details.
When we had our very first conversation about teaming up, Mike told me there was only one person he’d work for again, but there wasn’t anything on the radar. Then, of course, he got the call right before we wrote up the final paperwork. We both saw this as a delay, not an end, and the time is finally here.
This is exciting to me for multiple reasons. First, we now gain an experienced analyst who has been through the wringer with one of the major firms (Meta), thrived as an independent analyst, and fought it out on the mean streets of vendor-land. There aren’t many great analysts out there – and even fewer with Mike’s drive, productivity, experience, and vision. This also enables us to create the kind of challenging research environment I’ve missed since leaving Gartner. With Mike and our Contributors (David Mortman, David Meier, and Chris Pepper) we now have a team of six highly-opinionated and experienced individuals ready to challenge and push each other in ways simply not possible with only 2-3 people.
Mike also shares my core values. Everything we write is for the end user, no matter the actual target audience. We should always give away as much as possible for free. We should conduct real primary research, as opposed to merely commenting on the world around us. Everything we produce should be pragmatic and help someone get their job done better and faster. Our research should be as objective and unbiased as possible, and we’ll use transparency and our no-BS approach as enforcement mechanisms. Finally, we’re lifers in the security industry – this is a lifestyle business, not a get-rich-quick scheme.
This is also an amazing opportunity to work closely with one of the people I respect most in our industry. Someone I’ve become close friends with since first meeting on the rubber-chicken circuit.
In our updated About section and the Merger FAQ, there’s a lot of talk about all the new things this enables us to do, and the additional value for our supporters and paying clients. But to me the important part is I get to work with someone I like and respect. Someone I know will push me like few others out there. Someone who shares my vision, and is fun to work with.
The only bad part is the commute. It’s going to be a real bi%^& to fly Mike out to Phoenix for Happy Hour every week.
Posted at Monday 4th January 2010 2:59 pm
(4) Comments •
By Rich, David J. Meier, David Mortman
It’s easy to say that every year’s been a big year, but in our case we’ve got the goods to back it up. Aside from doubling the size of the Securosis team, I added a new member to my family and managed to still keep things running. With all our writing and speaking we managed to hit every corner of the industry. We created a new model for patch management, started our Pragmatic series of presentations, popped off a few major whitepapers on application and data security, launched a new design for the site, played a big role in pushing out the 2.0 version of the Cloud Security Alliance Guidance, and… well, a lot of stuff. And I won’t mention certain words I used at the RSA Conference (where we started our annual Disaster Recovery Breakfast), or certain wardrobe failures at Defcon. On the personal front, aside from starting my journey as a father, I met Jimmy Buffett, finally recovered enough from my shoulder surgery to start martial arts again, knocked off a half-marathon and a bunch of 10K races, spent 5 days in Puerto Vallarta with my wife, and installed solar in our home (just in time for a week of cloudy weather).
It’s been a pretty great year.
I’ve never been a fan of predictions, so I thought it might instead be nice to collect some lessons learned from the Securosis team, with a peek at what we’re watching for 2010.
The biggest change for me over the last year has been my transformation from CTO to analyst. I love the breadth of security technologies I get to work with in this role. I see so much more of the industry as a whole, and it has totally changed my perspective. I have a better appreciation for the challenges end users face – even more than I did as a CTO – as I see them across multiple companies. This comes at the expense of some enthusiasm, the essence of which is captured in the post Technology vs. Practicality I wrote back in July.
Moving forward, the ‘Cloud’, however you choose to define it, is here. Informally looking at software downloads, security product services, and a few other security-related activities over the last 30 days, I see ‘s3.amazon.com’ or similar in half the URLs I access. This tidal wave has only just begun. With it, I am seeing a renewed awareness of security by IT admins and developers. I am hearing a collective “Hey, wait a minute, if all my stuff is out there…”, and with it come all the security questions that should have been posed back when data and servers were all on-premise. This upheaval is going to make 2010 a fun year in security.
2009 for me wasn’t a whole lot different than the past couple of years from a consultative role. Although I probably pushed the hardest I ever have this year to build security in as architecture (not as an afterthought), I still quite often found myself in a remediation role. Things are changing – slowly. The enterprise (large and mid-size) is very aware of risk, but still seems to be motivated only in areas where risk is directly tied to monetary penalties (i.e., PCI and the government / defense side). I hope next year brings better balance and foresight in this regard.
As for 2010, I’m going to agree with Adrian in reference to the ‘Cloud’ and its undeniable momentum. But it will still be an interesting year of pushing these services to their limits and finding out where they don’t hold water. Mid to late 2009 showed me some examples of cloud services being pulled back in-house and the use case considerably reengineered. 2010 is going to be a good year for an oft-quiet topic: secure network architecture – especially with regard to services utilizing the ‘Cloud’. The design and operation of these hybrid networks is going to become more prevalent as network and transport security are continually hammered on for weaknesses. I’m sure it’s safe to say we’ll see a few cloudbursts along the way.
My research moved in a bit of a different direction than I expected this year. Actually, two different directions. Project Quant really changed some of my views on security metrics, and I’m now approaching metrics problems from a different perspective. I’ve come to believe that we need to spend more time on operational security metrics than the management and risk metrics we’ve mostly focused on. Operational metrics are a far more powerful tool to improve our efficiency and effectiveness, and communicate these to non-security professionals. If after decades we’re still struggling with patch management, it seems long past time to focus on the basics and stop chasing whatever is sexy at the moment. I’ve also started paying a lot more attention to the practical implications of cognitive science, psychology, and economics. Understanding why people make the decisions they do, and how these individual decisions play out on a collective scale (economics) are, I believe, the most important factors when designing and implementing security.
I learned that we shouldn’t assume everyone has the basics down, and that if we understand how and why people make the decisions they do, we can design far more effective security. On the side, I also learned a lot about skepticism and logical fallacies, which has heavily influenced how I conduct my research. Our security is a heck of a lot better when it’s mixed with a little science.
In 2010 I plan to focus more on building our industry up. I’d like to become more involved in information-sharing exercises and improving the quality of our metrics, especially those around breaches and fraud. Also, like Hoff and Adam, I’m here if Howard Schmidt and our government call – I’d love to contribute more to our national (and international) cybersecurity efforts if they’re willing to have me. We need to stop complaining and start helping. I’ve been fortunate to have a few opportunities to work with the .gov crowd, and I hope to have more now that we have someone I know and trust in a position of influence.
This year I learned a lot about database security (thanks, Adrian) and more about DLP too (building on what I had previously learned here). I picked up quite a bit about cloud security (thanks, Rich & CSA), but I’m still not sure how much you can really secure keys and data on VMs in someone else’s physical control & possession. So I guess Securosis is serving its purpose – it was founded primarily to educate me, right?
Sadly, it hasn’t been a good year for our federal government. The long-empty cyber-czar post (and the improved but still inadequate job definition) is clearly the responsibility of the Obama administration. So are 2009’s many failures around health-care and banking reform, and the TSA’s ongoing efforts to prevent Americans from travelling and to keep foreigners away – most recently by assaulting Peter Watts and with their magical belief that passengers who don’t move their legs or use their hands are safer than people who are allowed to read and use bathrooms.
This year, I learned a lot about the differences between risk management in theory and risk management in reality. In particular, I came to the conclusion that risk management wasn’t about predicting the future but rather about obtaining a more informed opinion on the present state of being of your organization. I also learned a lot more about my writing style and how to be a better analyst.
In 2010, I plan on continuing to focus on outcomes rather than controls, and trying to figure out how to help organizations do so while simultaneously dealing with a controls-focused compliance program. It should be interesting, to say the least. I’m also looking forward to other companies releasing reports along the lines of what Verizon has done this year and in 2008. In particular, there should be some interesting things happening in January. Can’t wait to get my hands on that data.
We hope you have a great new year, and don’t forget to check back on Monday, January 4th – we have some big announcements, and 2010 is shaping up to be a heck of a year.
—Rich, David J. Meier, David Mortman
Posted at Thursday 31st December 2009 6:24 am
(0) Comments •
By Adrian Lane
I just noticed this story in my feed reader from before Christmas. I don’t know why I found the Computerworld story on the Massachusetts inmate ‘hacker’ so funny, but I do. Perhaps it is because I envision the prosecutor struggling to come up with a punishable crime. In fact I am not totally sure what law Janosko violated. An additional 18-month sentence for ‘abusing’ a computer provided by the correctional facility… I was unaware such a law existed. Does the state now have to report the breach?
In 2006, Janosko managed to circumvent computer controls and use the machine to send e-mail and cull data on more than 1,100 Plymouth County prison employees. He gained access to sensitive information such as their dates of birth, Social Security Numbers, telephone numbers, home addresses and employment records.
That’s pretty good, as terminals – especially those without USB or other forms of external storage – can require a lot of manual work to hack. I bet the prosecutors had to think long and hard about how to charge Janosko. I don’t exactly know what ‘abusing’ a computer means, unless of course you do something like the scene from Office Space where they exact some revenge on a printer. He pleaded guilty to “one count of damaging a protected computer”, but I am not sure how they quantified damages here, as it seems improbable a dumb terminal or the associated server could be damaged by bypassing the application interface. Worst case, you reboot the server. Maybe this is some form of “unintended use”, or the computer equivalent of ripping off mattress tags. If I were in his shoes, I would have claimed it was ‘research’!
Posted at Wednesday 30th December 2009 6:06 pm
(0) Comments •
By Rich
Fall of 2009 marks the 20th anniversary of the start of my professional security career. That was the first day someone stuck a yellow shirt on my back and sent me into a crowd of drunk college football fans at the University of Colorado (later famous for its student riots). I’m pretty sure someone screwed up, since it was my first day on the job and I was assigned a rover position – which normally goes to someone who knows what the f&%$ they are doing, not some 18-year-old, 135-lb kid right out of high school. And yes, I was breaking up fights on my first day (the stadium wasn’t dry until a few years later).
If you asked me then, I never would have guessed I’d spend the next couple decades working through the security ranks, eventually letting my teenage geek/hacker side take over. Over that time I’ve come to rely on the following guiding principles in everything from designing my personal security to giving advice to clients:
- Don’t expect human behavior to change. Ever.
- You cannot survive with defense alone.
- Not all threats are equal, and all checklists are wrong.
- You cannot eliminate all vulnerabilities.
- You will be breached.
There’s a positive side to each of these negative principles:
- Design security controls that account for human behavior. Study cognitive science and practical psychology to support your decisions. This is also critical for gaining support for security initiatives, not just design of individual controls.
- Engage in intelligence and counter-threat operations to the best of your ability. Once an attack has started, your first line of security has already failed.
- Use checklists to remember the simple stuff, but any real security must be designed using a risk-based approach. As a corollary, you can’t implement risk-based security if you don’t really understand the risks; and most people don’t understand the risks. Be the expert.
- Adopt anti-exploitation wherever possible. Vulnerability-driven security is always behind the threat.
- React faster and better. Incident response is more important than any other single security control.
With one final piece of advice – keep it simple and pragmatic.
And after 20 years, that’s all I’ve got…
Posted at Wednesday 30th December 2009 4:38 pm
(7) Comments •
By Adrian Lane
An interesting discussion popped up on Slashdot this Saturday afternoon about Preventing My Hosting Provider From Rooting My Server. ‘hacker’ is claiming that when he accuses his hosting provider of service interruption, they assume root access on his machines without permission.
“I have a heavily-hit public server (web, mail, cvs/svn/git, dns, etc.) that runs a few dozen OSS project websites, as well as my own personal sites (gallery, blog, etc.). From time to time, the server has ‘unexpected’ outages, which I’ve determined to be the result of hardware, network and other issues on behalf of the provider. I run a lot of monitoring and logging on the server-side, so I see and graph every single bit and byte in and out of the server and applications, so I know it’s not the OS itself. When I file ‘WTF?’-style support tickets to the provider through their web-based ticketing system, I often get the response of: ‘Please provide us with the root password to your server so we can analyze your logs for the cause of the outage.’ Moments ago, there were three simultaneous outages while I was logged into the server working on some projects. Server-side, everything was fine. They asked me for the root password, which I flatly denied (as I always do), and then they rooted the server anyway, bringing it down and poking around through my logs. This is at least the third time they’ve done this without my approval or consent. Is it possible to create a minimal Linux boot that will allow me to reboot the server remotely, come back up with basic networking and ssh, and then from there, allow me to log in and mount the other application and data partitions under dm-crypt/loop-aes and friends?”
Ignoring for a moment the basic problem of requesting assistance while not wishing to provide access, how do you protect your servers from remote hosting administrators? If someone else has physical access to your machine – even if your machine is virtual – a skilled attacker will gain access to your data regardless. It’s not clear if the physical machine is owned by ‘hacker’ or if it is just leased server capacity, but it seems to me that if you want to keep remote administrators of average skill from rooting your server and then rummaging around in your files, disk encryption would be an effective choice. You still have the issue of needing to supply credentials remotely upon reboot, but this would be effective in protecting log data. If you need better security, place the server under your physical control – otherwise all bets are off.
Posted at Sunday 27th December 2009 3:00 am
(6) Comments •
By Adrian Lane
This is going to be a pretty short summary. If you noticed, we were a little light on content this week, due to out-of-town travel for client engagements and in-town client meetings. On a personal note, early this week I had a front tire blow out on my car, throwing me airborne and backwards across four lanes of traffic during the afternoon commute. A driver who witnessed the spectacle said it looked like pole vaulting with cars, and could not figure out how I landed on the wheels, backwards or not. Somehow I did not hit anything and walked away unscathed, but truth be told, I am a little shaken up by the experience. Thank you to those of you who sent well wishes – everything is fine here.
On a more positive note we are gearing up for several exciting events in the new year. New business offerings, a bunch of new stuff on Quant for databases, and a few other surprises as well. But all of this is a lot of work, and it is all going on while we are attending to family matters, so we have decided that this is the last Friday summary of the year. We will have more posts during the holidays, but the frequency will be down until the new year.
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Project Quant for Databases:
Favorite Outside Posts
Top News and Posts
Honestly, most of us did not even open our feed readers this week. But one post was making the rounds:
Blog Comment of the Week
This week’s best comment comes from our own Jeremiah Grossman in response to Adrian’s post on Akamai Implements WAF:
Adrian, good post, some bits to consider…
One major reason I found this announcement very important is that many large website operators who utilize massive bandwidth simply cannot deploy WAFs for performance/manageability reasons. This is why WAFs are rarely found guarding major traffic points. Akamai is known specifically for their performance capabilities, so they may be able to scale up WAFs where the current industry has not.
Secondly, WAF rules will always leave some vulnerability gaps (hopefully fewer in the future), but complete coverage isn’t necessarily a must. The vast majority of vulnerabilities (by raw numbers) are syntax in nature (i.e., SQLi, XSS, etc.). By mitigating these (at least temporarily), organizations may prioritize the business logic flaws – the gaps in the WAF – for code fixes. This approach makes getting down to zero remotely exploitable bugs MUCH easier. We’ve experienced as much in our customer base.
“Rule sets are really hard to get right, and must be updated with the same frequency as your web site content. As you add new pages or functions, you are adding and updating rules.”
This implies the WAF is deployed in whitelist mode, which to my understanding is not how Akamai is going to go. ModSecurity Core Rules are blacklist style, so they would not require updates when content is changed. To be fair, the rules would have to be changed as attacks evolve, which may or may not be as fast as website/content code changes.
Posted at Friday 18th December 2009 6:31 am
(0) Comments •
By Adrian Lane
This is my MacBook sale progress report. For those of you who have not followed my tweets on the subject, I listed my MacBook for sale on Craigslist. After Bruce Schneier’s eye-opening and yet somehow humorous report on selling his laptop on eBay, I figured I would shoot for a face-to-face sale. I chose Craigslist in Phoenix and specified a cash-only sale. The results have been less than impressive. The first time I listed the laptop:
- Scammers: 6
- Phishers: 2
- Tire Kickers: 1
- Real Buyers: 0
The second time I listed the laptop:
- Scammers: 5
- Phishers: 4
- Pranksters: 1
- Tire Kickers: 1
- Real Buyers: 0
I consider them scammers, as the people who responded in all but one case wanted shipment to Africa. It was remarkably consistent. The remaining ‘buyer’ claimed to be in San Jose, but felt compelled to share some sob story about a relative with failing health in Africa. I figured that was a precursor to asking me to ship overseas. When I said I would be happy to deliver to their doorstep for cash, they never responded. The prankster wanted me to meet him in a very public place and assured me he would bring cash, but was just trying to get me to drive 30 miles away. I asked a half dozen times for a phone call to confirm, which stopped communications cold. I figure this is kind of like crank calling for the 21st century.
A few years ago I saw a presentation by eBay’s CISO, Dave Cullinane. He stated that on any given day, 10% of eBay users would take advantage of another eBay user if the opportunity presented itself, and about 2% were actively engaged in finding ways to defraud other eBay members. Given the vast number of global users eBay has, I think that is a pretty good sample size, and probably an accurate representation of human behavior. I would bet that when it comes to high dollar items that can be quickly exchanged for cash, the percentage of incidents rises dramatically. In my results, 55% of responses were active scams. I would love to know what percentages eBay sees with laptop sales. Is it the malicious 2% screwing around with over 50% of the laptop sales? I am making an assumption that it’s a small group of people engaged in this behavior, given the consistency of the pitches, and that my numbers on Craigslist are not that dissimilar from eBay’s.
A small group of people can totally screw up an entire market: the people I speak with are now donating stuff for the tax write-off rather than dealing with the detritus. Granted, it is easier for an individual to screen for fraudsters on Craigslist, but eBay seems to do a pretty good job. Regardless, at some point the hassle simply outweighs the couple hundred bucks you’d get from the sale. Safe shopping and happy holidays!
Posted at Tuesday 15th December 2009 11:08 pm
(1) Comments •
By Adrian Lane
Akamai announced that they are adding Web Application Firewall (WAF) capabilities to their distributed EdgePlatform network. I usually quote from the articles I reference, but there is simply too much posturing and fluffy marketing-ese about value propositions for me to extract an insightful fragment of information on what they are doing and why it is important, so I will paraphrase. In a nutshell, they have ported ModSecurity onto the Akamai Edge Server, and they are using the Core Rule Set as the basis of their policy set. As content is pulled from the Akamai cache servers, each request is examined for XSS, SQL injection, response splitting, and other injection attacks, as well as some error conditions indicative of tampering.
Do I think this is a huge advancement in security? Not really. At least not at the outset. But I think it’s a good idea in the long run. Akamai edge servers are widely used by large commercial vendors and content providers, who are principal targets for many XSS attacks. In essence you are distributing Web Application Firewall rules, and enforcing them as requests are made for the distributed/cached content. The ModSecurity policy set has been around for a long time and will provide basic protections, but it leaves quite a gap in meaningful coverage. Don’t get me wrong – the rule set covers many of the common attacks and has proven effective. However, the value of a WAF lies in the quality of the rule set, and in how appropriate those rules are to the specific web application. Rule sets are really hard to get right, and must be updated with the same frequency as your web site content. As you add new pages or functions, you are adding and updating rules.
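To make the blacklist idea concrete, here is a minimal sketch of the kind of syntax-attack inspection described above. The patterns and function names are my own illustration, not Akamai’s implementation or the actual ModSecurity Core Rules, which are far more extensive and carefully tuned against false positives.

```python
import re

# Illustrative blacklist-style patterns in the spirit of the ModSecurity
# Core Rules: they match common attack *syntax* rather than whitelisting
# known-good input, so they need updating when attacks evolve, not when
# site content changes.
BLACKLIST_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # classic SQL injection
    re.compile(r"(?i)<script\b"),              # reflected XSS attempt
    re.compile(r"(?i)%0d%0a|\r\n"),            # HTTP response splitting
]

def inspect_request(query_string: str) -> bool:
    """Return True if the request looks like a syntax attack and should
    be blocked at the edge, before it ever reaches the origin server."""
    return any(p.search(query_string) for p in BLACKLIST_PATTERNS)
```

Running this at the edge is what offloads the analysis from your servers: `inspect_request("id=1 UNION SELECT password FROM users")` trips the SQLi pattern, while a benign `"id=42"` passes untouched. It also shows the coverage gap – a business logic flaw produces perfectly legal-looking syntax and sails straight through.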
I think the announcement is important, though, because I believe it marks the beginning of a trend. We hear far too many complaints about WAFs hindering applications, as well as about the expense of rule set development and maintenance. The capability is valuable, but the coverage needs to get better, management needs to be easier, and the costs need to come down. I believe this is a model we will see more of because:
- Security is embedded into the service. With so many ‘cloud’ and SaaS offerings on the market, most with nebulous benefits, it’s clear that those who use Akamai are covered against the basic attacks, and the analysis is done on the Akamai network, so your servers remain largely unburdened. Just as with outsourcing the processing overhead of anti-spam into the cloud, you are letting the cloud absorb the overhead of SQL injection detection. And as with anti-virus, it’s only going to catch a subset of the attacks.
- Commoditization of WAF service. Let’s face it: SaaS and cloud models are more efficient because you commoditize a resource and then leverage the capability across a much larger number of customers. WAF rules are hard to set up, so if I can leverage attack knowledge across hundreds or thousands of sites, the cost goes down. We are not quite there yet, but the possibility of relieving your organization of the need for these skills in-house is very attractive for the SME segment. SMEs are not really using Akamai edge servers, so what I am talking about is generic WAF in the cloud, but the model fits really well with outsourced and managed service offerings. Specific, tailored WAF rules will be the add-on service for those who choose not to build defenses into the web application or maintain their own WAF.
- The knowledge that Akamai can gather and return to WAF & web security vendors provides invaluable analysis on emerging attacks. The statistics, trend data, and metrics they have access to offer security researchers a wealth of information – which can be leveraged to thwart specific attacks and augment firewall rules.
So this first baby step is not all that exciting, but I think it’s a logical progression for WAF service in the cloud, and one we will see a lot more of.
Posted at Tuesday 15th December 2009 8:46 pm
(10) Comments •