Securosis

Research

Online Fraud Report: What Would You Want To See?

A buddy of mine from my days on the customer side contacted me recently. He's at a new company doing some very interesting work on detecting certain classes of online fraud and measuring the amount of malware on websites. So far he's gathered some fascinating data on just how bad the problem is, and I'm trying to convince him to start publishing some of his aggregate data in a quarterly or semi-annual report. He is very interested, but would love some community input on what the report should look like – which brings me to you. Some of what should be in such a report is obvious, such as the rate of detected fraud overall and by industry vertical. The rest isn't so clear, and that's where you all come in. Put on whatever hat you like – CISO, CFO, security researcher, risk officer, consumer, or anything else – and tell us what you would like to see in such a report. Are there things you hate about other reports? Things you wish they covered which they never do? Throw out your requests, rants, comments, ideas, and questions in the comments, and I'll collect them all together and summarize them in a future post. If this really takes off, I'll move it over to the forums.


Visa’s Data Field Encryption

I was reading Martin McKeay's blog this morning and saw his reference to Visa's Data Field Encryption white paper. Martin's point that Visa, rather than the PCI Council, is the author is a good one. Now that I've read the paper, I don't think Visa is putting it out as a sort of litmus test on behalf of the council; instead Visa is taking a stand on which technologies they want endorsed. And if that is the case, Rich's prediction that "Tokenization Will Become the Dominant Payment Transaction Architecture" will come true far faster than we anticipated. A few observations about the paper, which says:

"… data field encryption renders cardholder data useless to criminals in the event of a merchant data breach …"

"Use robust key management solutions …" – and Visa has developed best practices to assist merchants in evaluating the new encryption …

"Use an alternate account or transaction identifier for business processes that requires [sic] the primary account number …"

The recommendations could describe either tokenization or format preserving encryption, but it looks to me like they have tokenization in mind. By tokenization I mean the PAN and other sensitive data are fully encrypted at the POS, and the response back to the merchant is a token. I like the fact that the goals do not dictate technology choices, and are worded in such a way that they should not be obsolete within a year. But the document appears to have been rushed to publication. For example, goal #4 is to "protect the cryptographic operations within devices from physical or logical compromises". It's the cryptographic operations you want to protect; the device itself should be considered expendable, not sensitive to compromise. Similarly, goal #1 states: "Limit cleartext availability of cardholder data and sensitive authentication data to the point of encryption and the point of decryption." But where is the "point of encryption"? It's one thing if it's at the POS terminal, but in a web transaction environment does that mean it happens at the web server? At the browser? Somewhere else? Reading the document, it seems clear the focus is on POS rather than web-based transaction security, which looks like a big mistake to me. Martin already pointed out that the authors lumped encryption and hashing into a single domain, but that may have been deliberate, to make the document easier to read. If clarity was the goal, though, who thought "Data Field Encryption" was a good term? It does not describe what is being protected, and with tokenization, encryption is only part of the picture. If you are a web application or database developer, you will see why this phrase is really inappropriate. Make no mistake – Visa has put a stake in the ground, and it will be interesting to see how the PCI Council reacts.


Database Audit Events

I have attended a lot of database developer events and DBA forums around the country over the last 6 years. One benefit of attending lectures by database administrators, for database administrators, is the wealth of information on tools, tricks, and tips for managing databases – and not just simple administrative tasks, but clever ways to accomplish more complex ones. A lot of these tricks never seem to make it into the mainstream, instead remaining part of the DBA's exclusive repertoire. I wish I had kept better notes. And unfortunately I am not going to Oracle Open World, though I wanted to for this very reason. As part of a presentation I worked on a number of years ago at one of these events, I provided an overview of the common elements in audit logs, to show how to comb through them for events of interest. I have placed a catalog of audit events for several relational database platforms into the Database Security section of our research library. For those of you interested in "roll your own" database auditing, it may be useful. I have listed the auditable events for Sybase, Oracle, SQL Server, and DB2. I had a small shell script that would grab the events I was interested in from the audit trail, place them into a separate file, and then clean up the reviewed audit logs or event monitor resource space. What you choose to do with the data will vary. In my latest submission to Dark Reading, I referred to the essential auditable events most commonly required for regulatory and security efforts; these files list the specifics for each of those suggestions. If anyone in the community would like to contribute similar information for MySQL or even Postgres, I will add it to the library as well.
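As a rough illustration of the filtering pass described above, here is a minimal sketch in Python rather than shell. The event names and the pipe-delimited log format are hypothetical – each platform lays out its audit trail differently – so treat this as a template, not a working parser for any particular database.

```python
# Sketch: pull audit events of interest out of a text audit trail.
# The event names and pipe-delimited format are assumptions for
# illustration; adjust both to your platform's actual audit layout.
EVENTS_OF_INTEREST = {"LOGIN_FAILED", "GRANT", "REVOKE", "DROP_TABLE"}

def extract_events(lines, keep=EVENTS_OF_INTEREST):
    """Return the audit lines whose event type (first field) is of interest."""
    hits = []
    for line in lines:
        event = line.split("|", 1)[0].strip()
        if event in keep:
            hits.append(line.rstrip("\n"))
    return hits
```

Run something like this against the raw audit file, write the hits to a separate file for review, then rotate or purge the reviewed log, as described above.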


Friday Summary – October 2, 2009

I hate to admit it, but I have a bad habit of dropping administrative tasks and business development to focus on research. It's kind of like my programmer days – I loved coding, but hated debugging and documentation. Eventually I realize I haven't invoiced for a quarter, or forgot to tell prospects we have stuff they can pay for. Those are the nights I don't sleep very well. Thus I've spent a fair bit of time this week catching up on things. I still have more invoices to push out, and I spent a lot of time editing materials for our next papers and my contributions to the next version of the Cloud Security Alliance Guidance report. I even updated our retainer programs for users, vendors, and investors. Not that I've sent them to anyone – I sort of hate getting intrusive sales calls, so I assume I'm annoying someone if I mention they can pay me for stuff. Probably not the best trait for an entrepreneur. Thus I'm looking forward to a little downtime next week as my wife and I head off on vacation. It starts tonight at a black tie charity event at the Phoenix Zoo (the first time I'll be in a penguin suit in something like 10 years). Then, on Monday, we head to Puerto Vallarta for a 5-day vacation we won in a raffle at… the Phoenix Zoo. It's our first time away from the baby since we had her, so odds are that instead of hanging out at the beach or diving we'll be sleeping about 20 hours a day. We'll see how that goes. And with that, on to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Adrian starts a new series on database security over at Dark Reading with a post on SQL injection.
- Rich and Martin on the Network Security Podcast, Episode 168.

Favorite Securosis Posts
- Rich: Our intern kicks off his analyst career with a post on "realistic security".
- David Meier: IDM: It's A Process.
- David Mortman and Adrian: Rich's post on tokenization. And honestly, we did not place that strawman in the audience.

Other Securosis Posts
- SQL Injection Prevention
- Digital Ant Swarms
- Database Encryption Benchmarking

Favorite Outside Posts
- Adrian: On the Mozilla Security Blog: A Glimpse Into the Future of Browser Security. Cutting edge? I dunno, but interesting.
- Rich: Jack Daniel on the Massachusetts privacy law mess. This is why I never get excited about a coming law until it's been passed, there's an enforcement mechanism, and it's being enforced.
- Meier: Wireless Network Modded to See Through Walls – this brings a whole new level of fun to the Arduino platform.
- Mortman: Not about security, but come on – homemade ketchup!

Top News and Posts
- Slashdot links to a bunch of articles on the rise of cybercrime against business banking accounts (usually by compromising the company's computer and grabbing their online username/password). Much of the investigative reporting is being done by Brian Krebs at the Washington Post.
- Competing statistics on phishing. Odds are they're all wrong, but it's fun to watch.
- A judge orders deactivation of a Gmail account after a bank accidentally sends it confidential information. Yet another judge shows a complete lack of understanding of technology.
- Brian Krebs (again) with the story of how a money mule was recruited. I don't understand how this person could possibly believe it was legitimate work.
- Microsoft releases their free Security Essentials antivirus.
- New malware rewrites bank statements on the fly. This is pretty creative.
- BreakingPoint on Cisco being a weak link in national infrastructure security.
- Researchers break secure data storage system. Absolutely no one is surprised.
- Using BeEF for client exploitation via XSS.
- New NIST guidance on smart grid security.
- Wi-Fi security paint. But it just doesn't have the cachet of aluminum foil.
- Payroll firm breached.
- Does it really matter if we call it Enterprise UTM, or UTM, or Bunch-O-Security-Stuff in a Box? Seriously – cross $200M per year in revenue, and does anyone care? WTF?
- Bloggers cause the Wisconsin Tourism Federation to change its name. (Just because it's my home state. –Meier)

Blog Comment of the Week
This week's best comment comes from Slavik, in response to SQL Injection Prevention:

Hi Adrian, good stuff. I just wanted to point out that the fact that you use stored procedures (or packages) is not in itself protection against SQL injection. It's enough to briefly glance at the many examples on milw0rm to see how even Oracle, with their supplied built-in packages, can make mistakes and be vulnerable to SQL injections that allow an attacker to completely control the database. I agree that if you use only static queries then you're safe inside the procedure, but that does not make your web application safe (especially with databases that support multiple commands in the same call, like SQL Server batches). Of course, if you use dynamic queries, it's even worse. Unfortunately, there are times when dynamic queries are necessary, and they make the code very difficult to write securely. The most important advice regarding SQL injection I would give developers is to use bind variables (parameterized queries) in their applications. There are many frameworks out there that encourage such usage, and developers should utilize them.


SQL Injection Prevention

The team over at Dark Reading was kind enough to invite me to blog on their Database Security portal. This week I started a mini-series on threat detection and prevention by leveraging native database features, beginning with a post on using stored procedures to combat SQL injection attacks. Those posts are fairly short and written for a different audience, so here I will cross-post additional points and advanced content I left out of those articles. My goal was to demystify how stored procedures can help combat SQL injection. There are other options to detect and block SQL injection attacks, many of which have been in use, with limited success, for some time now. What can you do about SQL injection? You can patch your database to block known threats. You can buy firewalls to try to intercept rogue statements, but application and general network firewalls have shown only limited effectiveness: you need a very clear signature for the threat, as well as a carefully written policy that does not break your application. Many Database Activity Monitoring vendors can block queries before they reach the database. Early DAM versions detected SQL injection based on exact pattern matching that was easy for attackers to evade, back when DAM policy management could not accommodate business policy issues; this resulted in too many false negatives, too many false positives, and deadlocked applications. These platforms are now much better at policy management and enforcement. There are memory scanners to examine statement execution and parameters, as well as lexical and content analyzers that detect and block with fair success. Some employ a hybrid approach, using assessment to detect known vulnerabilities, with database/application monitoring to provide 'virtual patching' as a complement. I have witnessed many presentations at conferences over the last two years demonstrating how a SQL injection attack works.

Many vendors have also posted examples on their web sites showing how easy it is to compromise an unsecured database with SQL injection. At the end of the session, "how to fix" is left dangling; "buy our product and we will fix this problem for you" is often the implication. That may or may not be true, but you do not necessarily need a product to do this, and a bolt-on product is not always the best way – most are reactive and not 100% effective. As an application developer and database designer, I always took SQL injection attacks personally. The only reason a SQL injection attack could succeed was a flaw in my code, and probably a bad one. The applications I produced in the late 90s and early 2000s were immune to this form of attack (unless someone snuck an ad-hoc query into the code somewhere without validating the inputs) because of stored procedures. Some of you might note this was before SQL injection was fashionable, but as part of my testing efforts I adopted early forms of fuzzing scripts to do range testing and try everything possible to get the stored procedures to crash – binary inputs and obtuse 'where' clauses were two such variations. I used to write a lot of code in stored procedures and packages. And I used to curse and swear a lot, as packages (Oracle's version, anyway) are syntactically challenging. Demanding. Downright rigorous in enforcing data type requirements, making it very difficult to transition data to and from Java applications. But it was worth it. Stored procedures are incredibly effective at stopping SQL injection, though they can be a pain in the ass for more complex objects. From the programmer and DBA perspectives, they also give you tight control over the behavior of queries in your database.

And if you have ever had a junior programmer put a three-table cartesian product select statement into a production database, you understand why having only certified queries stored in your database as part of quality control is a very good thing (you don't need a botnet to DDoS a database – just an exuberant young programmer writing the query to end all queries). And don't get me started on the performance gains stored procedures offer, or this would be a five-page post. If you like waiting around for your next SQL injection 0-day patch, keep doing what you have been doing.
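To make the contrast concrete, here is a minimal sketch using Python and SQLite as a stand-in for whatever database and driver you actually use: the same lookup built by string concatenation versus a bind variable. The table and the payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "alice' OR '1'='1"  # classic injection string

# Vulnerable: concatenation lets the payload rewrite the WHERE clause,
# so the predicate becomes always-true and every row matches.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: the bind variable treats the payload as a literal value to
# compare against, so nothing matches.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()
```

The vulnerable query leaks the secret; the parameterized one returns no rows, because the attacker's quotes never reach the SQL parser as syntax.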


Tokenization Will Become the Dominant Payment Transaction Architecture

I realize I might be dating myself a bit, but to this day I still miss the short-lived video arcade culture of the 1980s. Aside from the excitement of playing on "big hardware" that far exceeded my Atari 2600 or C64 back home (both less powerful than the watch on my wrist today), I enjoyed the culture of lining up my quarters, or piling around someone hitting some ridiculous level of Tempest. One thing I didn't really like was the whole "token" thing. Rather than taking quarters, some arcades (pioneered by the likes of that other Big Mouse) issued tokens that would only work on their machines. On the upside you would occasionally get 5 tokens for a dollar, but overall it was frustrating as a kid. Years later I realized that tokens were a parental security control – worthless for anything other than playing games in that exact location, they kept the little ones from buying gobs of candy 2 heartbeats after a pile of quarters hit their hands. With the increasing focus on payment transaction security, due to the quantum-entangled forces of breaches and PCI, we are seeing a revitalization of tokenization as a security control. I believe it will become the dominant credit card transaction processing architecture until we finally dump our current plaintext, PAN-based system. I first encountered the idea a few years ago while talking with a top-tier retailer about database encryption. Rather than trying to encrypt all the credit card data in all their databases, they were exploring the possibility of concentrating the numbers in one master database, and replacing the card numbers with "tokens" in all the other systems. The master database would be highly hardened and encrypted, and would keep track of which token matched which credit card. Other systems would send the tokens to the master system for processing, which would then interface with the external transaction processing systems.

By swapping out all the card numbers, they could focus most of their security efforts on a single, tightly controlled system. Sure, someone might be able to hack the application logic of some server and kick off an illicit payment, but they'd have to crack the hardened master server to get card numbers for any widespread fraud. We've written about tokenization a little in other posts, and I have often recommended it directly to users, but I probably screwed up by not pushing the concept more widely. Tokenization solves far more problems than trying to encrypt in place, and while complex, it is still generally easier to implement than the alternatives. Well-designed tokens fit the structure of credit card numbers, which can mean fewer application changes in distributed systems. The assessment scope for PCI is reduced, since card numbers live in only one location, which can reduce associated costs. From a security standpoint, it lets you focus more effort on one hardened location. Tokenization also reduces data spillage, since far fewer locations use card numbers, and fewer business units need them for legitimate functions such as processing refunds (one of the main reasons to store card numbers in retail environments). Today alone we were briefed on two different commercial tokenization offerings – one from RSA and First Data Corp, the other from Voltage. The RSA/FDC product is a partnership in which RSA provides the encryption/tokenization technology FDC uses in their processing service, while Voltage offers tokenization as an option to their Format Preserving Encryption technology. (Voltage is also partnering with Heartland Payment Systems on the processing side, but that deal uses their encryption offering rather than tokenization.) There are some extremely interesting things you can do with tokenization.

For example, with the RSA/FDC offering, the card number is encrypted on collection at the point-of-sale terminal with the public key of the tokenization service, then sent to the tokenization server, which returns a token that still "resembles" a card number (it passes the LUHN check and might even include the same last 4 digits – the rest is random). The real card number is stored in a highly secured database up at the processor (FDC). The token is the value stored on the merchant side, and since it's paired with the real number on the processor side, it can still be used for refunds and similar operations. This particular implementation always requires the original card for new purchases, but only the token for anything else. Thus the real card number is never stored in the clear (or even encrypted) on the merchant side. There's really nothing to steal, which eliminates the possibility of a card number breach (according to the Data Breach Triangle). The processor (FDC) is still at risk, so they will need a different set of technologies to lock down and encrypt the plaintext numbers. The tokens still look like real card numbers, reducing retrofitting requirements for existing applications and databases, but they're useless for most forms of fraud. This implementation won't work for recurring payments and the like, which they'll handle differently. Over the past year or so I've become a firm believer that tokenization is the future of transaction processing – at least until the card companies get their act together and design a stronger system. Encryption is only a stop-gap for most organizations, and once you hit the point where you have to start making application changes anyway, go with tokenization. Even payment processors should be able to expand their use of tokenization, relying on encryption to cover the (few) tokenization databases which still need the PAN. Messing with your transaction systems, especially legacy databases and applications, is never easy.

But once you have to crack them open, it's hard to find a downside to tokenization.
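A quick sketch of the token format described above – a surrogate that passes the LUHN check and keeps the last 4 digits, with the rest random. This is my own illustrative construction, not RSA/FDC's actual algorithm; a real service would also guarantee token uniqueness and keep the PAN-to-token mapping in its hardened vault.

```python
import random

def luhn_checksum(number: str) -> int:
    """Standard LUHN: double every second digit from the right;
    valid numbers sum to 0 mod 10."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10

def make_token(pan: str) -> str:
    """Random surrogate keeping the PAN's length and last 4 digits,
    valid per LUHN."""
    tail = pan[-4:]
    middle = "".join(str(random.randint(0, 9)) for _ in range(len(pan) - 5))
    # One of the ten possible leading digits always makes the checksum
    # come out to zero, so this loop always returns.
    for first in "0123456789":
        candidate = first + middle + tail
        if luhn_checksum(candidate) == 0:
            return candidate
```

Because the result has the same length, passes the same checksum, and shares the last 4 digits, it slips through existing validation code unchanged – which is exactly why retrofitting costs stay low.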


Realistic Security

Finally, it's here: my first post! Although I doubt anyone has been holding their breath, I have had a much harder time than anticipated nailing down my first topic. This is probably due in part to the much larger and more focused audience at Securosis than I have ever written for in the past. That said, I'd like to thank Rich and Adrian for supporting me in this role, and I hope to bring a different perspective to Securosis with increasing frequency as I move forward. Last week a situation sparked a heated discussion with a colleague (I have a bad habit of forgetting that not everyone enjoys heated debate as much as I do). Actually, the argument only heated up when he claimed that vulnerability scanning and penetration testing aren't required to validate a security program. At this point I was thoroughly confused, because when I asked how he could measure the effectiveness of such a security program without those tools, he didn't have a response. Another bad habit: I prefer debating with someone who actually justifies their positions. My position is that if you can't measure or test the effectiveness of your security, you can't possibly have a functioning security program. For example, let's briefly use the Securosis "Building a Web Application Security Program" white paper as a reference. If I take the lifecycle outline (now please turn your PDFs to page 11, class), there's no possible way to fulfill the Secure Deployment step without using VA and pen testing to validate that our security controls are effective. Similarly, consider the current version of PCI DSS without any pen testing – again, you fail in multiple requirement areas. This is the point at which I start formulating a clearer perspective on why we see security failing so frequently in certain organizations.

I believe one of the major reasons we still see this disconnect is that many people have confused compliance, frameworks, and checklists with what's needed to keep their organizations secure. As a consultant, I see it all the time in my professional engagements. It's like taking the first-draft blueprints for a car, building said car, and assuming everything will work without any engineering, functional, or other tests. What's interesting is that our compliance requirements are evolving to reflect, and close, this disconnect. Here's my thought: year over year, compliance is becoming more challenging from a technical perspective. The days of paper-only compliance are now dead. Those who have already been slapped in the face with high-visibility breach incidents can probably attest (but never will) that policy said one thing and reality said another. After all, they were compliant – it can't be their fault that they were breached after they complied with the letter of the rules. Let's make a clear distinction between the ways security can be viewed from a high level (well, one that makes sense at least to me) by defining "paper security" versus "realistic security". From the perspective of the colleague I was talking with, all the controls and processes on paper would somehow magically roll over into the digital boundaries of the infrastructure as he defined them. The problem is: how can anyone write those measures if there isn't any inherent technology mapping during development of the policies? Likewise, how can anyone validate a measure's existence and continued validity without some level of testing? This is exactly the opposite of my definition of realistic security. Realistic security can only be created by mapping technology controls and policies together within the security program, and that's why we see both the technical and testing requirements growing in the various regulations.

To prove the point that technical requirements in compliance are only getting more well defined, I did some quick spot checking between PCI DSS 1.1 and 1.2.1. Take a quick look at a few of the technically specific items expanded in 1.2.1:

1.3.6 states: '…run a port scanner on all TCP ports with "syn reset" or "syn ack" bits set' – new as of 1.2.

6.5.10 states: "Failure to restrict URL access (Consistently enforce access control in presentation layer and business logic for all URLs.)" – new as of 1.2.

11.1.b states: "If a wireless IDS/IPS is implemented, verify the configuration will generate alerts to personnel" – new as of 1.2.

Anyone can see the changes between 1.1 and 1.2.1 are relatively minor. But think about how, as compliance matures, both its scope and specificity increase. This is why it seems obvious that technical requirements, as well as direct mappings to frameworks and models for security development, will continue to be added and expanded in future revisions of compliance regulations. This, my friends, is on the track of what "realistic security" means to me. It can succinctly be defined as a never-ending Test Driven Development (TDD) methodology applied to a security posture: if it is written in your policy, then you should be able to test and verify it; and if you can't, don't, or fail during testing, then you need to address it. Rinse, wash, and repeat. Can you honestly say those reams of printed policy are what you have in place today? C'mon – get real(istic).


Digital Ant Swarms

A friend of mine emailed yesterday, admonishing me for not writing about the Digital Ants concept discussed on DailyTech. I think it's because he wanted me to call B.S. on the story. It seems that some security researchers are trying to mimic the behavior of ants in computer defenses to thwart attackers. From the article: Security researchers found inspiration in the common ant. As Wake Forest University Professor of Computer Science Errin Fulp describes it, "In nature, we know that ants defend against threats very successfully. They can ramp up their defense rapidly, and then resume routine behavior quickly after an intruder has been stopped. We were trying to achieve that same framework in a computer system." WFU created digital "ants" – utilities that migrate from computer to computer over networks searching for threats. When one locates a threat, others congregate on it, using so-called "swarm intelligence". The approach allows human researchers to quickly identify and quarantine dangerous files by watching the activity of the ants. This seems like the nature-inspired reaction du jour. Many have written about the use of 'helpful viruses' and viral technologies (the cheese worm (PDF), the anti-porn worm, the wifi worm, etc.) to combat hostile computer worms and viruses. Helpful virus code finds exploits the same way a harmful virus would, but then patches the defect – curing the system instead of reproducing. But helpful viruses tend to become an attack vector themselves, or to 'fix' things in very unintended ways, compounding the problem. Ants behave very differently than viruses. Real ants fill a dual role, both gathering food and defending the nest; besides access controls, few security products can make that claim. Second, ants can detect threats. Software and systems are only marginally effective at this, even with different pieces operating (hopefully) as a coordinated unit. Finally, ants bite.

They have the ability to defend themselves individually, as well as to work effectively as a group. In either case they pose a strong deterrent to attack, something seldom seen in the digital domain. Conceptually I like the idea of being able to respond to a threat systemically, with different parts of the system reacting to different threats. This makes sense on the threat detection side as well, as many subtle attacks can only be identified from information gathered across different parts of the system. SEM/SIEM has slowly been advancing this science for some time now, and it is a core piece of the ADMP concept for web application security, where detection and prevention are systemic. It is not the idea of a swarm that makes this effective, but holistic detection combined with multiple, different reactions by systems that can provide a meaningful response. So I am not saying ant swarming behavior applied to computer security is B.S., but "ramping up responses" is not the real problem – detection and appropriate reactions are.


IDM: It’s A Process

IDM fascinates me, if only because it is such an important base for a good security program. Despite this, many organizations (even ones with cutting edge technology) haven’t really focused on solving the issues around managing users’ identity. This is, no doubt, in part due to the fact that IDM is hard in the real world. Businesses can have hundreds if not thousands of applications (GM purportedly had over 15,000 apps at one point) and each application itself can have hundreds or thousands of roles within it. Combine this with multiple methods of authentication and authorization, and you have a major problem on your hands which makes digging into the morass challenging to say the least. I also suspect IDM gets ignored because it does not warrant playing with fun toys, so as a result, doesn’t get appropriate attention from the technophiles. Don’t get me wrong – there are some great technologies out there to help solve the problem, but no matter what tools you have at your disposal, IDM is fundamentally not a technology problem but a process issue. I cannot possibly emphasize this enough. In the industry we love to say that security is about People, Process, and Technology. Well, IDM is pretty much all about Process, with People and Technology supporting it. Process is an area that many security folks have trouble with, perhaps due to lack of experience. This is why I generally recommend that security be part of designing the IDM processes, policies, and procedures – but that the actual day to day stuff be handled by the IT operations teams who have the experience and discipline to make it work properly. DS had a great comment on my last post, which is well worth reading in its entirety, but today there is one part I’d like to highlight because it nicely shows the general process that should be followed regardless of organization size: While certainly not exhaustive, the above simple facts can help build a closed loop process. 
When someone changes roles, IT gets notified how. A request is placed by a manager or employee to gain access to a system. If employee request, manager must(?) approve. If approved as “in job scope” by manager, system owner approves. IT (or system owner in decentralized case) provisions necessary access. Requester is notified. Five steps, not terribly complicated and easy to do, and essentially what happens when someone gets hired. For termination, all you really need are steps 1, 2, and 5 – but in reverse. This process can even work in large decentralized organizations, provided you can figure out (a) the notification/request process for access changes and (b) a work flow process for driving through the above cycle. (a) is where the Info Sec team has to get outside the IT department and talk to the business. This is huge. I’ve talked in the past about the need for IT to understand the business and IDM is a great example of why. This isn’t directly about business goals or profit/loss margins, but rather about understanding how the business operates on a day to day basis. Don’t assume that IT knows what applications are being used – in many organizations IT only provides the servers and sometimes only the servers for the basic infrastructure. So sit down with the various business units and find out what applications/services are being used and what process they are using today to provision users, who is handling that process, and what changes if any they’d like to see to the process. This is an opportunity to figure out which applications/services need to be part of your IDM initiative (this could be compliance, audit, corporate mandate etc.) and which ones currently aren’t relevant. It has the added benefit of discovering where data is flowing, which is key to not only compliance mandates under HIPAA, SOX, and the European Data Directive (to name a few), but also incredibly handy when electronic discovery is necessary. 
Once all this data has been gathered, you can evaluate the various technologies available and see if they can help. This could be anything from a web app to manage change requests, to workflow software (see below), to a full-scale automated access provisioning and de-provisioning system driven by the approval process. Once you’ve solved (a), (b) is comparatively straightforward and another place where technology can make life easier. The best part is that your organization likely has something like this deployed for other reasons, so the additional costs should be relatively low.

Once your company/department/university/etc. grows to a decent size and/or starts to decentralize, manually following the process becomes more and more cumbersome, especially as the number of supported applications goes up. A high rate of job role changes within the organization has a similar effect. So software that automatically notifies employees when they have tasks will greatly streamline the process and help people get the access they need much more quickly. Workflow software is also a great source of performance metrics, and can help provide the necessary logs when dealing with audit or compliance issues.

As I mentioned above, the business reality for many organizations is far from pristine or clear, so in my next post I’ll explore those issues in more depth. For now, suffice it to say that until you address them, the above process will work best for a small company with fewer apps/auth methods. If you are involved in a larger, more complex organization, all is not lost. In that case, I highly recommend that you not try to fix everything at once, but start with one group or sub-group within the organization and roll out there first. Once you’ve worked out the kinks, you can roll in more and more groups over time.
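As an illustration of the metrics point, here is one way provisioning turnaround could be computed from a workflow tool’s event log. The log format and function here are hypothetical – real workflow products expose this differently – but the idea is the same: the approval trail doubles as audit evidence and performance data:

```python
from datetime import datetime, timedelta

def provisioning_turnaround(events):
    """Average time from 'requested' to 'provisioned' per request.

    events: iterable of (request_id, step, timestamp) tuples pulled
    from a (hypothetical) workflow event log.
    """
    opened, completed = {}, {}
    for req_id, step, ts in events:
        if step == "requested":
            opened[req_id] = ts
        elif step == "provisioned":
            completed[req_id] = ts
    # Only count requests we saw open AND close.
    durations = [completed[r] - opened[r] for r in completed if r in opened]
    if not durations:
        return None
    return sum(durations, timedelta()) / len(durations)

log = [
    ("REQ-1", "requested",   datetime(2009, 9, 21, 9, 0)),
    ("REQ-1", "provisioned", datetime(2009, 9, 21, 17, 0)),  # 8 hours
    ("REQ-2", "requested",   datetime(2009, 9, 22, 9, 0)),
    ("REQ-2", "provisioned", datetime(2009, 9, 23, 9, 0)),   # 24 hours
]
print(provisioning_turnaround(log))  # → 16:00:00
```

A number like this is exactly what you can hand an auditor, or use to show the business that the process is (or isn’t) keeping up as more groups are rolled in.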


Friday Summary – September 25, 2009

I get some priceless email on occasion, and I thought this one was too good not to pass along. Today’s Friday Summary introduction is an anonymous guest post … if it missed any cliches I apologize in advance.

Internal Memorandum
To: Employees
From: Mega-Corp Executive Team
Subject: Messaging
Date: Friday, September 25th, 2009

Due to growing media scrutiny and misquotes in major trade publications (because you people can’t get the pitch right), we ask you to be cautious in dealing with the press and analysts. Please. All we ask is that our employees “stay on message.” We, Mega-Corp, Inc., pay (way too much) to have (young imbecilic twits at) large advertising agencies (put down their drugs and alcohol long enough to) work closely with our (also over-paid) CMO to carefully develop media strategies to effectively leverage our assets to increase consumer awareness and increase sales, thus generating a win-win situation for all. We cannot allow rogue employees to torpedo our finely crafted multi-media approach by deviating from the platforms that have been developed. This will ensure that all of our clients receive the same message. It is also designed to increase brand awareness by striking just the right balance between being edgy enough to appeal to the hipper, young crowd while at the same time conveying stability to our long-term clients. It is especially important at present, due to current market conditions. While our competitors are focused on their competitive disadvantages, we see this market as a great opportunity for growth. Our marketing campaign has thus placed its emphasis on all the added benefits that the new product line now offers. We will not allow iconoclastic employees to remain fixated on their complaints that we have increased prices.
We believe that the new price line is a bargain for the new features that we have added, and our marketing campaign is designed to educate our clients and potential clients about the new features that they will be receiving. It does not help when our branch-office employees continuously snipe that the clients do not want the new features. That is only true because we have failed to educate them sufficiently about the benefits of those features. We are confident that if we all work together, to remain on message and educate our clients and potential clients about what we have to offer, we can expect double-digit growth over the next eight quarters. In doing so, we trust that our competitors will not be able to catch up to our technological advances during that period of time, and thus we will not be forced to offer new products for the next 30 months. In fact, we are so confident of this that we have right-sized our R&D department, thus proving once again that we can be lean and mean. I also know that many of you have seen reports that suggest we plan further layoffs of up to 30%. We have no idea where the 30% figure came from and can say without equivocation that no such reduction is currently planned. Of course we will always remain nimble enough to provide reasonable value to our shareholders, but I know that all of you believe that is our most sacred obligation and would want nothing less. In conclusion, I cannot stress enough the importance of staying on message and delivering a united theme to the public as we continue to unveil our marketing strategy across so many different media platforms, including the new emphasis on social media. There can be no doubt that our innovative use of Twitter will in itself dramatically increase sales over the next several quarters. I look forward to seeing those who do remain with us next year when we gather via video-conferencing for the annual employee meeting.
Thank you for your attention in this matter (or you will be part of the next 30%),
Mega-Corp Executive Team
The Death Star, Moon of Endor, Outer Rim

And with that, on to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian and Rich at BusinessWeek with the Truth, Lies, and Fiction with Data Encryption video.
  • Rich and Martin on the Network Security Podcast, Episode 167.
  • Oh Gn0es … I am having reservations about posting this, but it is Rich Mogull and David Mortman on stage at Defcon 17 Security Jam 2. Rich may panic and delete this link later, so catch it while you can!

Favorite Securosis Posts

  • Rich: Cloud Data Security: Archive and Delete.
  • Adrian: My post on Database Encryption Benchmarking.
  • David Meier: So it’s not new, but I thought Building a Web Application Security Program was the best internal post this week.
  • David Mortman: Database Encryption Misconceptions.

Other Securosis Posts

  • A Bit on the State of Security Metrics
  • Stupid FUD: Weird Nominum Interview
  • Cloud Data Security: Share (Rough Cut)
  • FCC Wants ‘Open Internet’ Rules for Wireless
  • Incomplete Thought: Why Is Identity and Access Management Hard?
  • Cloud Data Security: Use (Rough Cut)

Favorite Outside Posts

  • Adrian and Rich: We have never picked the same post before, but What Star Trek Predicts About the Future of Information Security is that good. It’s not every day bloggers get to geek out to the point we bring up “gold-pressed latinum”, or make security decisions based upon subjective personality assessments of violin-playing androids with cats. Very clever post.
  • Meier: One word: ‘awesomesauce’.
  • Mortman: Has Technology Killed Privacy?

Top News and Posts

  • New BeEF
  • The best AES tutorial out there… in stick figures
  • Remote exploits available for SMB2 vulnerability
  • Microsoft workaround for the SMB vulnerability
  • Bank sends mail to wrong Gmail address, then sues Google. Right, I’m sure that will work. Maybe they’ll have better luck if they spill hot Google on their lap.
  • Great NAC white paper by Jennifer Jabbusch
  • NSS runs a good malware test
  • Free SSO for Google Apps?
  • 4 Dangerous Myths about Data Disposal, Debunked talking points
  • Why a hardware security model may not be as good as you think
  • Blam! It was like Patch Tuesday on a Wednesday for Cisco.
  • Another reason to hate lawyers
  • Netflix is smart. Very smart.

Blog Comment of the Week


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.