Securosis

Research

Database Encryption, Part 6: Use Cases

Encrypting data within a database doesn’t always present a clear-cut value proposition. Many of the features and functions of database encryption are also available through external tools, creating confusion as to why (or even whether) database encryption is needed. In many cases, past implementations have left DBAs and IT staff with fears of degraded performance and broken applications – creating legitimate wariness the moment some security manager mentions encryption. Finally, there is often a blanket assumption that database encryption disrupts business processes and mandates costly changes to applications (which isn’t necessarily the case). To make good database encryption decisions, you’ll first need to drill down into the details of what threats you want to address and how your data is used. Going back to our decision tree from Part 2, look at the two basic options for database encryption, as well as the value of each variation, and apply that to your situation to see what you need. Only then can you make an educated decision on which database encryption option best suits your situation – if you even need it at all. The following use cases illustrate where and how problems are addressed with database encryption, and walk you through the decision-making process.

Use Case 1: Real Data, Virtual Database

Company B is a telephony provider with several million customers, and services user accounts through their web site. The company is considering virtualizing their server environment to reduce maintenance costs, adapt to fluctuations in peak usage, and provide more options for disaster recovery. The database is used directly by customers through a web application portal, as well as by customer support representatives through a customer care application; it is periodically updated by the billing department through weekend batch jobs.
Company B is worried that if virtual images of the database are exported to other sites within the company or to partner sites, those images could be copied and restored outside the company’s environment and control. The principal threat they are worried about is off-site inspection of data or tampering with the virtual images. As secondary goals they would like to keep key management simple, avoid introducing additional complexity into the disaster recovery process, and avoid an increased burden for day-to-day database management. In this scenario, a variant of transparent encryption would be appropriate. Since the threat is non-database users accessing data by examining backups or virtual images, transparent encryption protects against viewing or altering data through the OS, file system, or image recovery tools. Which variant to choose – external or internal – depends on how the customer would like to deploy the database. The deciding factors in this case are twofold: Company B wants separation of duties between the OS administrative user and the database users, and in the virtualized environment the availability of disk encryption cannot be ensured. Native database encryption is the best fit for them: it inherently protects data from non-credentialed users, and removes any reliance on the underlying OS or hardware. Further, the additional computational overhead of encryption can be mitigated by allocating more virtual resources. While the data would not be retrievable simply by examining the media, a determined attacker in control of the virtual machine images could launch many copies of the database and take an indefinite period to guess DBA passwords in order to obtain the decryption keys stored within the database. Using current techniques, however, this isn’t a significant risk (assuming no one uses default or easy-to-guess passwords).
Regardless, native transparent encryption is a cost-effective method to address the company’s primary concerns without interfering with IT operations.

Use Case 2: Near Miss

Company A is a very large technology vendor, concerned about the loss of sensitive company information. During an investigation of missing test equipment from one of their QA labs, a scan of public auction sites revealed that not only had their stolen equipment recently been auctioned off, but several servers from the lab were actively listed for sale. With the help of law enforcement they discovered and arrested the responsible employee, but that was just the beginning of their concern. As the quality assurance teams habitually restored production data, provided to them by DBAs and IT admins, onto test servers to improve the realism of their test scenarios, a forensic investigation showed that most of their customer data was on the QA servers up for auction. The data in this case was not leaked to the public, but the executive team was shocked to learn they had very narrowly avoided a major data breach, and decided to take proactive steps against sensitive data escaping the company. Company A has a standing policy regarding the use of sensitive information, but understands the difficulty of enforcing this policy across the entire organization, indefinitely. The misuse of the data was not malicious – the QA staff were working to improve the quality of their simulations, indirectly benefiting end users by projecting demand – but had the data been leaked, this fine distinction would be irrelevant. To help secure data at rest in the event of accidental or intentional disregard for data security policy, the management team has decided to encrypt sensitive content within these databases. The question becomes which option is appropriate: user-based or transparent encryption.
The primary goal here is to protect data at rest; the secondary goal is to provide some protection against misuse by internal users. In this particular case, the company decided to use user-based encryption with key management internal to the database. Encrypted tables protect against data breach should servers, backup tapes, or disks leave the company; they also address the concern of internal groups importing and using data in non-secured databases. At the time this analysis took place, the customer’s databases were older versions that did not support separation of roles for database admin accounts. Further, the databases were installed under domain administration accounts – providing full access to both application developers and IT personnel; this access is integral to the


Social Security Number Code Cracked

An interesting news item on how Social Security numbers can be guessed with surprising accuracy made this morning’s paper. Researchers say they can determine much of someone’s Social Security number from birth date and location. Hopefully this will shine yet another spotlight on our over-reliance on Social Security numbers as a method of identification. From the San Jose Mercury News: For people born after 1988 – when the government began issuing numbers at birth – the researchers were able to identify, in a single attempt, the first five Social Security digits for 44 percent of individuals. And they got all nine digits for 8.5 percent of those people in fewer than 1,000 attempts. … The predictability of the numbers increases the risk of identity theft, which cost Americans almost $50 billion in 2007 alone, Acquisti said. That is fairly accurate, all things considered. When researchers Alessandro Acquisti and Ralph Gross make their research public, we will – as with most efforts of this type – see the research community at large improve on the methodology and accuracy of the results. And in the long run, who says the ‘guesser’ only gets one try? What made me crack up in this report was the Social Security Administration’s Mark Lassiter’s response that “… there is no foolproof method for predicting a person’s Social Security number,” and his statement that “The public should not be alarmed …”. Identity thieves and criminals don’t need 100% accuracy; a few million legitimate numbers ought to be sufficient.
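To put those percentages in perspective, here is the back-of-envelope arithmetic. The hit rates are from the article; the million-record list is a hypothetical for illustration:

```python
# Back-of-envelope yield for an attacker using the published hit rates:
# 44% of post-1988 SSNs had their first five digits guessed in one try,
# and 8.5% had all nine digits recovered in under 1,000 attempts.
population = 1_000_000          # records in a hypothetical breached list
five_digit_rate = 0.44
full_ssn_rate = 0.085

first_five_hits = int(population * five_digit_rate)
full_ssn_hits = int(population * full_ssn_rate)

print(f"First five digits recovered: {first_five_hits:,}")   # 440,000
print(f"Complete SSNs recovered:     {full_ssn_hits:,}")     # 85,000
```

Even at these rates, a single modest breach of birth records yields tens of thousands of complete, legitimate numbers.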


Database Security: The Other First Steps

I was going through my feed reader this morning when I ran across this post on Dark Reading about your first three steps for database security. As these are supposed to be first steps, not only did the suggestions strike me as the wrong places to start, the article also offered a method I would not employ. I believe there is a better way to proceed, so I offer you my alternative set of recommendations. The biggest issue I had with the article was not that these steps fail to improve security, or that the tools are wrong for the job, but that the path these steps take you down is the wrong one. Theoretically it’s a good idea to understand the scope of the database security challenge when starting, but that is infeasible in practice. Databases are large, complex applications, and starting with a grand plan for how to deal with all of them is a great way to grind the process to a halt and require multiple restarts when your plan breaks apart. The article advises you to start by cataloging every single database instance, and then try to catalog all of the sensitive data in those databases. This is the security equivalent of a ‘Cartesian product’ in a database select statement. And just as with database queries, it results in an enormous, unwieldy amount of data. You can labor through the result and determine what to protect, but not how. At Securosis we’re all about simplifying security, and I am personally an advocate of the ‘divide and conquer’ methodology. Start small. Pick the one or two critical databases in your organization, and start there. Your database administrator knows which database is the critical one. Heck, even your CFO knows which one it is: it’s that giant SAP/Oracle system in the corner that he is still pissed off he had to sign the $10 million requisition for. Now, here are the basic steps: Patch your databases to address most known security issues.
It is highly recommended that you test each patch prior to operational deployment. Next, configure your databases. Consult the vendor’s security recommendations. You will need to balance these suggestions against operational consistency (i.e., don’t break your applications). There are also third-party security practitioners who offer advice on their blogs for free, and free assessment tools that will help a lot. Get rid of the default passwords, remove unneeded user accounts, and make sure that nothing (users, web connections, stored procedures, modules, etc.) is available to ‘public’. Consider this an educational exercise that provides a base understanding of what needs to be addressed and how best to proceed. At this point you should be ready to a) document exactly what your ‘corporate configuration policies’ are, and b) develop a tiered plan of action to tackle databases in descending order of priority. Keep in mind that these are just a fraction of the preventative security controls you might employ, and they do not address active security measures or forensic analysis. You are still a ways off from employing more intermediate and advanced security stuff … like Database Activity Monitoring, auditing, and Data Loss Prevention.
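As a minimal sketch of the default-password step above, an audit could look something like this. The account inventory and the default-credential table here are hypothetical placeholders, not any vendor’s actual list; a real check would query the database itself:

```python
# Illustrative sketch (not a product): flag database accounts that still
# carry well-known vendor default credentials.
KNOWN_DEFAULTS = {
    ("scott", "tiger"),      # classic Oracle demo account
    ("sa", ""),              # SQL Server sysadmin with a blank password
    ("system", "manager"),   # old Oracle default
}

def audit_accounts(accounts):
    """Return the accounts matching a known default username/password pair."""
    return [(user, pw) for (user, pw) in accounts
            if (user.lower(), pw) in KNOWN_DEFAULTS]

# Hypothetical inventory pulled from a server under review:
inventory = [("scott", "tiger"), ("app_user", "s3cr3t!"), ("sa", "")]
flagged = audit_accounts(inventory)
print(flagged)  # [('scott', 'tiger'), ('sa', '')]
```

The free assessment tools mentioned above do essentially this, with much larger credential tables and live database connections.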


Securosis: On Holiday

As it’s the middle of summer, it’s freakin’ hot here. Rich and I have been cranking away like crazy since RSA on a couple different projects and are in need of a break. Now it’s time for a little R&R, so like you, we’re going on a mini summer break. That means no Friday Summary this week. We’ll be back around the 7th, and return to normal Friday posts on the 10th. Until then, enjoy yourself over the July 4th holiday (even if you’re not in the U.S.)! If you haven’t yet taken the Project Quant survey, go ahead and stop by SurveyMonkey on your way out for the long weekend.


Cracking a 200 Year Old Cipher

I have a half dozen books on Thomas Jefferson’s life, but this is a pretty cool story I had never heard before. The Wall Street Journal this morning has a story about Professor Robert Patterson, who had developed what appears to be a reasonably advanced cipher, and sent an enciphered message to President Jefferson in 1801. He provided Jefferson with the message, the cipher, and hints as to how it worked, but it is assumed that Jefferson was never able to decrypt the message. The message was only recently decrypted by Dr. Lawren Smithline, a 36-year-old mathematician who works at the Center for Communications Research in Princeton, N.J., a division of the Institute for Defense Analyses. The key to the code consisted of a series of two-digit pairs. The first digit indicated the line number within a section, while the second was the number of letters added to the beginning of that row. For instance, if the key was 58, 71, 33, that meant that Mr. Patterson moved row five to the first line of a section and added eight random letters; then moved row seven to the second line and added one letter; and then moved row three to the third line and added three random letters. Mr. Patterson estimated that the potential combinations to solve the puzzle were “upwards of ninety millions of millions.” After about a week of working on the puzzle, the numerical key to Mr. Patterson’s cipher emerged – 13, 34, 57, 65, 22, 78, 49. Using that key, Smithline was able to unfurl the cipher’s text: “In Congress, July Fourth, one thousand seven hundred and seventy six. A declaration by the Representatives of the United States of America in Congress assembled. When in the course of human events…” I am not sure why I am fascinated by this discovery. Perhaps it’s a bit like discovering hidden treasure.
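As described, the scheme is a row transposition with random padding, and it round-trips cleanly once you know the key. Here is a toy reconstruction of that description (my own function names, simplified to a single section – not Smithline’s actual method of recovering the key):

```python
import random
import string

def encipher_section(rows, key):
    """Patterson-style transposition as the article describes it: key is a
    list of (row_number, pad_length) pairs, 1-indexed. Output line i holds
    original row `row_number` with `pad_length` random letters prepended."""
    out = []
    for row_num, pad in key:
        junk = "".join(random.choices(string.ascii_lowercase, k=pad))
        out.append(junk + rows[row_num - 1])
    return out

def decipher_section(lines, key):
    """Invert the transposition: strip each pad and restore the row order."""
    rows = [None] * len(key)
    for line, (row_num, pad) in zip(lines, key):
        rows[row_num - 1] = line[pad:]
    return rows

section = ["incongress", "julyfourth", "onethousand"]
key = [(2, 3), (3, 1), (1, 2)]          # i.e., the two-digit pairs 23, 31, 12
scrambled = encipher_section(section, key)
assert decipher_section(scrambled, key) == section
```

The hard part, of course, was not applying the key but recovering it from ciphertext alone, which is what took two centuries.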


Three Database Roles: Programmer, DBA, Architect

When I interview database candidates, I want to assess their skills in three different areas: how well they can set up and maintain a database, how well they can program against a database, and how well they can design database systems. These coincide with the three roles I would typically hire: database administrator, database programmer, and database architect. Even though I am hiring for just one of these roles, and I don’t expect any single candidate to be fully proficient in all three areas, I do want to understand the breadth of their exposure. It is an indicator of how much empathy they will have for their team members when working on database projects, and how well they understand the sometimes competing challenges each faces. While there will always be some overlap, the divisions of responsibility break down as follows:

Database administrator – Installs, configures, and manages the database installation. This includes access control, provisioning, and patch management. Typically provides analysis of resource usage and performance.

Database architect – Selects and designs the platforms, and designs or approves schema. It’s the architect’s responsibility to understand how data is used, processed, and stored within the database. They typically select which database platform is appropriate, and make judgment calls on whether or not to use partitioning, replication, and other advanced features to support database applications.

Database programmer – Responsible for coding queries and use of the database infrastructure, including selection of data types and table design.

We talk a lot about database security on this blog, but we should probably spend more time talking about the people who affect database security. In my experience database programmers are the least knowledgeable about the database, but have the greatest impact on database security and performance.
I have been seeing a disturbing trend of development teams, especially web application programmers, who perform every function in the application and regard the database as a bucket where they dump stuff to save application state. This is reflected in the common choice of smaller, lighter databases that provide less functionality, and in the use of abstraction techniques that clean up the object model but lose native functions that benefit performance, data integrity, and security. Worse, they really don’t care about the details of how it works, as long as their database connection driver is reasonably reliable and the queries are easy to write. This is important, especially as it pertains to database security, because you need to view security from all three perspectives and leverage these other practitioners’ skills within the organization. If you have the luxury of being able to employ all three disciplines, then by all means have them cooperate in the development, deployment, and maintenance of database security. Your architect is going to know where the critical data is and how it moves through the system. Your DBA is going to understand how the databases are configured and which operations would be best moved into the database. If you are not already doing it, I highly recommend that you have your DBAs and architects sanity-check developer schema designs, review any application code that uses the database, and support the development team in access control planning and data processing. It’s hard to willingly submit code for review, but it’s better to fix it prior to deployment than after.


Database Encryption, Part 5: Key Management

This is Part 5 of our Database Encryption Series. Part 1, Part 2, Part 3, Part 4, and the supporting posts on Database vs. Application Encryption and Database Encryption: Fact or Fiction are online. I think key management scares people. Application developers, IT managers, and database administrators all know that effective key management support for encryption is critical, but it remains scary for most practitioners. Despite the incredible mathematical complexity behind the ciphers and the finesse required to implement them securely, practitioners don’t have to understand the gears and cogs inside the machine. Encryption is provided as libraries or fully functional services, so you send out clear text and you get back encrypted data – easy. Key management worries people because if you don’t get the key management piece right, the whole system fails, and you are the responsible party. To illustrate what I mean, I want to share a couple stories about developers and IT practitioners who manage these systems. Building database applications from scratch, developers have access to good crypto libraries, but generally little understanding of key management practices and few key management resources. The application developers I know took great pride in securing database fields through encryption, but when I asked them how they stored the key, the answer was usually “in the properties file”. That meant the key was stored on disk, unencrypted, in a directory readable by anyone who could access the application. When I pressed the point, I was assured that the key ‘needed’ to be there, otherwise the application would not be able to retrieve it and would fail to restart. I have even had developers tell me this is a “chicken vs. egg” conundrum: if you encrypt the key you cannot access it, therefore a key must be kept in clear text somewhere.
I kid you not, with my last employer (who, by the way, developed security products), this was the reason the ‘senior’ programmer implemented key management this way, and why he didn’t see a problem with it. The argument always ends the same: a key as a tangible object is fine, but obfuscated and hidden is not. The unspoken reason is something every programmer knows: code has bugs, and a key management bug could be devastating and unrecoverable. On the IT side, the administrators I know have a different, equally frightening, set of problems with key management. Every IT manager I have spoken with has one or more of these questions: What happens if/when I lose keys? How do I back keys up securely? How do I replicate keys across multiple key servers for redundancy? I have 1,000 users reliant on public key cryptography, so how do I share these keys among all those users? If I expire and rotate keys, do I lose access to data archives? If I try to recover data from a tape, how do I get the right key? If I am using specialized key management hardware, how do I recover from fire or other disasters? These risks are on the minds of IT professionals every day. Lose your key, lose your data. Lose your data, lose your job. And that scares the heck out of people! Our goal in this section is to discuss key management options for database encryption. This introduction is meant to underscore the state of key management services today, and help illustrate why key management products are deployed the way they are. Yes, you can download excellent encryption tools for free, mix and match best-of-breed features, and develop your own operational key management process that is application agnostic, but this approach is becoming a rarity. And that’s a good thing, because key management needs to be performed by people who know what they are doing. Centralized, automated, embedded, pre-packaged, and available as a complete service is the common choice.
This removes the complexity and the responsibility of management, and much of the worry about the reliability of your developers and IT administrators. Make no mistake, this trade-off comes at a price. Let’s dig into some key management practices for databases and how they are used.

Internally Managed

For database encryption, we define “internal key management” as key services within the database, provided by the database vendor. All of the relational database management platforms provide encryption packages, and included in these packages are key management functions. Typical services include key creation, storage, retrieval, and security; most systems can handle both symmetric and public key encryption options. Use of the keys can be handled by proxy, as with the transparent encryption options, or through direct API calls to the database package. The keys are stored within the database, usually within a special table that resides in the administrative database or schema. Each vendor’s approach to securing keys varies significantly. Some vendors rely upon simple access controls tied to administrative accounts, some encrypt individual keys with a single master key, while others do not allow any direct access to keys at all, and perform all key functions within a proxied service. If you are using the transparent encryption options provided by the database vendor (see Part 2: Selection Process Overview for terminology definitions), all key operations are performed on the users’ behalf. For example, when a query for data within an encrypted column is made, the database performs the typical authorization checks and, when they succeed, automatically decrypts the data for the user. Neither the user nor the application needs to be aware that the data is encrypted, make a specific request to decrypt, or supply a decryption key to the database. It is all handled on their behalf, and often performed without their knowledge.
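The master-key approach mentioned above – encrypting individual keys under a single master key – can be sketched roughly as follows. This is purely illustrative, not production cryptography: the SHA-256-based ‘keystream’ stands in for a real algorithm such as AES key wrap, and all function names are my own:

```python
import hashlib
import secrets

def _keystream(master_key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256; a stand-in for a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(master_key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap_key(master_key: bytes, data_key: bytes):
    """Persist only the wrapped form of a per-column data key."""
    nonce = secrets.token_bytes(16)
    ks = _keystream(master_key, nonce, len(data_key))
    return nonce, bytes(a ^ b for a, b in zip(data_key, ks))

def unwrap_key(master_key: bytes, nonce: bytes, wrapped: bytes) -> bytes:
    """Recover the data key; requires the master key held by the key service."""
    ks = _keystream(master_key, nonce, len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, ks))

master = secrets.token_bytes(32)      # held only by the key-management service
column_key = secrets.token_bytes(32)  # per-column data key
nonce, wrapped = wrap_key(master, column_key)
assert unwrap_key(master, nonce, wrapped) == column_key
assert wrapped != column_key          # only the wrapped form is stored in the key table
```

The point of the structure is that the key table in the administrative schema never holds a usable key; everything hinges on protecting the one master key.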
Transparent key management’s greatest asset is just that: its transparency. The storage, management, security, sharing, and backup of the keys are handled by the database. With internally managed encryption keys, there is not a lot for the application to do, or even care about, since all


Database Patches, Ad Nauseam

When I lived in the Bay Area, each spring we had the same news story repeat. Like clockwork, every year, year after year, and often from the same reporter. The story was the huge, looming danger of forest or grass fires. And the basis for the story was either that rainfall totals were above normal and had created lots of fuel, or that below-average rainfall had dried everything out. For Northern California, there really are no other outcomes. They were essentially saying you’re screwed no matter what. And no one on the editorial staff considered this contradiction, because there it was, every spring, and I guess they had nothing else all that interesting to report. I am reminded of this every time I read posts about how Oracle databases remain unpatched for one, or *gasp* two whole patch cycles. Every few months I read this story, and every few months I shake my head. Sure, as a security practitioner I know it’s important to patch, and bad things may happen if I don’t. But any DBA who has been around for more than a couple years has gone through the experience of applying a patch and watching the database crash hard. Then you get to spend the next 24-48 sleepless hours rolling back the patches, restoring the data, and trying to get the entire system working again. And it only cost you a few days of your time, a few thousand lost hours of employee productivity, and professional ridicule. Try telling a database admin how urgent it is to apply a security patch when they have gone through that personal hell! A dead database tells no tales, and patching it becomes a moot point. And yet the story every year is the same: you’re really in danger if you don’t patch your databases. But practitioners know they could be just as screwed if they do patch. Most don’t need tools to tell them how screwed they are – they know. Dead databases are a real, live (well, not so ‘live’), noisy threat, whereas hackers and data theft are considerably more abstract concepts.
DBAs and IT will demand that database patches, urgent or otherwise, be tested prior to deployment. That means a one- or two-cycle lag in most cases. If the company is really worried about security, they will implement DAM or firewalls – not because it is necessarily the right choice, but so they don’t have to change the patching cycles and increase the risk of IT instability. It’s not that we will never see a change in the patch process, but in all likelihood we will continue to see this story every year, year after year, ad nauseam.


SIEM, Today and Tomorrow

Last week, Mike Rothman of eIQ wrote a thoughtful piece on the struggles of the SIEM industry. He starts the post by saying the Security Information and Event Management space has struggled over the last decade because the platforms were too expensive, too hard to implement, and (paraphrasing) did not scale well without extracting a pound of flesh. All accurate points, but I think these items are secondary to the real issues that plagued the SIEM market. In my mind, SIEM’s struggles were twofold: fragmented offerings and disconnection from customer issues. It is clear that the data SIM, SEM, and log management vendors collected could be used to provide insight into many different security issues, compliance issues, data collection functions, and management functions – but each vendor covered only a subset. The fragmentation of this market – with some vendors doing one thing well but sucking at other important aspects, while claiming only their niche merited attention – was the primary reason the segment struggled. Vendors created a great deal of confusion through attempts to differentiate and get a leg up. Some did a good job at real-time analysis, some provided forensic analysis and compliance, and others excelled at log collection and management. They targeted security, they targeted compliance, they targeted criminal forensics, and they targeted systems management – but the customer need was always ‘all of the above’. Mike is dead on that the segment has struggled, and it’s the vendors’ own fault, due to piecemeal offerings that solved only a portion of the problems that needed solving. More attention was paid to competitive positioning than to actually solving customer problems. For example, the entire concept of aggregation (boiling all events down to a single lowest-common-denominator format) was ‘innovation’ for the benefit of the vendor platform, and a detriment to solving customer problems.
Sure, it reduced storage requirements and sped up reporting, but those were the vendors’ problems more than customer problems. The SIEM marketplace has gotten beyond this point, and it is no longer a segment struggling for an identity. The offerings have matured considerably in the last 3-4 years, and gone is the distinction between SIM, SEM, and log management. Now you offer all three or you don’t compete. While you still see some vendors pushing to differentiate one core value proposition over another, most vendors recognize the convergence as a requirement, as evidenced by this excellent article from Dominique Levin at LogLogic on the convergence of SIEM and log management, as well as this IANS interview with Chris Petersen of LogRhythm. The convergence is necessary if you are going to meet the requirements and customer expectations. While I was more interested in some of the problems SIEM has faced over the years, I have to acknowledge the point Mike was making in his post: the SIEM market is being hurt as platforms are oversold. Are vendors over-promising, per Dark Reading? You bet they are, but when have you met a successful software salesperson who didn’t oversell to some degree? A common example I used to see was sales teams claiming they offered DLP-equivalent value. While some of the vendors pay lip service to the ability to provide ‘deep content inspection’ and business analytics, we need to be clear that regular expression checks are not deep content analysis, and capturing network packets is a long way from providing transactional analysis for fraud detection or policy compliance. What gets oversold in any given week will vary, but any technology where the customer has limited understanding of the real day-to-day issues is a ripe target. Conversely, I find the customers I speak with are equally guilty, as they encourage the ‘overselling’ behavior.
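To make the regular-expression point concrete, here is a hypothetical ‘content inspection’ rule of the kind that gets oversold. It matches a pattern, but it has no notion of context, so it both false-positives and misses trivial reformatting:

```python
import re

# A typical "content inspection" rule: anything shaped like an SSN.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

samples = [
    "Customer SSN on file: 078-05-1120",      # genuine sensitive data
    "Part number 123-45-6789 shipped today",  # false positive: product code
    "SSN is 078 05 1120",                     # miss: spaces instead of dashes
]
hits = [bool(SSN_PATTERN.search(s)) for s in samples]
print(hits)  # [True, True, False]
```

Real content analysis has to understand what the surrounding data means, not just its shape; that is the gap between a regex engine and the ‘deep content inspection’ on the datasheet.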
SIEM platforms are at the point where they can collect just about every meaningful piece of event data within the enterprise, and they will continue to expand what is possible in analysis and applicability. Customers are not stupid – they see what is possible with the platforms, and push vendors as hard as they can to get what they want for less. Think about it this way: if you are a customer looking for tools to assist with PCI-DSS, and the platform cannot a) provide near-real-time analysis, b) provide forensic analysis, and c) safely protect its transaction archives, you move on to the next vendor who can. The first vendor who can (or successfully lies about it) wins. Salespeople are incentivized to win, and telling the customer what they want to hear is a proven strategy. So while they are not stupid, customers do make mistakes, and they need to perform their due diligence and challenge vendor claims – or hire someone who can do it for them – to avoid this problem. I am very interested to see how each vendor invests in technology advancement, and what they think the next important step in meeting business requirements will be. What I have seen so far indicates most will “cover more and do more”, meaning more platform coverage and more analysis, which is a safe choice. Similarly, most continue to offer more policies, reports, and configurations that speed up deployment and reduce setup costs. Some have the vision to ‘move up the stack’ and look at business processing; some will continue to push the potential of correlation; while others will provide meaningful content inspection of the data they already have. Given that there are a handful of leading vendors in this space on pretty even footing, which advancement they choose, and how they spin that value, can very quickly alter who leads and who follows.
The value proposition provided by SIEM today is clearer than at any time in the segment’s history, and perhaps more than anything else, SIEM platforms are being leveraged for multiple business requirements across multiple business units. That is why we are seeing SIEM expand despite the economic recession. Because many of the vendors are meeting revenue goals, we will see both new investments in the technology, and


Database Encryption: Fact vs. Fiction

A good friend of mine has, for many years, said “Don’t let the facts get in the way of a good story.” She has led a very interesting life and has thousands of funny anecdotes, but is known to embellish a bit. She always describes real-life events, but uses some imagination and injects a few spurious details to spice things up. Not false statements, but tweaked facts that make a more engaging story. Several of the comments on the blog regarding our series on Database Encryption, as well as some made during product briefings, fall into the latter category. Not completely false, but true only from a limited perspective, so I am calling them ‘fiction’. It’s ironic that I am working on a piece called “Truth, Lies, and Fiction in Encryption” that will be published later this summer or early fall. I am gathering a lot of good material for that project, but there are a couple of fictional claims I want to raise in this series to highlight some of the benefits, weaknesses, and practical realities that come into play with database encryption. One of the private comments made in response to Part 4: Credentialed User protection was: “Remember that in both cases (Re: general users and database administrators), encryption is worthless if an authorized user account itself is compromised.” I classify this as fiction because it is not totally correct. Why? I can compromise a database account, let’s say the account an application uses to connect to the database, but that does not mean I have the credentials to obtain the key and decrypt data. I have to compromise both the database account and the key/application user credentials. For example, when I create a key in Microsoft SQL Server, I protect that key with a password or encrypt it with a different key. MSDN shows the SQL Server calls. If someone compromises the database account “SAP_Conn_Pool_User” with the password “Password1”, they still have not obtained the decryption keys. 
You still need to supply a password as a parameter to the ‘EncryptByKey’ or ‘DecryptByKey’ commands. A hacker would need to guess that password or gain access to the key that encrypts the user’s key. And with connection pooling, many users’ keys are passed in the context of query operations, meaning the hacker must compromise several keys before finding the correct one. A DBA can gain access to this key if it is stored inside the database, and I believe can intercept it if the value is passed through the database to an external HSM via a database API (I say ‘believe’ because I have not personally written exploit code to do so). With the latest release of SQL Server you can segregate the DBA role to limit access to stored key data, but not eliminate it altogether. Another example: with IBM DB2, the user connection to the database is one set of credentials, while access to encryption keys uses a second set. To gain access you need both. Here is a reference for Encrypting Data Values in DB2 Universal Database. Where this statement is true is with Transparent Encryption, such as the various derivatives of Oracle Transparent Encryption. Once a database user is validated to the database, the user session is supplied with an encryption key, and encryption operations are automatically mapped to the issued queries, so the user automatically has access to the table that stores the key and does not need separate credentials. Transparent Encryption from other vendors is similar. You can use the API of the DBMS_Crypto package to provide this additional layer of protection, but as on the other platforms, you must break the implicit binding of database user to encryption key, and that means altering your application to some degree. As with SQL Server, an Oracle DBA may or may not be able to obtain keys, depending on whether the DBA role is segregated. 
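To make the SQL Server point concrete, here is a minimal T-SQL sketch of a symmetric key protected by its own passphrase rather than by the database master key. The key name, passphrase, table, and column names are all hypothetical, chosen for illustration only:

```sql
-- Hypothetical key protected by its own passphrase: compromising the
-- SAP_Conn_Pool_User login alone does not yield this decryption key.
CREATE SYMMETRIC KEY CustomerDataKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'Key-Passphrase-2009!';

-- The key must be explicitly opened with that passphrase before use;
-- the database login password is not sufficient.
OPEN SYMMETRIC KEY CustomerDataKey
    DECRYPTION BY PASSWORD = 'Key-Passphrase-2009!';

-- Encrypt and decrypt a (hypothetical) card number column.
UPDATE Customers
   SET CardNumberEnc = EncryptByKey(Key_GUID('CustomerDataKey'), CardNumber);

SELECT CONVERT(varchar(32), DecryptByKey(CardNumberEnc)) AS CardNumber
  FROM Customers;

CLOSE SYMMETRIC KEY CustomerDataKey;
```

An attacker who steals only the connection-pool credentials can run queries, but `DecryptByKey` returns NULL until the key is opened with its separate passphrase, which is exactly the second set of credentials described above.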
We have also received a comment on the blog stating that “encrypting data in any layer but the application layer leaves your data insecure.” Once again, a bit of fiction. If you view the problem as protecting data when database accounts have been compromised, then this is a true statement: encryption credentials held in the application layer remain safe. But applications provide application users the same type of transparency that Transparent Encryption provides database users, so a breached application account will likewise bypass the encryption credentials and expose some portion of the data stored in the database. Same problem, different layer.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.