Securosis Research

Firestarter: An Irish Wake

We originally recorded this episode on St. Paddy’s Day and thought it would be nice to send off Windows XP with a proper Irish wake, but Google had a hiccup and our video was stuck in Never-Never Land for an extra day. To be honest, we thought we had lost it, so no complaints. But yes, the end is nigh, all your coffee shops are going to be hacked now that XP is unsupported, yadda yadda yadda…


Webinar Tomorrow: What Security Pros Need to Know About Cloud

Hey everyone, I mentioned it on Twitter but also wanted to post it here. Tomorrow I will be giving a webinar on What Security Pros Need to Know About Cloud, based on the white paper I recently released. CloudPassage is sponsoring the webinar but, as always, the content is our objective view. You can register online, and we hope to see you there…


Summary: DevOps Trippin’

Rich here,

As technology professionals we always place bets with our careers. There is no way to really know, for certain, which sets of skills will be most in demand down the road. Yet, as with financial investments, we only have so many resources (time and brain cells) to allocate at any given time. Invest too much too early and your nifty new skills won’t be in demand. Too late and you miss the best opportunities, and are stuck playing catch-up, if that’s even possible. Sometimes we make deliberate decisions, and sometimes we just sort of luck out.

This week I am excited to announce my involvement as an Advisory Board member of DevOps.com. It’s something I basically fell into when I mentioned to Alan Shimel, who founded it, that I was spending a ton of research time on DevOps and security. I never really intended to revert to my roots and start writing code and managing systems again – never mind realizing I was hooked into what may be one of the most important operational framework changes to hit IT in a long time.

For me it was a series of chained intellectual and professional challenges that self-organized into a logical progression. I would love to say I planned it, but really I mostly tripped into it.

It all started when Jim Reavis of the Cloud Security Alliance asked if I would be interested in building a training class for the CCSK exam. I said sure, but only if we could build some hands-on labs so security pros would learn how the cloud really works, and weren’t merely looking at architectural diagrams. I had launched some things in Amazon before, but I had never needed to create packaged, reproducible environments (the labs). Never mind ones that could hide complexity from students while still allowing them to create complete application stacks almost completely automatically. At the time I was solving problems to make labs and teach a few cloud security essentials.
In the process, I was learning the foundation of techniques that underlie many DevOps processes. Total. Blind. Luck. This was before DevOps was a hot term – I just worked from problem to problem to meet my own needs. Then I refined the labs. Then I decided to create some proof of concept demonstrations of Software Defined Security techniques – solving, in the process, some core DevOps problems that weren’t well documented anywhere. I wasn’t the first to hit the problem or come up with a solution, but no one else seemed to write it down, so I had to work my way through it from scratch.

Then I started hearing more about DevOps. As I dug in, I realized I was solving many of the same problems with many of the same tools. This is why I think DevOps is so important. I didn’t set out to “learn DevOps” – I set out to solve a set of practical implementation problems I was experiencing in the cloud, and in the process found myself smack in the middle of the DevOps ‘movement’ (whatever that is). Anyone who wants to operate in that environment needs the same basic skills, and any organization deploying applications into the cloud will find itself using the same techniques, to one degree or another. It is early days still, but I am not doubling down on cloud and DevOps because I think they are overhyped analyst fads. Spend some time in the trenches and you will realize there really isn’t any other way to get the job done, once you start down a certain road.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich quoted in Macworld UK on Apple security. To be honest, I think this is from an old blog post, but I’ll take it.
  • Dave Lewis on Apple TV password disclosure.

Favorite Securosis Posts
  • Adrian Lane: Firestarter: RSA Postmortem.
  • Mike Rothman: New Paper: Leveraging TI in Security Monitoring. Yeah, it’s my work. But there is a lot of noise about threat intelligence out there now, and much less about how to actually use it effectively. This paper looks at TI in terms of security monitoring.
  • Rich: New Paper: Leveraging TI in Security Monitoring. Threat Intelligence was all over RSA, and Mike has been working on this research for far longer than those marketing departments. He really nails it, bringing TI from buzzwords to actionable advice. Nice.

Other Securosis Posts
  • Incite 3/12/2014: Digging Out.
  • Advanced Endpoint and Server Protection: Quick Wins.
  • Advanced Endpoint and Server Protection: Detection/Investigation.

Favorite Outside Posts
  • Mike Rothman: The cost of doing business at the RSA Conference. Big money. Big money. No whammies. Check out these numbers and maybe you will understand why some companies opt for a suite at the W to do meetings.
  • Adrian Lane: To Wash It All Away. An epic rant that includes such gems as “For the uninitiated, Cascading Style Sheets are a cryptic language developed by the Freemasons to obscure the visual nature of reality and encourage people to depict things using ASCII art.” and “Here’s a life tip: when you’re confused about what something is, DON’T EXECUTE IT TO DISCOVER MORE CLUES!”
  • Rich: Does devops leave security out in the cold? I will be writing more on this in coming days, but I think it’s safe to say this article misses the target. I guarantee you security can effectively integrate with DevOps, but not using some of the techniques mentioned in this article. Security has to fully integrate into the process.
  • Gunnar Peterson: Ultimate Cheat Sheet for Dealing with Haters.

Research Reports and Presentations
  • Leveraging Threat Intelligence in Security Monitoring.
  • The Future of Security: The Trends and Technologies Transforming Security.
  • Security Analytics with Big Data.
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7.
  • Eliminate Surprises with Security Assurance and Testing.
  • What CISOs Need to Know about Cloud Computing.
  • Defending Against Application Denial of Service Attacks.
  • Executive Guide to Pragmatic Network Security Management.
  • Security Awareness Training Evolution.

Top News and Posts
  • Target Didn’t Follow Up After Hackers Tripped Its Security System. We still


Firestarter: RSA Postmortem

We are all rested and recovered from RSA (yeah, right) and it’s time to review the week and what we think. Did we mention security is back, baby?! That’s right – it is clear budgets are freed up, and the stink of desperation is fading. Here’s the video. The audio-only version is also up – we should be available for subscription in iTunes next week. Thanks, and see you next week…


Research Revisited: Off Topic: A Little Perspective

As I was crawling through the old archives for some posts, I found my very first reference to Mike here at Securosis. I timed this Revisited post to fire off when Mike’s post on joining Securosis goes live, and the title now seems to have more meaning.

Off Topic: A Little Perspective

This has nothing to do with security other than the fact that Mike Rothman is a security analyst. Sometimes it’s worth sitting back and evaluating why you’re in the race in the first place. It’s all too easy to get caught up in the insanity of day-to-day demands or the incredibly deceptive priorities of the corporate and government rat races. A few months ago I took a step back and decided to reduce travel, stay healthy, and start this blog. I wanted a more personal outlet for writing on topics, and in a style, that would be inappropriate at my day job (in other words, more fun). My challenge is running this site in a way that doesn’t create a conflict of interest with my employer, so I don’t publish anything here that I should be publishing there. Mike just went off and started his own company to support his real priorities. You should really read this.


Research Revisited: Apple, Security, and Trust

Update: After publishing this, I realized I should have taken more time editing, especially after Apple released their iOS Security paper this week. My intention was to refer to situations where, often due to attacks, vulnerabilities, or other events, Apple is pushed into responding. They can still struggle to balance the lines between what they want to say and what outsiders want to hear. They have very much improved communications with researchers and the media, and the level of security information they publish in the open. It is the crisis situations that knock things off kilter at times.

I am sometimes called an Apple apologist for frequently defending their security choices, but it wasn’t always that way. I first started writing about Apple security because those were the products I used, and I was worried Apple didn’t take security seriously. I was very personally invested in their choices, and there were a lot of reasons when I first posted this back in 2006 to think we were headed for disaster.

In retrospect, my post was both on and off target. I thought at the time that Apple needed to focus more on communications. But Apple, as always, chose their own path. They have improved communications significantly, but not nearly as much as someone like Microsoft. At the same time they tripled down on security. iOS is now one of the most secure platforms out there (yes, even despite the patch last week). OS X is also far more secure than it was, and Apple continues to invest in new security options for users.

I was right and I was wrong. Apple recognized, due to the massive popularity of iOS, that building customer trust was essential to maintaining a market lead. They acted on that with dramatic improvements in security. iOS has yet to suffer any major wide-scale exploitation. OS X added features like FileVault 2 (encryption for the masses) and Gatekeeper (wrecking malware markets). Apple most definitely sees security as essential to trust.
But they still struggle with communications. Not that I expect them to ever not act like Apple, but they are still feeling their way around the lines to find a level they are comfortable with culturally, which still avoids negative spin cycles like the one I talk about below.

This post originally appeared on October 18, 2006.

Apple, Security, and Trust

Before I delve into this topic I’d like to remind readers that I’m a Mac user and Apple fan. We are a 2-person, 2-Mac, 3-iPod, 2-Airport Express household, with another Mac in the plans this spring. By the same token I don’t think Microsoft is evil, and consider some of their products to be quite good. That said, I prefer OS X and have no plans to switch to Vista, although I’ll probably run it in a virtual machine on my Mac. What I’m about to say is in the nature of protecting, not attacking, one of my favorite vendors.

Apple faces a choice. Down one path is the erosion of trust, lost opportunities, and customers facing increased risk. On the other path is increased trust, greater opportunities, and happy, safe customers. I have a lot vested in Apple, and I’d like to keep it that way.

As most of you probably know by now, Apple shipped a limited number of video iPods loaded with a Windows virus that could infect an attached PC. The virus is well known and all antivirus software should stop it, but the reality is this is an extremely serious security failure on the part of Apple. The numbers are small and damages limited, but there was obviously some serious breakdown in their security controls and QA process.

As with many recent Apple security stories this one was about to quietly fade into the night were it not for Apple PR. In Apple’s statement they said, “As you might imagine, we are upset at Windows for not being more hardy against such viruses, and even more upset with ourselves for not catching it.” As covered by George Ou and Amrit Williams, this statement is embarrassing, childish, and irresponsible.
It’s the technical equivalent of blaming a crime victim for their own victimization. I’m not defending the security problems of XP, which are a serious epidemic unto themselves, but this particular mistake was Apple’s fault, and easily preventable.

While Mike Rothman agrees with Ou and Williams, he correctly notes that this is just Apple staying on message. That message, incorporated into all major advertising and marketing, is that Macs are more secure, and if you’d just switch to a Mac you wouldn’t have to worry about spyware and viruses. It’s a good message, today, because it’s true. I bought my mom a Mac and talked my sister into switching her small business to Macs primarily because of security. I’m overprotective and no longer feel my friends and family can survive on the Internet on XP. Vista is a whole different animal, fundamentally more secure than its predecessors, but it’s not available yet so I couldn’t consider that option. Thus it was iMac and Mac mini city.

But when Apple sticks to this message in the face of a contradictory reality they expose themselves, and their customers, to greater risks. Reality is starting to change and Apple isn’t, and therein lies my concern.

All relationships are founded on trust and need. (Amrit has another good post on this topic in business relationships.) One of the keystones of trust is security. I like to break trust into three components:

  • Intent: How do you intend to treat participants in a relationship?
  • Capability: Can you behave in compliance with your intent?
  • Communication: Can you effectively communicate both your intent and capability?

Since there’s no perfect security we always need to make security tradeoffs. Intent decides how far you need to go with security, while capability defines if you’re really that secure, and communication is how you get customers to believe both your intent and capability. Recent actions by Apple are breaking their foundations of trust. As a business this is a


Research Revisited: Security Snakeoil

Wow! Sometimes we find things in the archives that still really resonate. This is a short one, but I’ll be damned if I don’t expect to see this exact phrase used on the show floor at RSA this week. This was posted September 25, 2006. I guess some things never change…

How to Smell Security Snake Oil in One Sentence or Less

If someone ever tells you something like the following: “We defend against all zero day attacks using a holistic solution that integrates the end-to-end synergies in security infrastructure with no false positives.” Run away.


New Paper: The Future of Security: The Trends and Technologies Transforming Security

This paper originally started with a blog post called Inflection. Sure, many of our papers start as a series of posts, but this time the post came long before I thought of a paper. I started seeing a bunch of interrelated trends, and what appeared to be some likely unavoidable outcomes. Unlike most predictive pieces, I focused as much on inherent security trends as on disruptive forces. Less “new attacks” and more “new ways we are doing things”.

The research continued, but I never expected a chance to write it up as a paper. Out of nowhere the folks at Box contacted me to see if I had an interest in writing up and licensing something on where security is headed. I pointed them toward Inflection, and it hit exactly what they were looking for. So I got a chance to pull together the additional research I have been thinking about since that post back in 2012, and compile everything into a paper. As an analyst it isn’t often I get a chance to focus on far-field research, so I am excited to get this one out the door.

This paper is also being co-released by the Cloud Security Alliance, who reviewed and approved its findings. I hope you find it useful, and please keep in mind that everything I discuss is in practice someplace today, but I expect it to take ten or more years for these practices to become widespread and their full implications to kick in.

The Future of Security (Full Report, PDF)
Executive Overview (PDF)


Research Revisited: The Data Breach Triangle

This has always been one of my favorite posts, and it is one I still use regularly. I even have a slide on it in my RSA presentation for this week. The triangle still guides a lot of my thinking on data security. I am also now starting to think in terms of workload security, about which you will be hearing more soon. In this age of increased focus on egress filtering and incident response, I think the triangle does a good job of capturing a direction many security professionals realize we need to head. I originally posted this May 12, 2009.

The Data Breach Triangle

I’d like to say I first became familiar with fire science back when I was in the Boulder County Fire Academy, but it really all started back in the Boy Scouts. One of the first things you learn when you’re tasked with starting, or stopping, fires is something known as the fire triangle. Fire is a pretty fascinating process when you dig into it. It demonstrates many of the characteristics of life (consumption, reproduction, waste production, movement), but is just a nifty chemical reaction that’s all sorts of fun when you’re a kid with white gas and a lighter (sorry Mom).

The fire triangle is a simple model used to describe the elements required for fire to exist: heat, fuel, and oxygen. Take away any of the three, and fire can’t exist. (In recent years the triangle was updated to a tetrahedron, but since that would ruin my point, I’m ignoring it.) In wildland fires we create backburns to remove fuel, in structure fires we use water to remove heat, and with fuel fires we use chemical agents to remove oxygen.

With all the recent breaches, I came up with the idea of a Data Breach Triangle to help prioritize security controls. The idea is that, just like fire, a breach needs three elements. Remove any of them and the breach is prevented. It consists of:

  • Data: The equivalent of fuel – information to steal or misuse.
  • Exploit: The combination of a vulnerability and/or an exploit path to allow an attacker unapproved access to the data.
  • Egress: A path for the data to leave the organization. It could be digital, such as a network egress, or physical, such as portable storage or a stolen hard drive.

Our security controls should map to the triangle, and technically only one side needs to be broken to prevent a breach. For example, encryption or data masking removes the data (depending a lot on the encryption implementation). Patch management and proactive controls prevent exploits. Egress filtering or portable device control prevents egress. This assumes, of course, that these controls actually work – which we all know isn’t always the case.

When evaluating data security I like to look for the triangle – will the controls in question really prevent the breach? That’s why, for example, I’m a huge fan of DLP content discovery for data cleansing – you get to ignore a whole big chunk of expensive security controls if there’s no data to steal. For high-value networks, egress filtering is a key control if you can’t remove the data or absolutely prevent exploits (exploits being the toughest part of the triangle to manage). The nice bit is that exploit management is usually our main focus, but breaking the other two sides is often cheaper and easier.
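The triangle’s logic is just a conjunction: a breach requires all three elements, so negating any one prevents it. Here is a minimal sketch of that idea (illustrative only – the class and field names are mine, not from the original post):

```python
# Toy model of the Data Breach Triangle: a breach is only possible when
# all three elements are present. Names are hypothetical, chosen to
# mirror the post's terminology (Data, Exploit, Egress).
from dataclasses import dataclass

@dataclass
class BreachTriangle:
    data_present: bool   # is there information worth stealing? (fuel)
    exploit_path: bool   # can an attacker reach that data?
    egress_path: bool    # can the data leave the organization?

    def breach_possible(self) -> bool:
        # Break any one side and the breach is prevented.
        return self.data_present and self.exploit_path and self.egress_path

# Example: DLP content discovery scrubbed the data, so even with an
# exploitable path and open egress there is nothing to breach.
scrubbed = BreachTriangle(data_present=False, exploit_path=True, egress_path=True)
print(scrubbed.breach_possible())  # False
```

The point of the sketch is the `and`: controls on any single side (masking the data, patching the exploit, filtering the egress) are each sufficient to flip the result to False, which is why the cheapest side to break is the one worth finding.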


Research Revisited: The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About

This post doesn’t hold up that well, but it goes back to 2006 and the first couple weeks the site was up. And I think it is interesting to reflect on how my thinking has evolved, as well as the landscape around the analysis.

In 2006 the debate was all about full vs. responsible disclosure. While that still comes up from time to time, the debate has clearly shifted. Many bugs aren’t disclosed at all, now that there is a high-stakes market where researchers can support entire companies just discovering and selling bugs to governments and other bidders. The legal landscape and prosecutorial abuse of laws have pushed some researchers to keep things to themselves. The adoption of cloud services also changes things, requiring completely different risk assessment around bug discovery.

Some of what I wrote below is still relevant, and perhaps I should have picked something different for my first flashback, but I like digging into the archives and reflecting on something I wrote back when I was still with Gartner, wasn’t even thinking about Securosis as more than a blog, and was in a very different place in my life (i.e., no kids). Also, this is a heck of an excuse to include a screenshot of what the site looked like back then.

The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About

As a child one of the first signs of my budding geekness was a strange interest in professional “lingo”. Maybe it was an odd side effect of learning to walk at a volunteer ambulance headquarters in Jersey. Who knows what debilitating effects I suffered due to extended childhood exposure to radon, the air imbued with the random chemicals endemic to Jersey, and the staccato language of the early Emergency Medical Technicians whose ranks I would feel compelled to join later in life. But this interest wasn’t limited to the realm of lights and sirens; it extended to professional subcultures ranging from emergency services, to astronauts, to the military, to professional photographers.
As I aged and even joined some of these groups I continued to relish the mechanical patois reserved for those earning expertise in a domain. Lingo is often a compression of language; a tool for condensing vast knowledge or concepts into a sound bite easily communicated to a trained recipient, slicing through the restrictive ambiguity of generic language. But lingo is also used as a tool of exclusion, or to mask complexity.

The world of technology in general, and information security in particular, is as guilty of lingo abuse as any doctor, lawyer, or sanitation specialist. Nowhere is this more apparent than in our discussions of “Disclosure”. A simple term evoking religious fervor among hackers, dread among vendors, and misunderstanding among normal citizens and the media, who wonder if it’s just a euphemism for online dating (now with photos!). Disclosure is a complex issue worthy of full treatment; but today I’m going to focus on just 3 dirty little secrets. I’ll cut through the lingo to focus on the three problems of disclosure that I believe create most of the complexity. After the jump, that is…

“Disclosure” is a bizarre process nearly unique to the world of information technology. For those of you not in the industry, “disclosure” is the term we use to describe the process of releasing information about vulnerabilities (flaws in software and hardware that attackers use to hack your systems). These flaws aren’t always discovered by the vendors making the products. In fact, after a product is released they are usually discovered by outsiders who either accidentally or purposely find the vulnerabilities. Keeping with our theme of “lingo”, they’re often described as “white hats”, “black hats”, and “agnostic transgender grey hats”.
You can think of disclosure as a big-ass product recall where the vendor tells you “mistakes were made” and you need to fix your car with an updated part (except they don’t recall the product, you can only get the part if you have the right support contract and enough bandwidth, you have to pay all the costs of the mechanic (unless you do it yourself), you bear all responsibility for fixing your car the right way, if you don’t fix it or fix it wrong you’re responsible for any children killed, and the car manufacturer is in no way actually responsible for the car working before the fix, after the fix, or in any related dimensions where they may sell said product). It’s really all your fault, you know.

Conceptually “disclosure” is the process of releasing information about the flaw. The theory is consumers of the product have a right to know there’s a security problem, and with the right level of details can protect themselves. With “full disclosure” all information is released, sometimes before there’s a patch, sometimes after; sometimes the discoverer works with the vendor (not always), but always with intense technical detail. “Responsible disclosure” means the researcher has notified the vendor, provided them with details so they can build a fix, and doesn’t release any information to anyone until a patch is released or they find someone exploiting the flaw in the wild. Of course, some vendors use the concept of responsible disclosure as a tool to “manage” researchers looking at their products. “Graphic disclosure” refers to either full disclosure with extreme prejudice, or online dating (now with photos!).

There’s a lot of confusion, even within the industry, as to what we really mean by disclosure and whether it’s good or bad to make this information public. Unlike many other industries we seem to feel it’s wrong for a vendor to fix a flaw without making it public.
Some vendors even buy flaws in other vendors’ products; just look at the controversy around yesterday’s announcement from TippingPoint. There was a great panel with all sides represented at the recent Black Hat conference.

So what are the dirty little secrets?

  • Full disclosure helps the bad guys
  • It’s about ego, control, and competition
  • We need the threat of full disclosure or vendors


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments, just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.