The Visibility Factor: Assessing Cloud Risk

There's a storm of change brewing. For IT, the challenge is to guide our organizations to a safe balance.

Greg Shipley

April 8, 2010

A few months ago, my CFO forwarded me a rather sizable "pro-cloud" white paper that's been making the rounds in a number of non-IT executive circles. The paper, written by a venture capital team and targeting CXO readers, made some valid points. But its overarching message was an unapologetic push to use the cloud "whenever and wherever you can." The word "risk" didn't appear anywhere in the document.

The benefits of cloud computing might be real, but the blatant omission of any mention of a downside has all the hallmarks of blind hyping; we wouldn't be surprised if the authors had substantial stakes in one or more cloud providers. The paper also drives home the reality that this discussion, like it or not, is occurring far outside IT circles. In fact, some organizations are using cloud services without IT, security, or risk management teams even being aware that data is leaving the network. One organization we spoke with, for example, didn't know its employees were using Amazon's Elastic Compute Cloud services until those employees tried to expense the bills.

It was accounting--not IT--that blew the whistle.

Now, most enterprises have a hard enough time keeping track of their data, vendors, and contractors with a centralized IT department. The use of cloud-based technology by business personnel blows the centralized model apart, and herein lies the largest governance issue: Who gets to make the decision to outsource a given business function or data set? And who accepts the associated risks?

You'd think we'd have made more progress on the risk management front by now, given that the cloud hype has been spreading across enterprise IT groups for more than a year. We first polled the InformationWeek Analytics audience on this topic in February 2009. While the 547 business technology professionals who responded were intrigued by cloud computing's promise, they were equally concerned about the risks. More than half worried about security defects in the technology itself and loss of proprietary data. One year later, this dynamic still holds: In our February 2010 survey of 518 business technology pros, security concerns again led the list of reasons not to use cloud services, while on the roster of drivers, 77% cited cost savings.

"Has everyone forgotten the dot-com meltdown?" asks a senior VP for a utilities company. "Whole Web sites, along with the companies that ran them, disappeared, never to be seen again. I want to control my own future." Counters an IT professional from the education sector who has outsourced e-mail to Google: "As we grew to over 5,000 accounts, the management, backup, and maintenance got to be prohibitive. We now enjoy 99.999% reliability, up to 20 GB of space per user, and are able to deliver more services."

They're both right. Pushing some functions to a cloud provider frees both staff and computing resources to address other problems. But we need to do a better job managing risk, because make no mistake--there's as much opportunity for disaster as there is room for benefit.

Risk Calculator

Not a month goes by that we don't hear about breaches, lost media, cyberthreats, or inadvertently leaked records. Some data loss incidents result in little more than PR blemishes, while others are racking up real costs that are hitting eight and nine figures. Even the biggest kid in the cloud sky, Google, rocked the mainstream media when it announced concerns over state-sponsored hacking against it and 30-plus other companies in what is now known as the "Aurora" incident. Pundits were quick to ask: If Google, widely considered to employ some of the industry's brightest minds, can fall victim to such attacks, how safe can our data possibly be anywhere?

The point is valid, albeit potentially misleading--the unfortunate reality is that few, if any, organizations can successfully fend off all the sophisticated attacks thrown at them. When looking at cloud providers, the more relevant question is: Has this company successfully implemented the security controls necessary to manage the risks associated with our data?

Decision makers should also honestly assess whether those controls are behind, on par with, or ahead of their own because, let's be honest, we all know in-house IT security challenges remain pervasive.

We also need to gain visibility into, and assess, an array of additional data points, including the cloud provider's quality-assurance processes, service-level agreements, financial health, and dependence on other suppliers. And herein lies one of the greatest challenges: Most cloud providers don't have "transparency" as part of their vocabularies. While writing this report, we spoke with more than a dozen providers running the gamut in terms of focus, years in business, size, and level of openness. On one end of the spectrum, we were given access to key personnel, like Egnyte's CEO and Google's director of security for Google Apps. On the other end, we couldn't even get Amazon to return our calls, much less provide any shred of insight into what it is (or is not) doing security-wise. We hear similar stories from enterprise customers.

When it comes to providing visibility into key control areas, the history and type of cloud provider play a role. Some software-as-a-service (SaaS) vendors have been around for years (even if we didn't always call their services "SaaS"), and the more mature ones are at least familiar with customer audit questionnaires. In contrast, most platform-as-a-service (PaaS) providers are startups and, with a few exceptions, don't seem overly concerned about transparency. Finally, virtualization technology from the likes of VMware and Citrix has brought a mix of infrastructure-as-a-service (IaaS) players to the game. The Rackspaces, Terremarks, and SunGards of the world have been delivering IT hosting services for more than a decade, while Amazon and GoGrid are newer providers of enterprise computing resources. Not surprisingly, the classic co-location and hosting providers that have begun embracing IaaS models are generally slower moving, but they have more experience with being audited and are comfortable providing customers some level of visibility. The new players, not so much.

If transparency becomes a critical decision point--and we think it must--those providers that can readily provide that insight stand to gain significant ground over those that can't. Quite simply, in 2010, transparency is good business.

"It is a challenging area for sure, but we try to be as open as we can with our customers about what we do," says Eran Feigenbaum, director of security for Google Apps, when asked about the challenges Google sees with its business customers. "We aren't an expert in all customer requirements, as each customer has a set that is specific to them. However, if our customers come to us with the controls that they require, we are certainly willing to consider them."

This process of per-customer inquiry and control mapping raises an interesting conundrum for large providers, however: If each enterprise brings a unique set of requirements or demands a tailored audit process, the overhead could quickly overwhelm even the largest vendor. For example, Google says that it's bringing thousands of new Google Apps business customers online per day. If even 1% of those organizations demanded a customized audit or a specific security control, the sheer volume of inquiries would be unmanageable--even for Google.
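To put that scaling math in perspective, here's a back-of-the-envelope sketch. The sign-up rate and per-audit effort are our own assumptions for illustration (Google doesn't publish exact figures), not numbers from any provider.

```python
# Back-of-the-envelope estimate of the bespoke-audit load a large SaaS
# provider would face. Every input below is an assumption for illustration.

signups_per_day = 2_000        # "thousands per day" -- assume 2,000
custom_audit_rate = 0.01       # 1% of new customers demand a custom audit
auditor_days_per_audit = 10    # assumed effort to field one bespoke audit

audits_per_day = signups_per_day * custom_audit_rate
audits_per_year = audits_per_day * 365
auditor_days_per_year = audits_per_year * auditor_days_per_audit

print("Bespoke audits per day:  %d" % audits_per_day)         # 20
print("Bespoke audits per year: %d" % audits_per_year)        # 7,300
print("Auditor-days per year:   %d" % auditor_days_per_year)  # 73,000

# At roughly 240 working days per auditor per year, that's about 300
# full-time staff doing nothing but fielding customer audits.
```

Even under these conservative assumptions, the numbers make the point: one-off, per-customer audits simply don't scale.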

Moreover, identifying which security controls are needed and gaining assurance that a potential cloud provider has them in place are two entirely different activities, and therein lies a major challenge: What should IT be looking for, how does one go about getting that information, and where is the balance between the level of visibility that enterprise customers can live with and the inquiries that cloud providers can handle?

Enter The SAS 70

Most IT pros are familiar with Statement on Auditing Standards (SAS) No. 70 from the American Institute of Certified Public Accountants. Put simply, it's an auditing standard for reporting on the controls a service organization has put into operation. There are two types of SAS 70 audits: A Type I reviews the design of specified controls at a point in time, while a Type II, the more popular of the two, adds at least a minimal level of testing of those controls over a period of time. The output of a SAS 70 Type II audit is typically a "letter of attestation" from the auditor and a report on the "control objectives" that were reviewed during the audit.

Unlike prescriptive standards such as PCI DSS (the Payment Card Industry Data Security Standard), SAS 70 defines the process--the how--by which an audit is performed, but not the criteria--the what--that must be included in that audit. There is certainly value in a SAS 70 Type II audit, but that value depends heavily on the controls being investigated--the what. As any IT professional who has undergone a SAS 70 will attest, you can simply remove controls that you don't want audited. Don't have desktop patching? Strike it. No security integrated into the software development life cycle? Don't have your auditor look at that. Don't run vulnerability scanners? Keep it off the objectives list.
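To make the scoping problem concrete, here's a minimal sketch of how the objectives list drives what a Type II report actually covers. The control names are illustrative, not drawn from any real SAS 70 template.

```python
# Illustrative only: a SAS 70 Type II report attests to exactly the
# control objectives the service organization put in scope -- nothing more.

full_control_set = {
    "desktop patching",
    "secure development life cycle",
    "vulnerability scanning",
    "password policy",
    "physical access control",
}

# The provider strikes whatever it doesn't want examined before the
# audit begins; the auditor tests only what remains.
struck = {
    "desktop patching",
    "secure development life cycle",
    "vulnerability scanning",
}
in_scope = full_control_set - struck

print("Report attests to:", sorted(in_scope))
# -> ['password policy', 'physical access control']
# The letter of attestation is silent on everything that was struck.
```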

The practice of massaging SAS 70 control objectives is bad news for security, and unlike standard controls in the accounting world, we don't have Generally Accepted Accounting Principles for IT Risk. Even worse, many of the templates that SAS 70 auditors use are based on outdated controls. For example, password strength matters little when an attacker can gain administrative access to critical systems through unpatched vulnerabilities. The value of a report is only as good as the controls it examines and the scope of its coverage.

Aggravating the situation, we've seen cloud providers that will provide a letter of attestation but refuse to list the SAS 70 control objectives. This is akin to saying, "Yes, we were audited, but no, we won't tell you what the auditors looked at." Fortunately, many IT professionals and mature cloud providers see the absurdity of this situation. "A SAS 70 letter of attestation without visibility into the control objectives is meaningless," says Google's Feigenbaum. We couldn't agree more.

Just like technology evaluations, the value of an audit is all about the criteria and the testing methods. So how do we make sure the right controls are looked at? Build the list beforehand using a standards-based formula.
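As a sketch of what "build the list beforehand" might look like in practice: start from a published control framework, then diff the provider's attested objectives against it. The baseline entries below are loosely modeled on common ISO/IEC 27002 control areas; they're our illustration, not a complete or official checklist.

```python
# Hedged sketch: compare a provider's attested SAS 70 control objectives
# against a baseline checklist built from a published standard.

# Baseline loosely inspired by ISO/IEC 27002 control areas (illustrative).
baseline = {
    "access control",
    "patch management",
    "secure development life cycle",
    "incident response",
    "vulnerability management",
    "data encryption",
}

# What the provider's SAS 70 report says the auditor actually examined.
attested = {
    "access control",
    "incident response",
    "data encryption",
}

gaps = sorted(baseline - attested)
if gaps:
    print("Press the provider on these unexamined controls:")
    for control in gaps:
        print(" -", control)
else:
    print("The attested objectives cover the full baseline.")
```

Run the same baseline against every candidate provider and you get gap lists you can actually compare, rather than a pile of incommensurable questionnaire responses.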

Standards Time

Today, the IT industry is fragmented in its approach to control assessment methods and types. Some organizations maintain monstrous libraries consisting of thousands of controls. They shove voluminous questionnaires down the throats of suppliers and business partners, and when they do get responses, they often struggle with the resulting data sets. At the other extreme, some organizations don't do anything at all. For example, we spoke with cloud storage provider Egnyte, which offers organizations a cloud-based file service. Egnyte CEO Vineet Jain says that security controls are an essential part of the company's offering, but only about one in 50 customers asks for any validation of those controls. Neither approach makes sense.

Keeping all critical or sensitive data and business functions away from cloud providers for the foreseeable future is the answer for one-quarter of our respondents, and it may be the least risky path for organizations that don't have good supplier assessment processes. Looking ahead, though, there's a clear mandate to ease the overhead associated with IT risk management. Companies that can build centralized systems for validating all vendors' and partners' security controls will have a leg up. Another great idea is assembling benchmarks for audit reports and methods. For example, wouldn't it be nice if there were one, easy--and dare we say, standardized--way to request, gather, and compare cloud provider letters of attestation, third-party reports, and control objectives?

A group spearheaded by Chris Hoff, technical director of the Cloud Security Alliance, and staffed with a sizable team of information security pros is working on such a model. Called CloudAudit A6, the effort aims to standardize the method of getting control attestations and reports from cloud providers, making it easier for potential customers to render informed opinions. The goal is to let cloud providers automate much of the audit, assertion, assessment, and assurance processes, and then give cloud consumers easy access to that information.
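To illustrate the kind of automation Hoff's group has in mind, here's a hypothetical consumer-side sketch. The well-known URI layout, framework name, and control names below are our assumptions for illustration; the actual CloudAudit A6 namespace was still being hammered out at press time.

```python
# Hypothetical sketch of a CloudAudit-style consumer: fetch a provider's
# published control attestations over plain HTTP. The URI layout is an
# assumption for illustration, not the final CloudAudit A6 namespace.

import urllib.error
import urllib.request

PROVIDER = "https://cloud.example.com"   # hypothetical provider
NAMESPACE = "/.well-known/cloudaudit"    # assumed well-known root

def fetch_attestation(framework, control):
    """Retrieve the attestation document for one control, if published."""
    url = "%s%s/%s/%s" % (PROVIDER, NAMESPACE, framework, control)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8")
    except urllib.error.URLError:
        return None  # not published, or provider unreachable

# Pull the same assertions from each candidate provider and compare.
for control in ("access-control", "vulnerability-management"):
    doc = fetch_attestation("iso-27002", control)
    print(control, "->", "published" if doc else "not published")
```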

But will such a standard be adopted by the vendor community? Cloud providers like being audited about as much as IT personnel love being PowerPointed to death, and there's also a legitimate risk that too much visibility into controls may make them easier to circumvent. Just as banks aren't keen on handing out maps of their vaults, some cloud providers want to hold information about their security postures close to the vest. "The good news for customers is that [a SAS 70 report] provides insight into our controls," says Google's Feigenbaum. "The bad news is that, well, it provides insight into our controls. We think we have found a good balance of providing visibility while still maintaining security."

In addition, standards often suffer from the chicken-and-egg conundrum, as vendors are reluctant to put energy into efforts that haven't reached what they consider to be a critical mass. The CloudAudit folks point to a range of companies participating in its initiative, including Amazon, Google, Microsoft, NetSuite, Terremark, and Rackspace. But behind closed doors, we found that some cloud providers are following the typical path of paying lip service to the standards process but holding off on real commitments.

"It's a lot less attractive to us without wide customer adoption," is the sentiment we heard from one vendor. Another made it clear that it's willing to contribute to the effort but will wait before adopting it. Hoff and his team have their work cut out for them, but there's no doubt they'd be doing the industry a huge service if the standard were adopted and pushed into production. We need all the help we can get.

Greg Shipley is CTO of Chicago-based IT risk consultancy Neohapsis. Write to him at [email protected].
