Commentary

Serdar Yegulalp

Many SSL Connections Missing Added Protection, Netcraft Says

Most SSL traffic doesn't use PFS, opening it up to possible future decryption, according to the Internet security firm.

How safe is SSL from being intercepted and decrypted? Not very, says Internet security and research firm Netcraft.

In a blog post titled "SSL: Intercepted today, decrypted tomorrow," Netcraft put forth a disturbing set of conclusions about the security of SSL in the real world. Spurred by recent revelations about the National Security Agency logging and capturing encrypted traffic for possible later decryption, Netcraft pointed out that SSL (now formally known as TLS) doesn't necessarily protect people in these circumstances:

"The reason that governments might consider going to great lengths to log and store high volumes of encrypted traffic is that if the SSL private key to the encrypted traffic later becomes available--perhaps through court order, social engineering, successful attack against the website, or through cryptanalysis--all of the affected site's historical traffic may then be decrypted at once. This really would open Pandora's Box, as on a busy site a single key would decrypt all of the past encrypted traffic for millions of people."

The post goes on to describe how SSL has, in theory, a protection measure against such compromises: perfect forward secrecy, or PFS, in which each SSL session is encrypted under its own ephemeral keys, independent of every other session. What's striking is that, according to Netcraft's analysis, the vast majority of SSL traffic doesn't use PFS.
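As a rough sketch of the kind of per-site check this involves, a few lines of Python using the standard ssl module can show whether a given server negotiates a forward-secret cipher suite. The hostname below is a placeholder, and this is an illustration, not a substitute for a full survey like Netcraft's:

    import socket
    import ssl

    def check_forward_secrecy(hostname, port=443):
        """Report whether a server negotiates a forward-secret key exchange."""
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                name, version, _bits = tls.cipher()
                # ECDHE/DHE suites are forward-secret; a plain RSA key
                # exchange is not. TLS 1.3 dropped non-ephemeral key
                # exchange entirely, so any 1.3 connection qualifies.
                pfs = version == "TLSv1.3" or "DHE" in name
                print(f"{hostname}: {version}, {name} -> {'PFS' if pfs else 'no PFS'}")

    check_forward_secrecy("www.example.com")  # placeholder hostname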

Who's to blame for this?

No one particular authority, it seems; it's a by-product of the way websites, Web servers, Web browsers and end users all work together. While most Web browsers--Microsoft Internet Explorer being an exception--do support PFS, they don't always do so in conjunction with all websites, defaulting instead to less secure standards. Some websites don't even support PFS consistently: Google, for instance, supports it for some of its SSL sites, but not for others.

For perspective, I talked to Phil Pennock, a software engineer with Apcera, an early-stage software company based in San Francisco. He calls Netcraft's concern over the way PFS is implemented "fairly justified," and explains the problem further:

"TLS [the protocol that is synonymous with SSL] uses session keys for encrypting the traffic, [but] the public key crypto behind the certificates is only used to identify the end points and securely arrange those session keys. With only RSA in use [without PFS], the core secret material used to generate the session key is generated by the client and encrypted to the server's public key, so if you can recover the server's key later, you can decrypt that exchange, get the session key and then decrypt the traffic."

PFS, he explains, adds an additional layer of protection for each transaction. That said, the protection afforded by PFS is limited to data in flight, not data at rest, he says: "It does not protect against a warrant served upon the server operators, which forces them to retain all session keys and make them available." SSL also doesn't guard against the contents of the site itself being tampered with; it just protects the contents of communications in transit.
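What PFS changes can be sketched just as briefly. Ephemeral elliptic-curve Diffie-Hellman (ECDHE) is one common way to get it; the curve below is only an illustrative choice:

    from cryptography.hazmat.primitives.asymmetric import ec

    def one_session():
        # Fresh, throwaway key pairs for every handshake; the server's
        # long-term certificate key only signs, it never encrypts secrets.
        client_eph = ec.generate_private_key(ec.SECP256R1())
        server_eph = ec.generate_private_key(ec.SECP256R1())
        return client_eph.exchange(ec.ECDH(), server_eph.public_key())

    # Two sessions yield unrelated secrets, and the ephemeral private
    # keys are discarded -- so a later compromise of the server's
    # long-term key reveals nothing about recorded traffic.
    assert one_session() != one_session()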

Pennock also notes that Netcraft's methodology revolves around who decides which protocols to use for each session. The server is more often the deciding factor in part because higher-security protocols impose more of a performance hit on both the client and the server; older, non-PFS protocols aren't as secure, but they work faster.
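A back-of-the-envelope benchmark illustrates the trade-off. This sketch compares the server-side cost of one plain-RSA key exchange against one classic DHE exchange; the absolute numbers vary enormously by hardware, and modern ECDHE has largely closed the gap:

    import time
    from cryptography.hazmat.primitives.asymmetric import dh, rsa, padding

    # Plain-RSA key exchange cost on the server: one private-key decryption.
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    blob = rsa_key.public_key().encrypt(b"\x00" * 48, padding.PKCS1v15())

    t0 = time.perf_counter()
    for _ in range(50):
        rsa_key.decrypt(blob, padding.PKCS1v15())
    rsa_cost = (time.perf_counter() - t0) / 50

    # Classic DHE cost: a fresh key pair plus a modular exponentiation
    # per handshake. (Parameter generation is one-time and slow.)
    params = dh.generate_parameters(generator=2, key_size=2048)
    peer_pub = params.generate_private_key().public_key()

    t0 = time.perf_counter()
    for _ in range(50):
        params.generate_private_key().exchange(peer_pub)
    dhe_cost = (time.perf_counter() - t0) / 50

    print(f"RSA key exchange: {rsa_cost * 1000:.1f} ms/handshake")
    print(f"DHE key exchange: {dhe_cost * 1000:.1f} ms/handshake")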

"It's only because Google has been on the forefront of advancing TLS practice and specifications to speed up negotiation, and introduce technologies like SPDY, that it has able to now accept some performance sacrifice for security," Pennock says. He also noted that the next generation of processor chipsets--Ivy Bridge and beyond--may well make the performance issues of PFS-enabled SSL all but moot.

Should enterprises move to add PFS-enabled SSL to their servers? Yes, says Pennock, but not blindly.

One key element that must be in place when enabling PFS on the server side is entropy generation. A server must be able to produce high-quality pseudorandom numbers for the sake of good cryptography, a feature now supported in hardware on most late-model CPUs.
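On Linux, administrators can sanity-check this from userspace. The proc path below is Linux-specific (on recent kernels the pool is effectively always full once seeded), and the hardware check simply greps CPU flags:

    import os

    # Linux-specific: the kernel's estimate of pooled entropy, in bits.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print("entropy_avail:", f.read().strip())

    # Rough check for the hardware generator found on Ivy Bridge and
    # later x86 CPUs (the RDRAND instruction).
    with open("/proc/cpuinfo") as f:
        print("rdrand in CPU flags:", "rdrand" in f.read())

    # os.urandom draws from the kernel CSPRNG -- the right source for
    # session keys, provided the pool was seeded properly at boot.
    print("32 random bytes:", os.urandom(32).hex())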

"Any public-facing gateway box should probably have a CPU with entropy generation," if it doesn't already have hardware entropy generation from somewhere else, Pennock says. "This affects enterprise email servers at least as much as Web servers." Servers should also have tools for internal SSL configuration reporting, and not rely on external reports to determine whether or not SSL is set up correctly.

Pennock says he finds it strange that Netcraft's blog post calls attention to the fact that "Russia, long-time target of U.S. spies, is the home of the developer of nginx, the Web server which uses PFS most often."

"Assuming that there must be a connection between nginx and the FSB," Pennock said, "is like assuming a connection between Apache and the NSA."

But Netcraft's basic observations seem sound to him. Ultimately, SSL works best with PFS enabled, and site admins should do their best to make sure the majority of their users are kept as safe as possible.


