Wednesday, January 28, 2009

Analysis: Next-Gen Blade Servers

By Steven Hill

Data Center Diet Plan

Should you slim down with the newest generation of blade server? Vendors claim the latest systems offer improved flexibility and promise to reduce data center bloat. They're right -- if your infrastructure can support the power and cooling requirements. We examine the latest technology.

If the name of the data center game is getting more computing power for less, blades should be the hottest thing since South Beach. They're more manageable and deliver better TCO than their 1U counterparts--our latest testing shows as much as a fourfold increase in processor density combined with 20 percent to 30 percent power savings.

So why did Gartner Dataquest put blade shipments at an anemic 850,000 units this year, just 10 percent of total server sales?

Because earlier-generation blade servers were like fad diets--long on hype, short on delivery. Despite vendor promises, they didn't represent much of a savings over conventional devices. Most of the systems we evaluated when we reviewed blade servers in June 2003 were struggling with first-generation blues--an 8- or 10-blade chassis used the same amount of rack space as equivalent 1U devices and suffered I/O bandwidth limitations between blades and backplanes, making them better suited for Web server consolidation than for running critical databases.

But even then, one fact came through loud and clear: Managing blades is substantially easier than dealing with individual racked boxes.

Today, blade server designs have improved, with enough midplane throughput and modularity at the chassis to provide investment protection for their three-to-five-year lifespan. Processor density has increased, and power consumption is lower than you might expect.

They also deliver incredible flexibility. Instead of limiting the blade system to particular types of I/O--interconnect, network or storage--vendors overprovision the I/O channel, providing sufficient bandwidth for targeted environments, or let IT allocate I/O as it sees fit. Seems vendors are following the lead of core switch vendors: Make the frame big enough to jam in just about anything you want for the foreseeable future.

Enterprises are finally catching on. Blade shipments will rise to 2.3 million units by 2011, to account for almost 22 percent of all server purchases, according to Gartner Dataquest. Although blades are still more expensive than conventional 1U servers, you should see operational savings in the 30 percent range, according to Imex Research. Those changes make the newest generation of blade servers excellent candidates for high-demand core and server-virtualization applications.

That's the blade server story vendors want you to know about. The less flattering side: Even while delivering power savings, the energy demands of these high-density systems will still tax the overall infrastructures of many older--and even some newer--data centers. You may be able to quadruple the processor density of a rack, but can you power and cool it?

Many data-center managers are saying no. By 2011, 96 percent of current data-center facilities are projected to be at their power and cooling capacity limits, according to a recent survey of the Data Center Users' Group conducted by Emerson Network Power. Forty percent of respondents cited heat density or power density as the biggest issue they're facing. We examine HVAC and electrical requirements and offer design suggestions in our "This Old Data Center" special issue and in our Data Center Power Issues analyst report at nwcanalytics.com.

The Players

Most first-tier server vendors now offer blade lines. Gartner Dataquest, IDC and our readership agree that IBM, Hewlett-Packard and Dell, in that order, are the top three players in the market. IBM holds about a 10-point lead over HP, with Dell a distant third at less than half HP's share. No other single vendor is in the double digits, though judging by our testing, Dell should be watching over its shoulder for Sun Microsystems.

Not coincidentally, IBM is flexing its muscle by proposing to standardize blade system modules and interconnects around its own design. Although standardization would benefit the enterprise, we're not convinced IBM's proposal is the best solution (see "IBM and the Quest for Standardization," below).

We asked Dell, Egenera, Fujitsu, HP, IBM, Rackable Systems and Sun to send a base system, including a chassis and four x86-based server blades with Ethernet connectivity, to our new Green Bay, Wis., Real-World Labs®, where we had EqualLogic's new PS3800XV iSCSI SAN array and a Nortel 5510 48-port Gigabit Ethernet data-center switch online for testing.

We were surprised when only HP, Rackable Systems and Sun agreed to participate. See our methodology, criteria, detailed form-factor and product-testing results at nwcreports.com; our in-depth analysis of the blade server vendor landscape and poll data can be found at nwcanalytics.com.

Not only are the cooling and power improved in our new lab digs, it's actually possible to see daylight while testing, if you set your chair just right. It may have been the bright light, but once we got going, the differences in our three systems came into sharp focus.

Three For The Money

HP submitted its new 10U BladeSystem c7000 enclosure along with two full-height and two half-height ProLiant c-Class server blades. Sun sent its recently released 19U Sun Blade 8000 Modular System with four full-height Sun Blade X8400 Server Modules. Rackable Systems provided five of its Scale Out server blade modules.

Rackable blew away the competition in sheer node count. Rather than basing its design on an eight- or 10-blade chassis, Rackable goes large with a proprietary, passive rack design that supports 88 blades and has an intriguing focus on DC power. If you're in need of processors, and lots of 'em, Scale Outs may be just the ticket.

What excited us most about the blades from both HP and Sun was their potential for future expansion to the next generation of processors and high-speed I/O interfaces. The midplane bandwidth offered by these systems will be perfectly suited for the rapidly approaching 10 Gigabit Ethernet and 8-Gb/10-Gb Fibre Channel adapters and switches (for more on blades for storage, see "Storage on the Edge of a Blade," below). These bad boys have clearly been designed to provide investment protection.

HP also is on the verge of introducing its new Virtual Connect technology for Ethernet and Fibre Channel. Exclusive to HP and the BladeSystem C-Series, Virtual Connect modules allow as many as four linked BladeSystem chassis to be joined as a Virtual Connect domain that can be assigned a pool of World Wide Names for Fibre Channel or MAC and IP addresses for Ethernet. These addresses are managed internally and dynamically assigned by the Virtual Connect system to individual blades within those chassis. By managing these variables at chassis level rather than at blade or adapter level, individual blades can remain stateless, making it easier for system administrators to swap out failed modules or assign hot spares for failover.
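To make the stateless-blade idea concrete, here is a minimal sketch--our own illustration, not HP's actual software or API--of chassis-level identity pooling: the MAC addresses and World Wide Names belong to the chassis slot rather than to the blade hardware, so a replacement blade simply inherits its predecessor's identity.

```python
# Conceptual sketch (not HP Virtual Connect itself) of chassis-level identity
# pooling: network identities (MACs/WWNs) are bound to a chassis slot, not to
# the blade hardware, so a swapped-in blade inherits the same identity.

class IdentityPool:
    def __init__(self, macs, wwns):
        self.free_macs = list(macs)
        self.free_wwns = list(wwns)
        self.slot_identity = {}            # slot number -> (mac, wwn)

    def assign(self, slot):
        """Bind the next free MAC/WWN pair to a chassis slot."""
        if slot not in self.slot_identity:
            self.slot_identity[slot] = (self.free_macs.pop(0),
                                        self.free_wwns.pop(0))
        return self.slot_identity[slot]

    def replace_blade(self, slot):
        """A repaired blade or hot spare reuses the slot's existing identity."""
        return self.slot_identity[slot]


# Hypothetical locally administered addresses, purely for illustration.
pool = IdentityPool(macs=["02:16:00:00:00:%02x" % i for i in range(16)],
                    wwns=["50:06:0b:00:00:c2:62:%02x" % i for i in range(16)])

print(pool.assign(slot=3))         # the identity follows the slot...
print(pool.replace_blade(slot=3))  # ...so the replacement blade keeps it
```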

Simplify My Life

Clustering for availability as well as monitoring and management are the top two blade functions, according to our poll. And indeed, vendors tout their systems' ability to dispense with third-party management software and/or KVMs as a key selling point.

Physically configuring, installing and cabling conventional servers can be a massively time-consuming process that must be done during a scheduled downtime to ensure other servers in the rack are not disrupted accidentally. Even something as simple as replacing a failed NIC or power supply can be risky business.

But with blade systems, modules are designed to be hot-swapped without the need to bring down the entire chassis. The time required for server maintenance can drop from hours to minutes, and the unified management interfaces offered by most blade systems can dramatically simplify the process of server administration from a software standpoint.

All three systems we tested offer onboard, blade-level network management interfaces as well as front-accessible KVM and USB ports for direct connections.

The tools Rackable provided offered a degree of control over any number of remote servers, but the absence of more powerful, integrated features, such as remote KVM and desktop redirection over Ethernet, is a noticeable omission in Rackable's Scale Out design when compared with chassis-based blade systems. And with its chassis-less design, Rackable doesn't offer a unified management interface.

Conversely, HP and Sun provide extremely detailed, Web-based management and monitoring capabilities at both server and chassis level. Both systems offer integrated lights-out management interfaces, for example, enabling us to do status monitoring and system configuration across all the blades in our chassis.

But HP's ProLiant was clearly the class of the management category. HP went the extra mile by adding to its Web-based controls a local multifunction LCD interface at the base of each chassis that supports all the management features found in the Web interface, without the need to connect a laptop or KVM to the chassis. At first we were unimpressed with yet another LCD display, but we were won over by the elegant simplicity and surprising flexibility offered by that little screen. The HP Insight Display interface is identical remotely or locally and offers role-based security, context-sensitive help, graphical displays that depict the location of problem components and a chat mode that lets technicians at remote locations interactively communicate with those working on the chassis. We could easily imagine how useful two-way communication capabilities would be in a distributed enterprise.

Go Green

The costs of power and cooling in the data center are at an all-time high, and it's not just the person signing the utility checks who's taking notice. In July, the House of Representatives passed House Bill 5646 on to the Senate. This measure directs the Environmental Protection Agency to analyze the rapid growth and energy consumption of computer data centers by both the federal government and private enterprise. Sponsors of the bill cited data-center electricity costs that are already in the range of $3.3 billion per year, and estimated annual utility costs for a 100,000 square foot data center at nearly $6 million.
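As a rough sanity check, the bill sponsors' $6 million figure lines up with the Department of Energy's $0.07-per-kilowatt-hour average cited in the sidebar below; the implied power density is our own back-of-envelope estimate, not a figure from the bill.

```python
# Back-of-envelope check on the $6 million/year figure, using the $0.07/kWh
# U.S. average cited later in this article. The watts-per-square-foot density
# is our own rough estimate (IT load plus cooling), not a cited number.

annual_cost = 6_000_000          # dollars per year, per the bill's sponsors
price_per_kwh = 0.07             # dollars per kWh, U.S. average
floor_space = 100_000            # square feet

kwh_per_year = annual_cost / price_per_kwh          # ~85.7 million kWh
avg_load_kw = kwh_per_year / (365 * 24)             # ~9,800 kW continuous
watts_per_sqft = avg_load_kw * 1000 / floor_space   # ~98 W per square foot

print(f"{kwh_per_year:,.0f} kWh/yr -> {avg_load_kw:,.0f} kW -> {watts_per_sqft:.0f} W/sq ft")
```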

Denser systems run faster and hotter, and the cost of running a server could well exceed its initial purchase price in as little as four years. Because the actual energy use of any blade system is dependent on a number of variables, such as processor, memory, chipset, disk type and so on, it's virtually impossible to establish a clear winner in this category. All three vendors told us their systems offer an estimated 20 percent to 25 percent reduction in energy costs over similarly equipped conventional servers, and all provide temperature-monitoring capabilities.

We were intrigued by Rackable's DC-power option, which could provide the greatest savings for enterprises that have implemented large-scale DC power distribution throughout the data center. Most of the power efficiency provided by the chassis-based blade systems from HP and Sun stems from the ability to consolidate power and cooling at rack level. But HP takes this concept a step further with its Thermal Logic technology, which continually monitors temperature levels and energy use at blade, enclosure and rack levels and dynamically optimizes airflow and power consumption to stay within a predetermined power budget.

Expensive Real Estate

Another key benefit of blade systems is the ability to pack a lot of processing power into the least amount of rack space. Of course, how well a vendor accomplishes this goes beyond the number of processors--we looked at how blades are grouped and how well they're kept supplied with data.

Rackable's side-by-side and back-to-back rack provides by far the greatest processor density per square foot of floor space--as many as 88 dual-processor servers equate to 176 CPUs per rack, or twice that many cores if the processors are dual-core, putting Rackable well ahead of HP and Sun in this category.

Coming in second in the density challenge is HP. Four 10U BladeSystem c7000 enclosures fit in a single rack; each enclosure holds 16 dual-processor half-height blades, squeezing 128 CPUs into a conventional rack. The 19U Sun Blade 8000 chassis fit two to a rack, and each chassis can handle 10 four-processor server modules, for a total of 80 processors per rack.
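Here are those per-rack counts spelled out; the core figures simply assume dual-core processors, as noted above for Rackable.

```python
# Per-rack processor counts quoted above, spelled out.
# Core counts assume dual-core processors.

systems = {
    # vendor: (chassis/rack groupings per rack, blades per grouping, sockets per blade)
    "Rackable Scale Out":   (1, 88, 2),  # one side-by-side, back-to-back rack
    "HP BladeSystem c7000": (4, 16, 2),  # four 10U enclosures, half-height blades
    "Sun Blade 8000":       (2, 10, 4),  # two 19U chassis per rack
}

for name, (groups, blades, sockets) in systems.items():
    cpus = groups * blades * sockets
    print(f"{name}: {cpus} processors per rack ({cpus * 2} cores if dual-core)")
```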

But processor density is only part of the story.

When comparing blade systems, it's important to note the difference between two- and four-processor blades. The dual-processor options from Rackable and HP are the basic equivalent of conventional 1U and 2U servers, while each quad-processor Sun Blade X8400 matches a traditional 4U box. This configuration supports the assignment of much larger tasks on a blade-by-blade basis and makes the Sun system a better candidate for demanding applications.

Check Out Those Pipes

The modern IT environment is as much about data throughput as processing capacity. That means for blades to be competitive against conventional servers, they must keep up with high-speed fabrics, such as 4-Gb Fibre Channel, 4x InfiniBand and 10-Gb Ethernet, while still supporting multiple GbE links for management and basic network traffic.

When it comes to the backplane, we found little fundamental difference between HP's and Sun's designs--PCIe, Fibre Channel, InfiniBand or Ethernet, it's all serial in nature. So it really comes down to who's got the bigger pipes, and in this case it's Sun.

What makes this an important IT issue is the fact that, for now, a decision to buy a given blade system locks you into a single vendor's hardware platform for a period of time. Ensuring that the chassis you purchase can accommodate future high-speed interfaces and processors provides investment protection.

Did we mention the Sun Blade 8000 has huge pipes? Its midplane can handle up to 9.6 terabits per second in combined throughput. According to Sun, that works out to 160 Gbps of usable bandwidth per blade once protocol overhead and other factors are taken into account.

HP's BladeSystem C-Series offers a substantial 5 Tbps of midplane bandwidth when measured at pure line speed, more than enough to support multiple high-speed fabrics from each blade. HP also offers additional high-speed cross-connects between adjacent blade sockets, designed to improve performance for multiblade clustered applications as well as to support plans for future storage-specific blades.

In the case of Rackable Systems, the 2 Gbps offered by the dual GbE ports is perhaps the weakest link in the Scale Out design--such limited bandwidth will probably rule these blades out for many high-performance, high-bandwidth applications.

The other side of the I/O issue is port diversity and flexibility. Blade systems can potentially be greater than the sum of an equivalent set of racked servers, thanks to their ability to share support for multiple fabrics and reduce cabling with integrated switch modules. The Sun Blade offered the most potential here by virtue of the remarkable bandwidth of its PCIe midplane architecture. But HP's BladeSystem C-Series currently provides by far the greatest port diversity when it comes to available backplane switches and pass-through modules.

What'll All This Cost Me?

Comparing pricing on a purely apples-to-apples basis turned out to be a fruitless quest because the variety of approaches taken by our three vendors made direct comparison difficult. To award our pricing grade, we created a formula that pitted two dual-processor, Opteron 285-based server blades from HP and Rackable against a single four-processor Opteron 885-based blade from Sun. We made sure to price each system on a blade level with similar memory, SATA storage and dual-port Gigabit Ethernet connectivity, without consideration for the differences in chassis/rack capabilities. Not perfect, but at least we're in the produce family.

Rackable's Scale Outs came in at $9,780 for two dual-processor blades, making them the least expensive option on a processor-by-processor basis. The HP BladeSystem landed in the middle, at $11,768 for a pair of ProLiant BL465c half-height blades. An equivalent Sun Blade would run $24,885 when you include the cost of the two GbE PCIe ExpressModules required to match the port counts of the other two systems.

This was no surprise: The Sun X8400 blade is a quad-processor system, and it was a foregone conclusion that it would be more expensive than its dual-processor counterparts. In all fairness to Sun, this was like comparing a pair of two-way, 1U servers to a single four-way, 4U system; even though the processor count is the same across configurations, it costs a lot more to make a four-CPU server.
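For a rough per-socket view, dividing each as-tested price by its four Opteron sockets gives the following; these are simply the numbers above normalized, not vendor list prices.

```python
# As-tested prices from the comparison above, normalized per processor socket.
# Each configuration covers four Opteron sockets, per our pricing formula.

configs = {
    "Rackable Scale Out (2 x dual-CPU blades)": (9_780, 4),
    "HP ProLiant BL465c (2 x dual-CPU blades)": (11_768, 4),
    "Sun Blade X8400 (1 x quad-CPU blade)":     (24_885, 4),
}

for name, (price, sockets) in configs.items():
    print(f"{name}: ${price:,} total, ${price / sockets:,.0f} per processor")
```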

See more pricing and warranty details in "Bullish on Blades" at nwcreports.com.


Blade Servers By the Numbers:

0.7%
Worldwide market share currently held by RLX Technologies, which helped pioneer the blade server concept. Source: Gartner Dataquest

8 kilowatts
Power level above which it generally becomes difficult to cool a standard rack, because of airflow limitations in most data centers. Source: Forrester

25 kilowatts
Potential per-rack power draw if populated with dense servers or blade chassis. Source: Forrester

$0.07
Average U.S. per-kilowatt-hour cost for electricity. Source: U.S. Department of Energy

40 to 1
Reduction in cables possible by using blades rather than 1U rack servers. Source: Burton Group

7%
SMBs that use blades, compared with 25% running traditional racked servers. Source: Forrester

IBM And The Quest For Standardization

Buying into a blade system has long meant purchasing servers and interconnect modules from only one vendor. How much this worries you depends on your point of view--many companies prefer to deal with a single vendor for the support, service and pricing benefits offered by volume server purchases. Still, the perception remains that blade systems are more constrictive than their conventionally racked counterparts.

This year, IBM launched Blade.org, which it describes as a "movement" to provide a standardized design for blade system modules and interconnects that would theoretically lead to the commoditization of blade parts. The only drawback for competing vendors: These standards are based on IBM's BladeCenter concept.

Standardized hardware improves competition and makes life better for IT, and there's plenty of precedent--PCI, ATX, SATA, SCSI. But while we applaud the idea of design specifications for blade server components, we're not convinced IBM's BladeCenter system represents the best-of-breed benchmark for long-term standardization.

For our money, the new high-performance concepts behind the Sun Blade 8000 Modular System deserve serious consideration. Taking I/O modules off the blade, moving them to hot-swappable locations on the backplane and using vendor-independent PCI Express ExpressModule technology for communications and interconnects is a more future-proof methodology than perpetuating the existing blade concept. And it would likely engender more industry cooperation than Blade.org's IBM-centric solution.

Regardless, 60 software and component vendors, including Citrix, Brocade and Symantec, have signed on to the program. It's not surprising: They have nothing to lose and everything to gain.

Storage On The Edge Of A Blade

Given the smaller form factor of blade server modules, early-generation blades relied on internally mounted 2.5-inch laptop drives for onboard storage. But these laptop-class drives offered neither the performance nor the reliability of their 3.5-inch counterparts.

Today's blade customers favor boot-from-SAN configurations, but the growing popularity of 2.5-inch enterprise-class SAS and higher-capacity SATA drives has brought the convenience of externally accessible, hot-swappable drives to blade systems.

Because the Rackable Systems Scale Out blades we tested support full-size 3.5-inch drives, they could be fitted with as much as 1.5 TB of SATA disk per blade using dual, internally mounted 750-GB drives. Sun's blades offer two hot-swappable 2.5-inch disks per module, and Hewlett-Packard's c-Class system supports two hot-swappable 2.5-inch drives on its half-height blades and four drives on full-height blades.

In 2007, HP plans to introduce c-Class storage blades that will hold six SAS drives, supporting up to 876 GB per half-height blade and linked to an adjacent blade slot with a dedicated x4 Serial link. We can't wait.

To qualify for this review, we asked vendors to submit a base blade chassis, all software required to manage chassis and blades, and four matching server blades with extended 64-bit x86 processors and Gigabit Ethernet (GbE) connectivity for conventional network throughput and iSCSI storage.
