Saturday, January 31, 2009

Leveraging Your Infrastructure

By Mike Fratto

NAC deployments often require more integration than is apparent at first blush, especially when the NAC products don't meet expectations. Take the user login/log-off detection problem I mentioned in my review of ConSentry's product. There are ways to mitigate such problems, or bolster your NAC deployment generally, using features you already have.

The issue is that the LANShield Controller, ConSentry's in-line NAC appliance, didn't detect log-offs properly, which makes it simple for one user to impersonate another: an attacker who is already local to the network can easily pull a cable from a wall jack or PC and jack in. ConSentry did say it is addressing the issue in a future release, but what can you do in the meantime?

First off, if you're using Windows in an Active Directory environment, and who isn't, the first thing you can do is disable logon caching via a Group Policy Object. By default, a workstation caches the last 10 logons. The benefit is that if the workstation can't reach the directory, the user can still log in and be productive. The flip side is that an attacker could just pull the network cable, log in using an AD account that had been cached, reconnect the cable, and get access to the network as the previous user. If you disable logon caching by setting cached logons to zero and don't allow logins using local accounts, then when the directory isn't available, the user can't log in over the network. The downside is that if the user can't log in, they can't work, so you need to ensure you have a fault-tolerant AD deployment or just bite the bullet. At any rate, doing so closes off at least one avenue of attack.
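If you want to check where a given workstation stands before pushing the GPO, the setting behind the "Interactive logon: Number of previous logons to cache" policy is stored under the standard Winlogon registry key. Here's a rough Python sketch that reads it locally; treat the key path and value name as the usual defaults rather than gospel, and the script as illustrative, not a supported tool.

```python
# Rough sketch: read the cached-logon count controlled by the GPO
# "Interactive logon: Number of previous logons to cache".
# Assumes the standard Winlogon key/value; run on the Windows host itself.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

def cached_logon_count() -> int:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        # The value is stored as a string, e.g. "10"
        value, _type = winreg.QueryValueEx(key, "CachedLogonsCount")
        return int(value)

if __name__ == "__main__":
    count = cached_logon_count()
    print("Cached logons allowed:", count)
    if count != 0:
        print("Cached credentials are enabled; the pull-the-cable trick still works.")
```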

Setting cached logons to zero on laptops won't work, however. Windows simply doesn't handle multiple user accounts gracefully without cached logons (or at least I haven't found a way that it does), so if your laptops are set to cache zero logons, users won't be mobile. Besides, another way to bypass ConSentry's solution is to simply yank the cable from a workstation and assign its MAC and IP address to another workstation. We may be able to look to the network for a potential solution, using switch features generally called IP locking, which map an IP address and MAC to a switch port and defeat moving addresses arbitrarily. Now, I haven't tested this yet, but I plan to ferret out the efficacy of this proposed solution and its gotchas.

DHCP is an easy one to handle. Switches from Cisco, Extreme, and HP, to name a few, support DHCP assignment enforcement. The switch snoops on the DHCP handshake and maps the offered IP address to a MAC address and a port. If that IP address shows up on another switch port, all traffic from it should be rejected. And when a switch port goes inactive because the host was unplugged, the IP address mapping is removed. Those two functions promise to thwart physical impersonation. In some cursory testing, I found that Windows hosts, at least, will re-run DHCP when a lost physical connection is restored. I haven't explored all the various situations or other OSes. If you have experience with these features, I would love to hear about it. E-mail or post a response.
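To make that behavior concrete, here's a toy Python model of the binding table a DHCP-snooping switch keeps. It's not any vendor's implementation, just the logic described above: learn a binding from the DHCP exchange, drop traffic whose source address shows up on the wrong port, and flush the binding when the port goes down.

```python
# Toy model of a DHCP-snooping binding table -- illustrative only,
# not any switch vendor's actual implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Binding:
    ip: str
    mac: str
    port: int

class SnoopingTable:
    def __init__(self):
        self._by_ip = {}  # ip -> Binding

    def learn_from_dhcp_ack(self, ip, mac, port):
        """Record the binding when the switch sees the DHCP ACK."""
        self._by_ip[ip] = Binding(ip, mac, port)

    def permit(self, src_ip, src_mac, ingress_port):
        """Allow traffic only if it matches the learned binding."""
        b = self._by_ip.get(src_ip)
        return b is not None and b.mac == src_mac and b.port == ingress_port

    def port_down(self, port):
        """When a host is unplugged, its bindings go away."""
        self._by_ip = {ip: b for ip, b in self._by_ip.items() if b.port != port}

table = SnoopingTable()
table.learn_from_dhcp_ack("10.1.1.50", "00:11:22:33:44:55", port=7)
print(table.permit("10.1.1.50", "00:11:22:33:44:55", 7))   # True
print(table.permit("10.1.1.50", "de:ad:be:ef:00:01", 12))  # False: moved or spoofed
table.port_down(7)
print(table.permit("10.1.1.50", "00:11:22:33:44:55", 7))   # False: binding flushed
```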

That's great for DHCP, but what about hosts with static IP addresses? Yeah, this is where IP address management gets a bit more difficult. You could statically map IP addresses to ports, but in an organization of any size, static mapping becomes an awful lot of work. I don't have an answer there (do you?) except to suggest moving your workstations to DHCP, or just deploying 802.1X. Generally speaking, with DHCP, hosts will keep receiving the same IP address over and over, and you can statically map IP addresses to MAC addresses. That will reduce some of the management overhead once you get the static mappings rolled out.

NAC enforcement and network infrastructure will eventually merge, not necessarily to the exclusion of other NAC technologies, but network enforcement at the switch makes sense whether that means 802.1X, or purpose-built secure switches from the likes of ConSentry or Nevis Networks. When it does, maybe these quirks will just fall by the wayside. In the meantime, check out what your switches can do.

IT Hiring Outlook - 2009

Job and Salary Outlook

With the US economy tanking, the question for IT professionals is this: Is my niche relatively safe?


According to many observers, there is good news for IT folks in a number of sectors, whether they’re veterans with decades of experience or recent graduates whose skills are untested in the marketplace: If you’ve got the right tech skills and can think like a line-of-business manager, you’ll be in demand. “For sure, there is still a shortage of IT skills,” says Jeanne Beliveau-Dunn, head of the certifications group at networking vendor Cisco Systems.

But with the economy likely to shrink for a good part of 2009, will employers be able to build their IT staffs, or at least fill vacant positions? Here’s what we’re hearing around the industry.


Will IT Head Count Rise, Fall or Go Flat in 2009?

The proportion of employers increasing their IT head count will edge up from 40 percent in 2008 to a projected 43 percent in 2009, according to a survey by the Society for Information Management (SIM) published in November 2008 using data collected in June.

Jerry Luftman says this number remains valid, even with the fiasco in the financial markets. “Information systems and business executives are not panicking,” says Luftman, SIM’s vice president for academic affairs. “In previous recessions, IT was the place to cut, cut, cut.” Why not this time? Because IT has become a champion of cost-cutting across the enterprise, Luftman says.

Offshoring may be more of a threat to American IT jobs in 2009 than it has been in recent years. After trending down slightly since 2006, IT budget allocations for offshore outsourcing will jump from 3.3 percent in 2008 to 5.6 percent in 2009, the SIM survey says. Why the increase? “The economy is in a downturn, and many organizations believe they can get IT staff at a much lower cost offshore,” Luftman says. IT executives may also feel financial pressure to try offshoring, even if they have concerns about quality, he says.


Financial IT Jobs Lost and Gained

Obviously, thousands of jobs in distressed banking and financial services firms will be lost to downsizing, mergers or bankruptcies. But for the companies left standing, “even in the worst crisis on Wall Street, networks still have to perform,” says Beliveau-Dunn. For that reason, mission-critical IT operations will carry on, while many growth-oriented IT projects may be suspended or sacked, industry sources say.

But just as the Sarbanes-Oxley accounting reforms created work for IT professionals in the wake of the early-2000s corporate scandals, the current financial crisis is driving up demand for financial IT talent in select niches.

“The government has enacted new securities regulations and modified existing ones” to mitigate the effects of the financial crisis, says Ari Packer, a financial software engineer with Galatea Associates LLC in Somerville, Massachusetts. As a result, Packer and his colleagues have been putting in extra-long hours for their clients – Wall Street broker-dealers – to rework software that must continue to function well in a rapidly changing regulatory environment.


These IT Areas Are Likely to Remain in Demand

Broader areas in IT are likely to see relatively healthy employment in 2009. Through 2008, for example, “there’s been a lot of demand for people doing integration and IT people with a business background,” says Matt Colarusso, a branch manager with Sapphire National Recruiting in Woburn, Massachusetts, a unit of Sapphire Technologies.

Networking skills especially in demand for 2009 will be in three areas, according to Beliveau-Dunn: wireless communications, data center virtualization, and unified communications and collaboration. “Travel budgets have been cut tremendously, so you need to enhance your tools for [distance] collaboration,” she says.

But ultimately, negative growth will hurt IT employment across much of the economy. “To maintain a business of a certain size, you need an IT operation of a certain size,” Packer says. “If the business gets substantially smaller, so will IT.”

While IT salaries are expected to rise 3.7 percent in 2009, according to Robert Half Technology’s 2009 Salary Guide, 2009 clearly won’t be the best time to ask for a big raise. Still, certain specialists – even some new grads – may do relatively well. According to Robert Half, three examples of in-demand IT specialties and 2009 starting salary ranges are:

  • Web developers, with starting salaries between $60,000 and $89,750.
  • Programmer analysts with skills such as .Net, SharePoint, Java and PHP, who will command starting salaries of $60,000 to $100,750.
  • Tier 2 help-desk workers, starting at $36,750 to $48,250.

Friday, January 30, 2009

Windows Vista Virtualization: What You Need To Know To Get Started

By Danielle Ruest and Nelson Ruest

Microsoft's release of Windows Vista and its Service Pack 1 coincides with one of the greatest revolutions in the IT industry: the coming of virtualization technologies. VMware, Oracle, Citrix, Symantec, Sun Microsystems, Thinstall, Microsoft, and others have entered the fray to release products that are oriented towards virtualization.




[Figure: Running Windows Vista through desktop virtualization.]

These products fall into two main categories.

  • Machine virtualization lets you run complete operating systems within a virtualized layer on top of physical hardware, making better use of hardware resources. This level of virtualization is proving to be a boon to organizations at many levels seeking server consolidation, desktop virtualization, disaster recovery planning, and more.
  • Application virtualization lets you "sandbox" applications so that they do not affect the operating system or other applications when deployed to a system. Application virtualization, or AppV, will make it much easier to manage application lifecycles because applications are no longer "installed" on systems, but rather, copied to systems.

Both of these technologies have a significant impact on Vista adoption. Overall, it is a good thing most organizations haven't moved to adopt Vista yet because they will be able to take advantage of virtualization in their deployment. Here's how.

Use Machine Virtualization With Vista

A major barrier to Vista adoption is the hardware required to make the most of its feature set. While the base hardware requirements for Vista are not too unusual, considering the type of hardware that is available now, they are still important. Hardware refreshes are expensive, so whether you have 10 computers or 10,000, you need to plan and budget for hardware refreshes.

The table below outlines two sets of requirements for Vista: Vista Capable and Vista Premium PC configurations. The first allows you to run the base-level Vista editions and the second lets you take advantage of all of Vista's features.

  • Processor: Vista Capable, at least 800 MHz; Vista Premium, 1 GHz x86 (32-bit) or 1 GHz x64 (64-bit)
  • Minimum memory: Vista Capable, 512 Mbytes; Vista Premium, 1 Gbyte
  • Graphics processor: Vista Capable, must be DirectX 9 capable; Vista Premium, DirectX 9 support with a WDDM driver, 128 Mbytes of graphics memory*, Pixel Shader 2.0, and 32 bits per pixel
  • Drives: DVD-ROM drive
  • Accessories: audio output
  • Connectivity: Internet access

* If the graphics processing unit (GPU) shares system memory, then no additional memory is required. If it uses dedicated memory, at least 128 Mbytes is required.
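If you're sizing up an inventory against the Premium column, the check boils down to a handful of comparisons. A quick, admittedly simplistic Python sketch; the thresholds come from the table above, and the sample machine record is invented:

```python
# Quick check of an inventory record against the Vista Premium column above.
# Ignores the shared-vs-dedicated graphics memory caveat; the machine dict
# is a made-up example -- plug in your own inventory data.
PREMIUM = {"cpu_ghz": 1.0, "ram_mb": 1024, "gpu_mem_mb": 128,
           "directx9_wddm": True, "dvd": True}

def meets_premium(machine):
    return (machine["cpu_ghz"] >= PREMIUM["cpu_ghz"]
            and machine["ram_mb"] >= PREMIUM["ram_mb"]
            and machine["gpu_mem_mb"] >= PREMIUM["gpu_mem_mb"]
            and machine["directx9_wddm"]
            and machine["dvd"])

aging_desktop = {"cpu_ghz": 0.8, "ram_mb": 512, "gpu_mem_mb": 64,
                 "directx9_wddm": False, "dvd": True}
print(meets_premium(aging_desktop))  # False -- a candidate for desktop virtualization
```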

If you want to plan for the future, you should really opt for a Vista Premium PC. But what if you didn't have to be too concerned about hardware upgrades and could still have access to Vista's features? That is what machine virtualization can do. In fact, the common term for this process is desktop virtualization.




[Figure: A virtual machine is really just a series of files in a folder.]

With desktop virtualization, you run Windows Vista inside a machine virtualization engine on a central server. Then you give users access to a virtual version of Vista through a remote connection. Users can continue to run older Windows operating systems on their actual desktops, but, through the remote session, access and use the new Vista feature set.

It is fairly easy to do this and you don't necessarily need a server to host the virtual Vista instance. Lots of manufacturers now offer machine virtualization technologies. What is even better is that many of these technologies are completely free! For example, Microsoft offers Virtual PC and Virtual Server 2005, VMware offers VMware Server, and Citrix offers XenServer Express, all for free. Others such as Oracle and Sun both offer free virtual machine engines -- Oracle offers Oracle VM and Sun offers xVM -- but their engines are not optimized for Windows operating systems, so you won't gain by using them for this purpose.

Of the three that do run Windows properly, the best choice might be Citrix XenServer Express since it is an operating system in and of itself. With the Microsoft and VMware offerings, you need to first load a supported OS on the host system, then load the virtualization engine. With XenServer, you just load XenServer, then create the virtual instances of the operating systems you need.

This arrangement can offer the best of all worlds. Here's why:

  1. When you run Windows Vista on a system, server, or PC, you need a license. All retail Vista licenses only allow one single instance of the operating system to run for each license. The Enterprise Edition, however, offers up to four virtual instances of Windows Vista for each license you own. Note that the only way to acquire the Enterprise license is through a Software Assurance program. This is often out of reach for small to medium businesses.
  2. If you use the Microsoft or VMware virtualization engines, you'll need a license for the OS running on the actual hardware system if you choose the Windows version. Then you'll need a license for each instance of Vista you want to run in a virtual instance.
  3. With VMware Server, you can choose the Linux version and run a "free" operating system on the hardware system. Microsoft does not offer a Linux version of its virtualization products.
  4. If you use XenServer Express, then you just need to load it onto a hardware platform. From then on you can create any instance of Windows Vista. Of course, each instance of Vista will require a license.

Whichever solution you choose, you'll gain lots of advantages by running Vista in a virtual machine. First, if you run it from a server, you can provide central backup and control of each machine. Second, because a virtual machine is really nothing but a series of files in a folder, it becomes really easy to create multiple machines; just copy the files and you have a new machine. Third, it becomes so much easier to protect machines because each machine is contained within itself. For example, if a virus attacks a virtual machine and corrupts it, just throw the virtual machine away and restore it from a backup. Voila! You're back to a working machine in no time. And finally, because the resources required to run the virtual machine are on the server or host machine, you don't need powerful resources at the endpoint to run Vista.
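Since a VM really is just files in a folder, cloning one can be as simple as copying that folder. Here's a minimal Python sketch of the idea; the paths are hypothetical, and in practice you'd still regenerate identifiers such as the MAC address and computer name before powering the copy on.

```python
# Sketch: "copy the files and you have a new machine."
# Paths are hypothetical; rename/regenerate MACs and computer names afterward.
import shutil
from pathlib import Path

def clone_vm(source_folder: str, new_name: str) -> Path:
    src = Path(source_folder)
    dst = src.parent / new_name
    shutil.copytree(src, dst)   # the entire VM: config file(s) plus virtual disks
    return dst

clone = clone_vm(r"D:\VMs\Vista-Gold", "Vista-Clone01")
print("New machine files in:", clone)
```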

There is no doubt that machine, or rather desktop, virtualization is an attractive solution for a Vista migration. It might even be an attractive solution for the home user since you can use it to "sandbox" each Vista session and therefore protect all others. However, this is only really viable for the experienced home user.

You'll also need to keep in mind that the license for Windows Vista Home and Home Premium does not allow home users to run them in virtual machines. If you intend to use Vista in a virtual machine, it must be one of Business, Ultimate, or Enterprise and obviously, the latter wouldn't be available to home users.


Use Application Virtualization With Vista

A second major barrier to Vista adoption is application compatibility. Microsoft has modified several core components of the Windows code with Vista and, in many cases, this breaks applications. (See How to Manage Windows Vista Application Compatibility.)

If you decide that you don't want to centralize all your desktops and intend to deploy Vista on each one of your endpoints, then perhaps you need to take a really close look at application virtualization (AppV). AppV is much like machine virtualization, but instead of capturing an entire operating system installation, it captures each and every application you deploy on your systems. Basically, you "sandbox" each application so that it does not make any actual modifications at all when it runs on a system. This is all done through the use of an application virtualization agent that resides either within the application itself or on the operating system.

The single most powerful advantage AppV gives you is that, once an application is virtualized, it will run on any Windows operating system. Just think of it. Each time you move from one OS to another, you have to test all your applications, repackage them to meet target OS requirements, and then deploy them.

With AppV, all of that goes away: once the application is virtualized, it will run on any Windows OS. And because there are no changes to the target OS, you do not need to install the application, but rather simply copy it to the system. That's because AppV does not capture the application installation process the way other systems do; it captures the running state of the application. That's powerful, and it may warrant adopting AppV even if you don't migrate to Vista.

Like machine virtualization, several vendors have released AppV engines. Microsoft offers Application Virtualization 4.5. Symantec offers Software Virtualization Solution (SVS) through its Altiris division. Citrix offers AppV through Citrix XenApp (formerly Presentation Server 4.5). Thinstall offers ThinstallVS. Of these, only Symantec offers a free or personal edition of its AppV engine. This personal edition of SVS is fully functional and can be run on up to 10 PCs. What's even better is that the download site also includes over 40 pre-virtualized applications.

In many ways, application virtualization is even easier to use than machine virtualization. With AppV, the only thing you need to change is the model you use for application management. Home users can virtualize anything from Internet Explorer to full versions of Microsoft Office. Don't like what a recent Web site visit has done to your browser? Just reset the application and you're back to what you had before. This might just be the answer to what your kids need on the home PC.

Imagine: since Symantec's SVS is free for personal use, home computer manufacturers could preload it on their systems. Then you could carry your applications around with you on a USB keychain. Want to do a bit of browsing? Just plug in your USB key and launch your favorite applications. Now that's something everyone can sink their teeth into.

In the office, AppV is even more powerful. We've worked on a ton of migration and deployment projects and we know for a fact that the most time-consuming effort in any such project is application preparation. With AppV, you completely change the dynamics of any deployment project and put all of the application woes behind you. That's a powerful operating model.

There you have it: two different models that can let you move to Windows Vista at your own pace and on your own terms. Now there's no reason to delay. Move to one of the virtualization models first, then you can move to Vista once you've mastered these new IT operating models.


Thursday, January 29, 2009

Virtualization Security: A Solution Looking For A Problem?

By Mike Fratto

One of the themes coming from RSA and from vendors in the last few months is the notion that virtual servers, whether running on a hypervisor or not, are somehow more at risk than physical servers. I don't buy it entirely, because servers and applications that are virtualized tend to sit in tightly controlled data centers. If your data center is secure, so are your servers. Why treat virtualized servers as special?

The security in question, by the way, isn't about ensuring separation of data and resources within the hypervisor. Rather, the problem is that traditional network security functions like firewalling, IDS/IPS, and content filtering are difficult to apply at the virtual switch, where server-to-server communications never cross the wire. After I expressed my skepticism to a few vendors at the show, their product pitches carried a hint of desperation or aggravation (I couldn't tell which) as they tried to convince me why security in the hypervisor is important.

The common statements and leading questions go like this:

  • Well, having security near the servers is important, right? Yes, but that’s a leading question. What am I going to say, no, security near the servers is a bad idea? Thing is, a data center is unlike the rest of the network. It's a controlled environment where you should know what is happening, you don't have random users connecting to the wire, and server-to-server communications are contained within the data center. Communications passing beyond the data center perimeter can be controlled at the choke point.
  • Which leads to the statement that the reason there is often little internal security in the data center is the cost of deploying targeted security inside it, combined with the relatively high capacity requirements, often multi-gigabit to 10 Gbps or more. The bang for the buck is low. Putting security functions in the hypervisor is less expensive than hardware, granted. Not free, just less expensive, so license fees still have to be accounted for and, of course, so does the performance hit within the virtualized environment.
  • Virtualization features like VMware's VMotion, which lets a running VM be moved seamlessly between hypervisors, create a far more dynamic environment than standalone physical computers. Granted, the environment can be more dynamic, but if a company loses control of its virtualized servers, it has big problems anyway.
  • Finally, using virtualized servers to deliver virtualized desktops to users is an interesting application of the technology, but do you really want to intermingle your users with your data center? That's like plugging your access switches directly into the data center. Virtual desktops should be partitioned off from the data center and treated like any other desktop.

All of this is great in theory and I could very well be missing the threats to virtualized servers, but I really don't see any difference in risk or threats between a server or application running on bare iron versus running on a hypervisor. If your data center has good controls and is following good management processes already, those processes will apply to all servers.

Granted, there are some considerations specific to virtualization, like preventing resource starvation, hardening the hypervisor properly, and ensuring there are effective controls to keep VM resources such as memory, disk, and CPU partitioned within the same hypervisor.

Like anything in security, you first need to determine the threat vectors against a resource, the who and the how, and then develop controls to mitigate successful exploitation. Once the controls are identified, you have to determine where to employ them in a virtualized environment. Interserver communication in an n-tier application may be controlled within the network if you can guarantee that the various servers will always communicate through the physical network; that is an architectural and process issue. However, if interserver communications occur between servers on the same hypervisor, then a hypervisor-integrated product may be necessary, and several vendors, like Reflex Security and Montego Networks, have products to suit; I am sure there are others. Of course, there are also host-based solutions that can be used on servers, real or virtualized. Just don't get caught up in the virtualization hype. A computer is a computer, and good management practices are your only path to success.
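As a back-of-the-napkin way to apply that process, here's a small Python sketch that walks the inter-server flows of an n-tier application and suggests an enforcement point: the physical choke point, a hypervisor-resident control, or a host-based one. The categories come from the reasoning above; the flow data is invented.

```python
# Sketch: pick an enforcement point per inter-server flow,
# following the reasoning above. Flow data is invented for illustration.
def enforcement_point(flow):
    if flow["crosses_physical_network"]:
        return "physical firewall/IPS at the data center choke point"
    if flow["same_hypervisor"]:
        return "hypervisor-integrated control (or host-based agent)"
    return "host-based control on each server"

flows = [
    {"name": "web -> app", "crosses_physical_network": True,  "same_hypervisor": False},
    {"name": "app -> db",  "crosses_physical_network": False, "same_hypervisor": True},
]
for f in flows:
    print(f["name"], "->", enforcement_point(f))
```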

Wednesday, January 28, 2009

Analysis: Next-Gen Blade Servers

By Steven Hill

Data Center Diet Plan

Should you slim down with the newest generation of blade server? Vendors claim the latest systems offer improved flexibility and promise to reduce data center bloat. They're right -- if your infrastructure can support the power and cooling requirements. We examine the latest technology.

If the name of the data center game is getting more computing power for less, blades should be the hottest thing since South Beach. They're more manageable and deliver better TCO than their 1U counterparts--our latest testing shows as much as a fourfold increase in processor density combined with 20 percent to 30 percent power savings.

So why did Gartner Dataquest put this year's blade shipments at an anemic 850,000 units, just 10 percent of total server sales?

Because earlier-generation blade servers were like fad diets--long on hype, short on delivery. Despite vendor promises, they didn't represent much of a savings over conventional devices. Most of the systems we evaluated when we reviewed blade servers in June 2003 were struggling with first-generation blues: an 8- or 10-blade chassis used the same amount of rack space as the equivalent 1U devices and suffered I/O bandwidth limitations between blades and backplanes, making them better suited for Web server consolidation than for running critical databases.

But even then, one fact came through loud and clear: Managing blades is substantially easier than dealing with individual racked boxes.

Today, blade server designs have improved, with enough midplane throughput and modularity at the chassis to provide investment protection for their three-to-five-year lifespan. Processor density has increased, and power consumption is lower than you might expect.

They also deliver incredible flexibility. Instead of limiting the blade system to particular types of I/O--interconnect, network or storage--vendors overprovision the I/O channel, providing sufficient bandwidth for targeted environments, or let IT allocate I/O as it sees fit. Seems vendors are following the lead of core switch vendors: Make the frame big enough to jam in just about anything you want for the foreseeable future.

Enterprises are finally catching on. Blade shipments will rise to 2.3 million units by 2011, to account for almost 22 percent of all server purchases, according to Gartner Dataquest. Although blades are still more expensive than conventional 1U servers, you should see operational savings in the 30 percent range, according to Imex Research. Those changes make the newest generation of blade servers excellent candidates for high-demand core and server-virtualization applications.

That's the blade server story vendors want you to know about. The less flattering side: Even while delivering power savings, the energy demands of these high-density systems will still tax the overall infrastructures of many older--and even some newer--data centers. You may be able to quadruple the processor density of a rack, but can you power and cool it?

Many data-center managers are saying no. By 2011, 96 percent of current data-center facilities are projected to be at their power and cooling capacity limits, according to a recent survey of the Data Center Users' Group conducted by Emerson Network Power. Forty percent of respondents cited heat density or power density as the biggest issue they're facing. We examine HVAC and electrical requirements and offer design suggestions in our "This Old Data Center" special issue and in our Data Center Power Issues analyst report at nwcanalytics.com.

The Players

Most first-tier server vendors now offer blade lines. Gartner Dataquest, IDC and our readership agree that IBM, Hewlett-Packard and Dell, in that order, are the top three players in the market. IBM holds about a 10-point lead over HP, with Dell a distant third at less than half HP's share. No other single vendor is in the double digits, though judging by our testing, Dell should be watching over its shoulder for Sun Microsystems.

In an unlikely coincidence, IBM is flexing its muscle by proposing to standardize blade system modules and interconnects around its design. Although standardization would benefit the enterprise, we're not convinced IBM's proposal is the best solution (see "IBM And The Quest For Standardization," below).

We asked Dell, Egenera, Fujitsu, HP, IBM, Rackable Systems and Sun to send a base system, including a chassis and four x86-based server blades with Ethernet connectivity, to our new Green Bay, Wis., Real-World Labs®, where we had EqualLogic's new PS3800XV iSCSI SAN array and a Nortel 5510 48-port Gigabit Ethernet data-center switch online for testing.

We were surprised when only HP, Rackable Systems and Sun agreed to participate. See our methodology, criteria, detailed form-factor and product-testing results at nwcreports.com; our in-depth analysis of the blade server vendor landscape and poll data can be found at nwcanalytics.com.

Not only are the cooling and power improved in our new lab digs, it's actually possible to see daylight while testing, if you set your chair just right. It may have been the bright light, but once we got going, the differences in our three systems came into sharp focus.

Three For The Money

HP submitted its new 10U C-Series c7000 enclosure along with two full-height and two half-height ProLiant BLc server blades. Sun sent its recently released 19U Sun Blade 8000 Modular System with four full-height Sun Blade X8400 Server Modules. Rackable Systems provided five of its Scale Out server blade modules.

Rackable blew away the competition in sheer node count. Rather than basing its design on an eight- or 10-blade chassis, Rackable goes large with a proprietary, passive rack design that supports 88 blades and has an intriguing focus on DC power. If you're in need of processors, and lots of 'em, Scale Outs may be just the ticket.

What excited us most about the blades from both HP and Sun was their potential for future expansion to the next generation of processors and high-speed I/O interfaces. The midplane bandwidth offered by these systems will be perfectly suited for the rapidly approaching 10 Gigabit Ethernet and 8-Gb/10-Gb Fibre Channel adapters and switches (for more on blades for storage, see "Storage On The Edge Of A Blade," below). These bad boys have clearly been designed to provide investment protection.

HP also is on the verge of introducing its new Virtual Connect technology for Ethernet and Fibre Channel. Exclusive to HP and the BladeSystem C-Series, Virtual Connect modules allow as many as four linked BladeSystem chassis to be joined as a Virtual Connect domain that can be assigned a pool of World Wide Names for Fibre Channel or MAC and IP addresses for Ethernet. These addresses are managed internally and dynamically assigned by the Virtual Connect system to individual blades within those chassis. By managing these variables at chassis level rather than at blade or adapter level, individual blades can remain stateless, making it easier for system administrators to swap out failed modules or assign hot spares for failover.
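Conceptually, Virtual Connect's trick is an address pool owned by the enclosure domain rather than by the blades. The Python sketch below illustrates that stateless-blade pattern in the abstract; it is not HP's implementation, just the idea described above.

```python
# Conceptual model of a chassis-owned address pool assigning MACs/WWNs
# to blade slots so the blades themselves stay stateless. Illustrative only.
class AddressPool:
    def __init__(self, addresses):
        self._free = list(addresses)
        self._assigned = {}   # slot -> address

    def assign(self, slot):
        if slot not in self._assigned:
            self._assigned[slot] = self._free.pop(0)
        return self._assigned[slot]

    def replace_blade(self, slot):
        """A swapped-in blade inherits the slot's address -- no rezoning needed."""
        return self._assigned[slot]

macs = AddressPool(["02:00:00:00:00:%02x" % i for i in range(1, 65)])
print(macs.assign(slot=3))         # the address follows the slot...
print(macs.replace_blade(slot=3))  # ...so a replacement blade looks identical
```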

Simplify My Life

Clustering for availability as well as monitoring and management are the top two blade functions, according to our poll. And indeed, vendors tout their systems' ability to dispense with third-party management software and/or KVMs as a key selling point.

Physically configuring, installing and cabling conventional servers can be a massively time-consuming process that must be done during a scheduled downtime to ensure other servers in the rack are not disrupted accidentally. Even something as simple as replacing a failed NIC or power supply can be risky business.

But with blade systems, modules are designed to be hot-swapped without the need to bring down the entire chassis. The time required for server maintenance can drop from hours to minutes, and the unified management interfaces offered by most blade systems can dramatically simplify the process of server administration from a software standpoint.

All three systems we tested offer onboard, blade-level network management interfaces as well as front-accessible KVM and USB ports for direct connections.

The tools Rackable provided offered a degree of control over any number of remote servers, but the absence of more powerful, integrated features, such as remote KVM and desktop redirection over Ethernet, is a noticeable omission in Rackable's Scale Out design when compared with chassis-based blade systems. And with its chassis-less design, Rackable doesn't offer a unified management interface.

Conversely, HP and Sun provide extremely detailed, Web-based management and monitoring capabilities at both server and chassis level. Both systems offer integrated lights-out management interfaces, for example, enabling us to do status monitoring and system configuration across all the blades in our chassis.

But HP's ProLiant was clearly the class of the management category. HP went the extra mile by adding to its Web-based controls a local multifunction LCD interface at the base of each chassis that supports all the management features found in the Web interface, without the need to connect a laptop or KVM to the chassis. At first we were unimpressed with yet another LCD display, but we were won over by the elegant simplicity and surprising flexibility offered by that little screen. The HP Insight Display interface is identical remotely or locally and offers role-based security, context-sensitive help, graphical displays that depict the location of problem components and a chat mode that lets technicians at remote locations interactively communicate with those working on the chassis. We could easily imagine how useful two-way communication capabilities would be in a distributed enterprise.

Go Green

The costs of power and cooling in the data center are at an all-time high, and it's not just the person signing the utility checks who's taking notice. In July, the House of Representatives passed House Bill 5646 on to the Senate. This measure directs the Environmental Protection Agency to analyze the rapid growth and energy consumption of computer data centers by both the federal government and private enterprise. Sponsors of the bill cited data-center electricity costs that are already in the range of $3.3 billion per year, and estimated annual utility costs for a 100,000 square foot data center at nearly $6 million.

Denser systems run faster and hotter, and the cost of running a server could well exceed its initial purchase price in as little as four years. Because the actual energy use of any blade system is dependent on a number of variables, such as processor, memory, chipset, disk type and so on, it's virtually impossible to establish a clear winner in this category. All three vendors told us their systems offer an estimated 20 percent to 25 percent reduction in energy costs over similarly equipped conventional servers, and all provide temperature-monitoring capabilities.

We were intrigued by Rackable's DC-power option, which could provide the greatest savings for enterprises that have implemented large-scale DC power distribution throughout the data center. Most of the power efficiency provided by the chassis-based blade systems from HP and Sun stems from the ability to consolidate power and cooling at rack level. But HP takes this concept a step further with its Thermal Logic technology, which continually monitors temperature levels and energy use at blade, enclosure and rack levels and dynamically optimizes airflow and power consumption to stay within a predetermined power budget.

Expensive Real Estate

Another key benefit of blade systems is the ability to pack a lot of processing power into the least amount of rack space. Of course, how well a vendor accomplishes this goes beyond number of processors--we looked at how blades were grouped and how well they're kept supplied with data.

Rackable's side-by-side and back-to-back rack provides by far the greatest processor density per square foot of floor space--as many as 88 dual-processor servers equate to 176 CPUs per rack, twice that if you count dual-core processors, putting Rackable well ahead of HP and Sun in this category.

Coming in second in the density challenge is HP. Four 10U BladeSystem c7000 enclosures fit in a single rack; each enclosure holds 16 dual-processor half-height blades, squeezing 128 CPUs into a conventional rack. The 19U Sun Blade 8000 chassis fit two to a rack, and each chassis can handle 10 four-processor server modules, for a total of 80 processors per rack.
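The per-rack arithmetic behind that ranking is simple enough to show in a few lines of Python, using the configurations described above:

```python
# Per-rack processor counts from the configurations described above.
racks = {
    "Rackable Scale Out": 88 * 2,        # 88 dual-processor blades per rack
    "HP BladeSystem c7000": 4 * 16 * 2,  # 4 enclosures x 16 half-height dual-CPU blades
    "Sun Blade 8000": 2 * 10 * 4,        # 2 chassis x 10 quad-processor modules
}
for name, cpus in sorted(racks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {cpus} CPUs per rack")
# Rackable Scale Out: 176, HP BladeSystem c7000: 128, Sun Blade 8000: 80
```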

But processor density is only part of the story.

When comparing blade systems, it's important to note the difference between two- and four-processor blades. The dual-processor options from Rackable and HP are the basic equivalent of conventional 1U and 2U servers, while each quad-processor Sun Blade X8400 matches a traditional 4U box. This configuration supports the assignment of much larger tasks on a blade-by-blade basis and makes the Sun system a better candidate for demanding applications.

Check Out Those Pipes

The modern IT environment is as much about data throughput as processing capacity. That means for blades to be competitive against conventional servers, they must keep up with high-speed fabrics, such as 4-Gb Fibre Channel, 4x InfiniBand and 10-Gb Ethernet, while still supporting multiple GbE links for management and basic network traffic.

When it comes to total backplane bandwidth, we found little difference between HP's and Sun's designs. PCIe, Fibre Channel, InfiniBand or Ethernet--it's all serial in nature. It really comes down to who's got the pipes, and in this case it's Sun.

What makes this an important IT issue is the fact that, for now, a decision to buy a given blade system locks you into a single vendor's hardware platform for a period of time. Ensuring that the chassis you purchase can accommodate future high-speed interfaces and processors provides investment protection.

Did we mention the Sun Blade 8000 has huge pipes? Its midplane can handle up to 9.6 terabits per second in combined throughput. According to Sun, this equates to 160 Gbps in usable bandwidth per blade when you add in protocol overhead and other factors.

HP's BladeSystem C-Series offers substantial 5-Tbps midplane bandwidth when measured at pure line speed, more than enough to support multiple high-speed fabrics from each blade. HP also offers additional high-speed cross-connects between adjacent blade sockets, designed to improve performance for multiblade clustered applications, as well as to support plans for future storage-specific blades.

In the case of Rackable Systems, the 2-Gbps offered by the dual GbE ports is perhaps the weakest link in the Scale Out design--this much lower bandwidth potential is a shortcoming that will probably limit use in many high-performance, high-bandwidth applications.

The other side of the I/O issue is port diversity and flexibility. Blade systems can potentially be greater than an equivalent sum of racked servers, thanks to their ability to share support of multiple fabrics and reduce cabling with integrated switch modules. The Sun Blade offered the most potential here by virtue of the remarkable bandwidth of its PCIe midplane architecture. But HP's BladeSystem C-Series currently provides by far the greatest port diversity when it comes to available backplane switches and pass-through modules.

What'll All This Cost Me?

Comparing pricing on a purely apples-to-apples basis turned out to be a fruitless quest because the variety of approaches taken by our three vendors made direct comparison difficult. To award our pricing grade, we created a formula that pitted two dual-processor, Opteron 285-based server blades from HP and Rackable against a single four-processor Opteron 885-based blade from Sun. We made sure to price each system on a blade level with similar memory, SATA storage and dual-port Gigabit Ethernet connectivity, without consideration for the differences in chassis/rack capabilities. Not perfect, but at least we're in the produce family.

Rackable's Scale Outs came in at $9,780 for two dual-processor blades, making them the least expensive option on a processor-by-processor basis. The HP BladeSystem landed in the middle, at $11,768 for a pair of ProLiant BL465c half-height blades. An equivalent Sun Blade would run $24,885 apiece when you include the cost of the two PCIe Express GbE modules required to match the port counts of the other two systems.

This was no surprise: The Sun X8400 blade is a quad-processor system, and it was a foregone conclusion that it would be more expensive than its dual-processor counterparts. In all fairness to Sun, this was like comparing a pair of two-way, 1U servers to a single four-way, 4U system; even though the processor count is the same across configurations, it costs a lot more to make a four-CPU server.
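Normalizing those as-tested prices to a per-processor figure makes both the gap and the quad-socket premium plain; a quick worked example in Python:

```python
# Per-processor cost from the as-tested prices above.
quotes = {
    "Rackable Scale Out (2 x dual-CPU blades)": (9_780, 4),
    "HP ProLiant BL465c (2 x dual-CPU blades)": (11_768, 4),
    "Sun Blade X8400 (1 x quad-CPU blade)":     (24_885, 4),
}
for config, (price, cpus) in quotes.items():
    print(f"{config}: ${price / cpus:,.0f} per processor")
# Roughly $2,445 vs. $2,942 vs. $6,221 per processor.
```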

See more pricing and warranty details in "Bullish on Blades" at nwcreports.com.


Blade Servers By the Numbers:

0.7%
Worldwide market share currently held by RLX Technologies, which helped pioneer the blade server concept. Source: Gartner Dataquest

8 kilowatts
Power level above which it generally becomes difficult to cool a standard rack, because of airflow limitations in most data centers. Source: Forrester

25 kilowatts
Potential per-rack power draw if populated with dense servers or blade chassis. Source: Forrester

$0.07
Average U.S. per-kilowatt-hour cost for electricity. Source: U.S. Department of Energy

40 to 1
Reduction in cables possible by using blades rather than 1U rack servers. Source: Burton Group

7%
SMBs that use blades, compared with 25% running traditional racked servers. Source: Forrester
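Put two of those numbers together and the utility bill for a single dense rack becomes tangible. The quick Python calculation below covers the IT load only; the rough doubling for cooling and power distribution is a rule-of-thumb assumption, not a figure from the sources above.

```python
# Annual electricity cost for one fully loaded 25 kW rack at $0.07/kWh.
rack_kw = 25          # per-rack draw cited above (Forrester)
price_per_kwh = 0.07  # average U.S. rate cited above (DOE)
hours_per_year = 24 * 365

it_load_cost = rack_kw * hours_per_year * price_per_kwh
print(f"IT load alone: ${it_load_cost:,.0f} per year")        # about $15,330
# Rule-of-thumb assumption: cooling and power distribution can roughly
# double that figure in an inefficient facility.
print(f"With overhead (~2x): ${2 * it_load_cost:,.0f} per year")
```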

IBM And The Quest For Standardization

Buying into a blade system has long meant purchasing servers and interconnect modules from only one vendor. How much this worries you depends on your point of view--many companies prefer to deal with a single vendor for the support, service and pricing benefits offered by volume server purchases. Still, the perception remains that blade systems are more constrictive than their conventionally racked counterparts.

This year, IBM launched Blade.org, which it describes as a "movement" to provide a standardized design for blade system modules and interconnects that would theoretically lead to the commoditization of blade parts. The only drawback for competing vendors: These standards are based on IBM's BladeCenter concept.

Standardized hardware improves competition and makes life better for IT, and there's plenty of precedent--PCI, ATX, SATA, SCSI. But while we applaud the idea of design specifications for blade server components, we're not convinced IBM's BladeCenter system represents the best-of-breed benchmark for long-term standardization.

For our money, the new high-performance concepts behind the Sun Blade 8000 Modular System deserve serious consideration. Taking I/O modules off the blade, moving them to hot-swappable locations on the backplane and using vendor-independent PCIe Express Module technology for communications and interconnects is a more future-proof methodology than perpetuating the existing blade concept. And, it would engender more industry cooperation than the Blade.org's IBM-centric solution.

Regardless, 60 software and component vendors, including Citrix, Brocade and Symantec, have signed on to the program. It's not surprising--they have nothing to lose and everything to gain.

Storage On The Edge Of A Blade

Given the smaller form factor of blade server modules, early-generation blades relied on internally mounted 2.5-inch laptop drives for onboard storage. But these laptop-class drives offered neither the performance nor the reliability of their 3.5-inch counterparts.

Today's blade customers favor boot-from-SAN configurations, but the growing popularity of 2.5-inch enterprise-class SAS and higher-capacity SATA drives has brought the convenience of externally accessible and hot-swappable drives to blade systems.

Because the Rackable Systems Scale Out blades we tested support full-size 3.5-inch drives, they could be fitted with as much as 1.5 TB of SATA disk per blade using dual, internally mounted 750-GB drives. Sun's blades offer two hot-swappable 2.5-inch disks per module, and Hewlett-Packard's c-Class system supports two hot-swappable 2.5-inch drives on its half-height blades and four drives on full-height blades.

In 2007, HP plans to introduce c-Class storage blades that will hold six SAS drives, supporting up to 876 GB per half-height blade and linked to an adjacent blade slot with a dedicated x4 Serial link. We can't wait.

To qualify for this review, we asked vendors to submit a base blade chassis, all software required to manage chassis and blades, and four matching server blades with extended 64-bit x86 processors and Gigabit Ethernet (GbE) connectivity for conventional network throughput and iSCSI storage.

Tuesday, January 27, 2009

Line of Sight (LOS)

Line of sight can be broken into two categories:

Visual Line of Sight:

Visual line of sight must be achieved: when standing at one antenna's position, you must be able to see the remote antenna.

Radio Line of Sight:

Radio line of sight must also be achieved. The radio path occupies a football-shaped region around the visual path, known as the Fresnel zone, which must be kept clear of obstructions. If you are unable to maintain radio line of sight, you must realign the antennas or increase the mast height of both antennas until you achieve a quality RF link.
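For a rough go/no-go check on clearance, the widest point of that football, the first Fresnel zone radius at mid-path, can be estimated from link distance and frequency using the standard approximation. A short Python sketch; the example link values are made up:

```python
# First Fresnel zone radius at mid-path: r = 8.66 * sqrt(d / f)
# with d in kilometers, f in GHz, r in meters (standard approximation).
import math

def fresnel_radius_m(distance_km: float, freq_ghz: float) -> float:
    return 8.66 * math.sqrt(distance_km / freq_ghz)

# Example: a 5 km link at 5.8 GHz (made-up values)
r = fresnel_radius_m(5, 5.8)
print(f"Keep roughly {r:.1f} m of clearance at mid-path "
      f"(60% of that, about {0.6 * r:.1f} m, is the usual minimum).")
```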




Rolling Review Introduction: Switching Infrastructure

By Mike Fratto

Sooner or later, most IT pros land on the pointy end of a switch upgrade. But if you simply re-up with your existing vendor—especially if that's the market leader—you could miss a prime opportunity to enhance your network via cutting-edge technology at a price that beats the competition.


Of course, whiz-bang can't come at the expense of dependability: When we asked network admins why they're upgrading their switch architectures, 56% named reliability as the main driver, followed by more bandwidth at the core and access layer. This need for speed is reflected in a recent Infonetics report that predicts sales increases of 10% in Gigabit Ethernet ports and a doubling of 10 Gigabit Ethernet port sales. That doesn't surprise us: The 36% premium for a Gigabit Ethernet port over a 10/100 port, roughly $63, is chump change when you consider the extra bandwidth and network future-proofing.

While Cisco is the undisputed market leader in terms of units shipped, that doesn't mean it has a lock on new technologies, service and support, reliability, or cost. Switches from rival vendors, such as Alcatel-Lucent, Extreme Networks, Foundry Networks, Hewlett-Packard ProCurve and 3Com, compete feature for feature. HP's policy of free firmware upgrades for the ProCurve switch line, for example, is a huge benefit if you let your support contract lapse or purchase used equipment.

To examine what vendors are offering for switching gear, we created an RFI for a wholesale switch infrastructure upgrade. We based the request on our fictional fast-food purveyor, TacDoh, which debuted back in 2003 when it went in search of outsourced network management.

Like many enterprises, TacDoh has grown organically through mergers and acquisitions and comprises an eclectic mix of new gear and older equipment that's still chugging along. Devices have been replaced as needed, but this piecemeal approach means the company isn't taking advantage of the latest technology. That's a problem because bandwidth and security needs are rising like one of TacDoh's signature pastries.

We built an RFI laying out a five-year plan. First, we specified migrating to VoIP from existing Centrex service, mandating a robust, scalable network. We also want power over Ethernet and to take advantage of newer monitoring technology, like flow-based analysis. Finally, we're investigating network access control and other security features to mitigate the damage caused by worm outbreaks, rogue access points and DHCP servers, and other malicious activity. Of course, the ability to scale capacity to meet new demands and ensure resiliency is crucial.

Kid In A Doughnut Store
Today's switches have features that enhance everything from port configuration to traffic control. In fact, you'd be hard pressed to find a pure Layer 2 switch—Layer 3 routing replete with multiple protocols like RIP, OSPF and BGP is the norm. We would certainly consider replacing our core router, but we don't need routing at the distribution and access layers.

VLANs are an effective way to segment the network based on where employees are; however, statically assigning ports to a VLAN is only slightly less cumbersome than moving patch cables. We want to gain the efficiency inherent in a single switch architecture that can be managed via a central console, simplifying adds, moves and changes, not to mention deployment and configuration backup.

As part of our efficiency push, automation features like Link Layer Discovery Protocol (LLDP) and LLDP-Media Endpoint Discovery (LLDP-MED) will ease the transition to VoIP phones. Rather than having to manually map phone locations and configure switch ports as phones move to different locations on the network, LLDP-MED can discover endpoints, determine configuration parameters like VLAN assignment and power requirements, and gather the location information that is used to locate a phone in case of emergency.

Security features are being built into switches at a dizzying rate as well. Beyond 802.1X port-based authentication, new switches are capable of detecting anomalous traffic, like rapid increases in utilization, scan and worm activity, ARP spoofing, and other low-level ills. More important, some devices can dynamically map DHCP leases to MAC addresses and ports, and even deny access to hosts that didn't complete a DHCP exchange, thereby thwarting users who statically assign IP addresses to get around DHCP. In addition, 802.1X is being enhanced with the capability to authenticate multiple hosts on the same port, and even to have the switch port act as an 802.1X supplicant for MAC address authentication (see the sidebar "Playing Nice With 802.1X," below).

We certainly want to avoid blocking legitimate access, but the more protection we can place out at the edge, the more effective our security will be.

Fancy features notwithstanding, redundant, hot swappable hardware is critical to ensure resiliency and flexibility. Network resiliency is enhanced by Layer 2 technologies like link aggregation, spanning tree and rapid spanning tree, to quickly reroute connections in case of link or switch failure.

Finally, we may need to support multiple hosts on a port where the downstream device, like a hub or older access switch, doesn't recognize 802.1X. Many vendors claim support for multiple authenticated hosts on a port, but that could mean the port state is based on the first successful authentication. Advanced features like per-host authentication and configuration via ACL, VLAN assignment, and QoS all offer granular control.

If we get everything we want, then it comes down to price and support as TacDoh looks to balance feature sets with capital costs and maintenance—cheaper upfront isn't always the best long-term deal when you factor in support and costs for hot-spare parts. We asked for list price to keep our analysis on an even footing, but switching is by and large a commoditized market; the days of paying list for hardware have long since passed. From talking to administrators, expect to lop 15% to 25% off list, depending on your purchasing power.

Mike Fratto is Lead Analyst for the NAC Immersion Center and is Managing Editor/Labs for InformationWeek.

Sidebar: Playing Nice With 802.1X
While 802.1X port authentication ensures that only authenticated users can access the network, it's not without its headaches and can, in fact, be the bane of automation. In a perfect world, you'd be able to plug any device into any port and the port would respond properly. However, an 802.1X port in an unauthenticated state denies all traffic by default. IP phones that use LLDP and LLDP-MED, the link-layer discovery protocols, to request configuration information can't pass that traffic until the port authenticates, for example, and other protocols, like Wake-on-LAN and the PXE boot agents used to automate desktop deployments, are equally affected.

Several strategies can enable automation in an 802.1X environment. In smaller networks where you control physical access, you can manually define which ports are 802.1X-enabled and which aren't, and ensure that hosts are connected appropriately. However, ensuring physical connections is difficult when you have a lot of hosts. Most switches can be configured to place a port into a default VLAN if a supplicant isn't responding to 802.1X, or a port may be moved to a VLAN and opened if 802.1X fails authentication. Alternatively, MAC-based authentication can be used to get an IP phone online.
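Those fallbacks amount to a simple per-port decision tree. The Python sketch below captures the logic in the abstract; the VLAN names and the order of preference are illustrative, not any particular switch's defaults.

```python
# Illustrative decision logic for a port in an 802.1X deployment.
# VLAN names and policy order are examples, not any vendor's defaults.
def assign_port(dot1x_result, mac_known, lldp_med_phone):
    if dot1x_result == "success":
        return "user VLAN (per RADIUS attributes)"
    if lldp_med_phone and mac_known:
        return "voice VLAN via MAC-based authentication"
    if dot1x_result == "no-supplicant":
        return "guest VLAN / web-portal redirect"
    return "restricted VLAN (failed authentication)"

print(assign_port("no-supplicant", mac_known=False, lldp_med_phone=False))
print(assign_port("failed", mac_known=False, lldp_med_phone=False))
print(assign_port("no-supplicant", mac_known=True, lldp_med_phone=True))
```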

If you plan to roll out network access control, 802.1X is often a good choice for enforcing control. As more companies upgrade their switching and gain experience with 802.1X, we expect to see broader adoption. However, there's no guarantee that guests will have 802.1X supplicants installed, so alternative authentication measures, like a Web portal or a redirect that forces a user to authenticate to the switch, are useful.

THE INVITATION:
TacDoh is a worldwide purveyor of deep-fried delights sold through major retail outlets. Our corporate office contains sales support, marketing, R&D, and centralized IT. Three branch offices provide localized support for sales. Employee productivity is a critical TacDoh competitive advantage and is fueled by a well-connected network and application infrastructure. Our LAN has served TacDoh's data needs well, but it has grown over time with infrastructure sourced from multiple vendors. The need to leverage network dollars mandates a complete network redesign. TacDoh is searching for a new strategy and design and is very interested in the flexibility, quality of service, availability, and security features in new enterprise switches.

Change and growth are key elements the new network will have to support. Maintaining site connectivity and application support are crucial; in addition, the winning RFI response must accommodate the increasing rate of change forced onto the TacDoh network. We upgraded our cabling to Cat-5E a few years ago and are unlikely to perform another upgrade for a few more years. Generally speaking, each desk has a single network port for a user's PC. We will run fiber between wiring closets and the data center if needed.

We have pilot projects which will be moved into deployment in the next six months. We want to prepare our LAN network in advance by:

• Replacing our PBX with VoIP to all desktops in corporate and remote offices.

• Embracing unified communications to better manage meetings and collaboration. This includes more use of real-time media, both broadcast and point-to-point.

• Adding network access control. We haven't decided product or technology, but we want our infrastructure to support whatever we choose.

• Centralizing all servers into the data center, eliminating departmental application servers.

The network supports voice, video, SAP transactions and Lotus Notes. Voice includes IP trunking as well as telephony for call processing. Voice is accomplished using SIP-based phones at each desk. Video streaming has been used for companywide broadcast events, but we are exploring adding video for collaboration. Application sharing is also a high priority; TacDoh's customer-facing applications are located in the data center. Additionally, the company runs its own instant messaging server and supports employee access to the Internet. Internet traffic, however, is filtered and monitored, in accordance with corporate policy.

Our data center consolidation project is driven by a need to reduce costs and centralize data for management and regulatory reasons. That makes data center availability critical to our IT plans. The chosen network design must increase the fault tolerance of our data center. In addition, we measure service levels for network performance, defined by availability, jitter, error rate and throughput. Network performance is used to assess the effectiveness of our IT infrastructure. The vendor should provide a network design and explain how its solution will maximize performance.

Our Objectives
We want our new network to support our IT plans for the next five years. We are adding more employees and more applications that are consuming bandwidth on the network. Equally important, our real-time media initiatives must have good response times across the LAN. We are not, however, planning on adding more IT staff, so automation and integration into our support systems are critical. We want to achieve the following goals:

• Unify our infrastructure to simplify management and deployment.

• Better support real-time media like voice and video.

• Support network access control so that security isn't compromised by roaming users.

• Leverage enhanced switch services to realize an easily managed network.

• Support capacity increases as we centralize our data center and as more data is pushed across the network.

• Plan for growth. We expect to double our workforce in 24 months as we expand our product line and branch out into related ventures.

Monday, January 26, 2009

Build An Automated, Modular Data Center

By Art Wittmann

You know those guys who start out with a stack of plates and a handful of sticks and eventually manage to get a bunch of platters spinning away? Most IT organizations, unfortunately, manage their data centers an awful lot like that. Each application requires care to get it loaded; you tweak and prod it until finally it's spinning; then you don't dare do anything but fine-tuning ... which you have to do constantly.

In fact, most IT departments look sillier than those sideshow guys because none of IT's plates look the same, and most require specially trained administrators to get them on the stick in the first place.

No wonder it's become fashionable to question the value of IT: Where most enterprises have elevated their core business to a science, IT has largely remained a dark art, with each application and installation uniquely imagined and implemented.


In the data center, uniqueness and specialization have resulted in waste--in servers, storage, power and cooling systems, and perhaps most egregiously in dedicated labor. A clue to the future comes from those who run data centers as their core business: Managed hosting provider Rackspace has standardized on 2U (two rack unit) servers and strives to minimize any variations. Today it might be a Web server, tomorrow it might be running SQL Server, and the next day it might be used in an Exchange cluster. On top of its fairly generic hardware, Rackspace is increasingly using VMware virtual machines and VMotion management software to flexibly meet the needs of its customers. Think of it as an automated plate spinner.

CONFORMITY IS COOL
Standardization and modularization go beyond the same server form factor. While it may have made sense in the mainframe era to design and build data center physical systems to meet the unique needs of the installation, it makes zero sense now. With standard-size racks housing standard-size servers, you can certainly meet your needs with standardized power and cooling systems.

Almost every power and cooling vendor today will work with you to preconfigure systems. Just like car shopping, you pick the model and color and choose from a few option packages; the rest is standardized. The result is a system that's less expensive to buy and own and that behaves far more predictably than its custom counterparts--so much so that the physical room itself need not even have been designed to be a data center.

Particularly for small and midsize data centers--say, those less than 3,000 square feet--the raised floor may no longer be necessary, and in fact you may be far better off without one. In-row and rack-based cooling systems provide the modularity needed and can be deployed in almost any interior room. It can be as simple as this: Get the physical security right, make sure you can pull enough power and provide access for external chillers, then have your modular data center dropped off at the loading dock.

Poking Cisco In The Eye

By Andrew Conry-Murray

Cisco frowns on resellers of used network hardware because it doesn’t get a cut of aftermarket sales. Network Hardware Resale (NHR), a prominent reseller, is going a step further by offering an alternative to Cisco’s SMARTnet maintenance service -- a key revenue source for the networking giant.

Called NetSure, NHR's service offers 24x7 technical support from Cisco-certified technicians and next-day hardware replacement. NHR claims the service costs 50% to 90% less than a SMARTnet contract. While NHR sells new and used hardware from a variety of vendors, Cisco accounts for 85% of its business.

NetSure is a jab at Cisco because it encourages companies to use, or continue using, second-hand gear. It also lets customers hold on to end-of-life equipment that Cisco no longer supports, rather than purchase new products.

NetSure also is part of an ongoing effort to legitimize used gear. Cisco paints the secondary market as run by hucksters and awash in stolen and counterfeit products. It has a point: The U.S. Customs and Border Protection agency recovered more than $14 million worth of counterfeit computer hardware in 2006, including switches, routers, and interface cards. And a lack of trust in used gear was the No. 1 reason IT won't buy from the secondary market, according to an InformationWeek survey.

NHR counters that reputable resellers have mechanisms in place to spot counterfeits and keep them out of circulation. It also points to reseller organizations such as UNEDA, which strives to maintain a high level of business integrity and product quality among its members.

The fact is, the market for used gear is alive and well. That same survey shows 45.5% of respondents occasionally buy used equipment, and almost 15% do so regularly. If NetSure is a success, those numbers may grow.

Saturday, January 24, 2009

Lightning Protection?

Whenever you are installing equipment on a tower, you always need to give some thought to lightning protection. A lightning strike on your tower can take out not only your radio, but every computer connected to your network! It is not possible to completely eliminate the risk (a direct strike on your antenna will probably kill your radio no matter how good your protection), but with some precautions, you can reduce it significantly.

During a thunderstorm, electrical charge fields of several thousand volts can build up over an area that can be several kilometers wide. The goal of your lightning protection devices is to discharge this field to ground without it going through your radio equipment. This is generally done by a spark gap that starts conducting when subjected to a voltage above about 500 V, at which point it continues to conduct until the voltage goes away. These protection devices are rated according to how much energy they can discharge in a single incident, and whether they are reusable. The best protectors are gas discharge tubes; they are also quite expensive.


Our radios have a basic lightning protection circuit incorporated, but we strongly recommend that you install the additional protection device that we offer for each model.

Making 10GE Green – 10GBASE-T and Wake on LAN

Written by Bill Woodruff

Minimizing power sometimes comes from unexpected places. As with most ICs, 10GBASE-T silicon entered the market at a relatively high power, but with process shrinks, it is seeing a geometric reduction in power over time. So how can a data center become greener by taking advantage of today’s sub-6W 10GBASE-T PHYs?

Leverage from Virtualization
Virtualization makes it possible to dynamically choose which physical server a workload runs on. As the workload within a data center ebbs and flows, this capability to perform dynamic consolidation can deliver a great benefit.

The Green Grid consortium has identified putting servers into a sleep state as one of the Five Ways to Reduce Data Center Server Power Consumption. Examine how the power consumed in a server varies based on the workload on that server. A good example was discussed in the WinHEC 2008 session titled "Windows Server Power Management Overview" (Figure 1). Power does decrease with decreasing workload. However, the server's power at idle dropped only to 65% of the power at full workload. Clearly there is great leverage -- an additional 65% reduction from full-load power -- in transitioning the server into a sleep state.

[Figure 1]


Sleep States and Wake on LAN
Early efforts at power management recognized the importance of retaining the ability to perform certain tasks off-hours. Wake on LAN (WoL) provided the ability for network-directed activities, such as backup and maintenance, to be performed on PCs that have been put into a standby or hibernate state at night.

Industry-standard interfaces that enable OS-directed power management are defined by the Advanced Configuration and Power Interface (ACPI). ACPI defines active states and sleep states: for example, Standby (S3) keeps memory powered, while Hibernate (S4) copies RAM to disk. As an interface standard, ACPI does not define implementation.

When entering a sleep state, the system begins a power-down process, turning off power to the processor and other elements of the system -- but not to the network interface when WoL is enabled. PCISIG has defined a separate auxiliary supply, Vaux, which provides 375mA at 3.3V. Given the low power available, one step in the process of invoking a sleep state is to reduce the link speed to the lowest speed possible. A 10GBASE-T PHY will break its 10GE link and re-establish the link at 100BASE-TX. A network adapter will also endeavor to segment functionality to remain under this Vaux power limit.
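For a sense of scale, that auxiliary rail works out to only a bit over a watt. A quick back-of-the-envelope check, assuming the 375 mA / 3.3 V figures above:

```python
# Back-of-the-envelope check of the Vaux budget described above.
vaux_volts = 3.3
vaux_amps = 0.375
print(f"Vaux budget: {vaux_volts * vaux_amps:.2f} W")  # ~1.24 W, "a bit over 1 W"
```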

While in sleep state, the network adapter will monitor the link for a “magic packet”. When the magic packet is received, the process for exiting the sleep state will begin. Part of that process will be to return full power to the network adapter, and to re-establish the link at 10GBASE-T.
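For readers unfamiliar with the format, a WoL magic packet is simply six bytes of 0xFF followed by the adapter's MAC address repeated 16 times. The sketch below shows, in Python, the equivalent of the match the hardware performs; the MAC address is a made-up example.

```python
# Minimal sketch of magic-packet matching (done in hardware by the NIC/PHY).
def is_magic_packet(payload: bytes, mac: str) -> bool:
    """Return True if payload contains a WoL magic packet for the given MAC."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    pattern = b"\xff" * 6 + mac_bytes * 16   # 6x 0xFF, then MAC repeated 16 times
    return pattern in payload

# Build and check a magic packet for a hypothetical adapter MAC.
target = "00:11:22:33:44:55"
frame = b"\xff" * 6 + bytes.fromhex(target.replace(":", "")) * 16
print(is_magic_packet(frame, target))  # True
```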

10GE Never Sleeps?
Is it true that 10GE never sleeps? This may no longer be important. Today's servers typically have three separate network connections: one for the control plane, one for data traffic, and a third for storage. LAN on Motherboard (LOM) 1000BASE-T ports typically drive the control plane, while NICs carry 10GE data traffic and HBAs handle storage. Since WoL is a control plane function, the 10GE links not required during sleep states are powered off.

The converged network changes things. Moving control and storage connections onto the Ethernet port reduces cabling and complexity greatly. However, the constraints of today's technology practically eliminate the ability for 10GE to support sleep states and WoL.

It’s not that 10GE can’t monitor and flag a magic packet. That would be the simple part of the equation. The challenge is the Vaux power limit, and working within the constraints set by ACPI and PCISIG. Today’s 10GE NICs, HBAs and CNAs push up against the 25W upper limit for a PCIe card. When the server shifts to a sleep state and gates off all supplies, Vaux will provide only a bit over 1W. Monitoring for the magic packet at that power is not within the capabilities of today’s 10GE technology.

Partitioning Enables WoL
So how can we get 10GE to work on a 1-watt budget? Optical alternatives with SFP+ offer lower power than 10GBASE-T, but this path offers little hope for WoL.

The typical data path in an SFP+ based system will include the optical module itself (about a watt), an EDC chip (another watt) and the adapter silicon configured to only monitor for the magic packet. The power of the optical module and the EDC chip will be lower in the GE mode, as will the adapter silicon. But even though the adapter silicon only needs the most rudimentary MAC logic to monitor for the magic packet, these are complex devices where power can be dominated by leakage. Count on the MAC itself to require well over 1 watt, even when shifted to 100M speeds. Thus optical SFP+ based systems are ill suited to operate off of Vaux.

10GBASE-T can, in fact, get around this limitation, even when 10GBASE-T systems use the same stable of MAC silicon. The solution is to place some simple MAC monitoring functionality into the PHY itself.

The market has recently seen the debut of triple speed 10GBASE-T PHYs designed to enable the converged network to support WoL, through the following steps:

• Initiate the process to enter the sleep state. Break the 10GBASE-T link and re-establish the link at 100BASE-TX.
• Instruct the transceiver to enter WoL mode. In this mode, all unnecessary elements will be gated off, including SGMII. No external signals will be driven except for the GPIO, which provides the interrupt upon receipt of the magic packet.
• Enable the soft switch on the NIC, which gates the supplies for the PHY. The system Vaux 3.3V supply will now supply the PHY as the other PCIe bus power supplies are removed.
• Monitor the traffic on the line. As an extension of its ability to monitor 10GBASE-T traffic for basic statistics and robustness, the transceiver also watches for a magic packet at 100BASE-TX.
• Upon detection of a magic packet, flag an interrupt on a GPIO pin. A controller on the NIC will need to initiate the process of pulling the server out of its sleep state. Note that the content of the magic packet to flag can be set by the user.

How Green is it?

Let’s look at the impact in a data center by examining one scenario.

• Active power per server at 400W (based on configuration and number of processors, power can range from below 200W to over 600W)
• Power at Idle at 50% of active power, 200W (actual depends on configuration)
• Power at Sleep, 2W, power savings entering sleep state of 198W (Vaux power ‘rounded up’ to 2W)

Given that each server entering a sleep state represents a 198W savings, the "green" factor becomes the ratio of servers in a sleep state to the total number of servers. A data center with 1000 servers, which employs dynamic consolidation to reduce the average number of active servers by 20%, will save about 40 kW by employing 10GBASE-T vs. SFP+.
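A quick sketch of that arithmetic, using only the illustrative numbers from the bullets above:

```python
# Scenario from the bullets above; the values are illustrative, not measured.
servers = 1000
sleep_savings_w = 198          # 200 W at idle minus ~2 W asleep
consolidated = 0.20            # 20% of servers asleep on average

total_savings_kw = servers * consolidated * sleep_savings_w / 1000
print(f"{total_savings_kw:.1f} kW saved")   # ~39.6 kW, i.e. about 40 kW
```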

But what about the fact that 10GBASE-T has higher power than SFP+? A comparison can be made to find the crossover where the power savings from putting a server into a sleep state exceed the higher power (for the time being) of 10GBASE-T over SFP+. Assume a 10GBASE-T PHY at 6W and an EDC chip at 2W. Note that these power values apply to each end of the link, giving a direct-attach copper link an 8W power advantage over the 10GBASE-T link. This 8W power advantage for SFP+ quickly pales compared to the 198W power advantage for a server entering a sleep state (Figure 2). Once dynamic allocation provides for a consolidation of more than 4%, the "greening" of the data center will accelerate and quickly become material.
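The break-even point follows directly from the same figures (again, illustrative numbers only):

```python
# Crossover implied above: the per-link penalty of 10GBASE-T vs. SFP+
# is recovered once the fraction of servers asleep exceeds
# link_penalty / per-server sleep savings.
link_penalty_w = 8.0
sleep_savings_w = 198.0
print(f"break-even consolidation: {link_penalty_w / sleep_savings_w:.1%}")  # ~4%
```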

[Figure 2]


10GBASE-T and Green: Only Getting Better

In the example just cited, the 10GBASE-T PHY extends conventional power management technologies to 10GE. Enabling dynamic consolidation with WoL is an important part of implementing an energy-efficient strategy in the data center. And the importance of adopting the RJ45 and copper cabling only increases as the technology matures.

Moore’s Law applies to 10GBASE-T. Today’s 10GBASE-T transceivers will be improved by successive generations of process shrinks and innovations, with corresponding decreases in power and increases in density. The analysis above and its savings will be eclipsed as 10GBASE-T drops in power to below what SFP+ optical, or even direct-attach copper, can achieve.

Benefits that come from scaling will be augmented by important standards advances such as Energy Efficient Ethernet (IEEE 802.3az), which strives to reduce link power during periods of reduced demand, lowering both link power and overall system power.

10GBASE-T builds on four generations of twisted-pair copper technology, from 10 megabit to 10 gigabit. In each generation, copper interconnect has dominated. The challenges may change, but many factors favor 10GBASE-T, and those factors now include being green.

Friday, January 23, 2009

Selecting the Proper Antenna

Antenna Gain

The ability of an antenna to shape the signal and focus it in a particular direction is called "antenna gain," and it is expressed in terms of how much stronger the signal in the desired direction is compared to the worst possible antenna, one that distributes the signal evenly in all directions (an "isotropic radiator"). Because the gain is referenced to this isotropic radiator, it is abbreviated dBi. The typical omni-directional "stick" antenna is rated at 6-8 dBi, indicating that by redirecting the signal that would have gone straight up or down toward the horizontal, roughly 4 times as much signal is available horizontally. A parabolic reflector design can easily achieve 24 dBi.
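Since dBi is a logarithmic ratio relative to an isotropic radiator, the linear factor is 10^(dBi/10). A quick check of the figures above:

```python
# Convert dBi gain figures to linear multipliers over an isotropic radiator.
def dbi_to_linear(gain_dbi: float) -> float:
    return 10 ** (gain_dbi / 10)

for g in (6, 8, 24):
    print(f"{g} dBi -> {dbi_to_linear(g):.0f}x the isotropic signal")
# 6 dBi -> 4x, 8 dBi -> 6x, 24 dBi -> 251x
```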

The antenna gain factor applies to the received signal as well as to the transmitted signal. By focusing the incoming signal from a particular direction onto the radiating element, the antenna also shields the receiver from interference from noise sources outside of the amplified angle.

Point-to-Point Applications

For point-to-point applications, you generally want to use high-gain directional antennas. The tight beam gives you better signal strength, and it also helps lock out potential sources of noise and interference in the environment.
Remember to adjust your transmit power to comply with FCC regulations in the 2.4GHz band: With a 24 dBi antenna, the maximum transmit power in the USA is 24 dBm. (In the 900 MHz band, the limit is 36 dBm EIRP, so with a 24 dBi antenna, max output power is 12 dBm.) A 24 dBi parabolic grid antenna has a beam width of about 10 degrees both horizontally and vertically. Align the beam carefully, and make sure that the mast does not sway more than 4-5 degrees under maximum wind load.
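As a rough sanity check, the two results above can be reproduced with the sketch below. It assumes the 900 MHz band is a flat 36 dBm EIRP cap, as stated above, and that 2.4 GHz point-to-point links follow the commonly cited FCC rule of 30 dBm at 6 dBi, backed off 1 dB for every 3 dB of additional antenna gain; verify against the current regulations before deployment.

```python
# Hedged sketch of the maximum-transmit-power arithmetic quoted above.
def max_tx_power_dbm(antenna_gain_dbi: float, band: str) -> float:
    if band == "900MHz":
        # Hard 36 dBm EIRP cap (and no more than 30 dBm conducted).
        return min(30.0, 36.0 - antenna_gain_dbi)
    if band == "2.4GHz_ptp":
        # Point-to-point rule: 30 dBm at 6 dBi, minus 1 dB per 3 dB extra gain.
        extra_gain = max(0.0, antenna_gain_dbi - 6.0)
        return 30.0 - extra_gain / 3.0
    raise ValueError("unknown band")

print(max_tx_power_dbm(24, "2.4GHz_ptp"))  # 24.0 dBm
print(max_tx_power_dbm(24, "900MHz"))      # 12.0 dBm
```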

Multi-Point Applications

Multipoint systems have a hub node, and a number of subscriber nodes. Each of the subscriber nodes communicates directly only with its hub, so select directional antennas as for a point-to-point application (see above). For the hub of a multipoint application, the picture is much more complicated. The hub must have a beam open enough to encompass all the subscriber nodes. In most cases, this means an omnidirectional or a sector antenna, which needs to be mounted at an elevated point. It is tempting to select the highest possible gain antenna you can find, but if you are in hilly terrain, that may not be the best solution.
Omnidirectional antennas achieve a high gain by shaping their beam to a flat disk. The higher the gain, the flatter the disk. A 6 dBi omni antenna may have a vertical beamwidth of 16 degrees, but a 10 dBi omni is typically only about 8 degrees, and a 12 dBi omni is only 4 degrees. If the antenna is mounted on a tower in a valley, subscribers on a hillside looking down on the tower may be outside the beam. (This is exacerbated by the fact that the antenna designer typically expects the antenna to be on a tower above the subscribers and therefore may have tilted the beam down, shaping it like a flat cone.)

In a mountainous area, the best location for the hub is often on a mountaintop to one side of the coverage area, with a low-gain directional antenna such as a 12 dBi Yagi, which is likely to have a beamwidth of about 45 degrees both vertically and horizontally. Panel antennas of similar beam shape and gain are also readily available.

Inexperienced system installers often design for the nodes farthest out, and assume that subscriber nodes at shorter distances will work. "They may be outside the core of the beam, but they have less loss due to distance, and that will make up for it." This is not true. Outside the main beam, signal strength does not decrease evenly towards the backside. Rather, the edge of the beam consists of a complex pattern of side lobes with nulls (areas of no signal whatsoever) in between. Usually, the first null is at an angle twice as far from the center of the beam as the 3 dB dropoff point normally counted as the edge of the beam.
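A rough rule-of-thumb classifier based on that observation (illustrative only; the antenna's actual radiation pattern on the spec sheet is authoritative):

```python
# Rule of thumb from the paragraph above: the spec-sheet beamwidth is the full
# angle between the -3 dB points, and the first null typically falls at roughly
# twice the half-beamwidth off boresight.
def subscriber_coverage(offset_deg: float, beamwidth_deg: float) -> str:
    half_beam = beamwidth_deg / 2.0
    if offset_deg <= half_beam:
        return "within the main beam (no worse than -3 dB)"
    if offset_deg < 2.0 * half_beam:
        return "beam edge: signal falling off toward the first null"
    return "at or beyond the first null: side lobes and dead spots likely"

print(subscriber_coverage(4, 10))   # within the main beam
print(subscriber_coverage(12, 10))  # at or beyond the first null
```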

How to Read a Specification Sheet

Because there are many tradeoffs between different performance parameters, it is useful to review the manufacturer's specification sheet before committing to an antenna. The following are some of the data you will find:

Frequency Range - The frequency range that the manufacturer specifies for the antenna is typically larger than the band you intend to operate in. Make sure that the stated specifications are valid for the entire listed range, or at least in the part of it that you will be using. Watch out for a footnote that the stated values are "typical mid band values" unless the stated range is MUCH wider than the band you will be using.

Beamwidth - Two times the angle of deviation from the center of the beam at which the signal strength drops 3 dB below the peak value. The higher the gain, the narrower the beamwidth.

Gain - The increase in signal strength, relative to an isotropic radiator, in the direction where the beam is strongest.

Front/Back ratio - How well the antenna suppresses signal from lobes off the back of the antenna. A high front/back ratio is important for sites with multiple antennas.

Cross polarization discrimination - How well the antenna separates signals at the same frequency with opposite polarizations.

Rated wind velocity/Horizontal thrust at rated wind - Make sure your mounting hardware will handle the load!
