Written by Trevor Dearing
For those of us who grew up praying in the temple of the mainframe, the concept of virtualization is nothing new. Maximizing resources by running virtual machines on a single platform has always made good sense. For years, though, the economics of personal computing pushed us toward a distributed model: the PC was cheap, and if we could exploit all of that desktop resource we could avoid central processing.
Unfortunately this ideal was ruined by reality: so much information distributed in an uncontrolled manner throughout an organization became a security nightmare. Equally, the cost of managing applications and licenses across so many desktops was prohibitive. The development of web technology allowed us to return to a more controlled and centralized model.
Unfortunately most server farms are built from traditional PC technology on a one-application-to-one-machine (or one-to-many) basis, which is wasteful of space, resources and power. Blade technology provides a good first step toward solving this problem, consolidating a number of individual servers into less rack space with lower power consumption. This brings many cost benefits as well as controlling the speed at which we need to extend or renew data centers. In the longer term, however, new virtualization techniques will deliver much better utilization and a further reduction in space and power. This can be implemented on individual servers, on blade technology or, more likely, on the new generation of super servers.
However, there is much more to virtualization than just consolidation.
Virtualization delivers the capability to deploy, move, or clone an application from one platform to another over a network, even while it is running. Live migration of applications at this speed and scale demands new levels of performance, reliability, and standardization from networks. That's why thoughtful planning of the network architecture is the first step toward realizing virtualization's full value.
Fortunately, virtualization's requirements are evolutionary - natural extensions of capabilities that networking solution providers have been improving for years. But organizations planning large-scale virtualization initiatives should take a close look at their networks early in the planning process to ensure that they offer capabilities like these:
Link aggregation and virtual chassis - link aggregation, or trunking, bundles multiple physical links to deliver more bandwidth and higher availability. Long used as a cost-effective way to build internal Ethernet backbones, link aggregation is an attractive alternative to hardware replacement when a network needs more bandwidth to meet new requirements.
Unfortunately, standard IEEE 802.3ad link aggregation works only when all member ports reside on the same switch - a restriction that greatly complicates network topology and introduces delay, complexity, and risk. New network virtualization techniques such as virtual chassis allow a link aggregation group to span two switches, even at separate locations. The result is more bandwidth where it's needed, freed from the constraints of physical switch locations - an ideal complement to server virtualization, as the sketch below illustrates.
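To see how an aggregated bundle scales bandwidth, here is a minimal, illustrative Python sketch of flow-based hashing, the mechanism 802.3ad-style aggregation typically uses to spread traffic across member links. The hash fields and link count are assumptions for illustration, not taken from any particular switch or standard profile:

```python
import hashlib

def pick_member_link(src_ip, dst_ip, src_port, dst_port, num_links):
    """Map a flow's 4-tuple to one member link of a link aggregation group.

    Keeping every packet of a flow on the same link preserves packet
    ordering; different flows hash to different links, spreading load
    across the bundle. The choice of hash fields here is an assumption --
    real switches offer several hashing modes (L2, L3, L3+L4).
    """
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Example: spread a handful of flows across a 4-link bundle.
flows = [
    ("10.0.0.1", "10.0.1.10", 49152, 443),
    ("10.0.0.2", "10.0.1.10", 49153, 443),
    ("10.0.0.3", "10.0.1.11", 49154, 8080),
    ("10.0.0.4", "10.0.1.12", 49155, 22),
]
for flow in flows:
    print(flow, "-> link", pick_member_link(*flow, num_links=4))
```

Note that a single flow never runs faster than one member link: aggregation grows total bandwidth, not per-flow bandwidth, which is one reason dense high-speed ports at the core still matter.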
Wire-rate high-density core switching - at the data-center core, server virtualization raises bandwidth demands and tightens latency requirements. Wire-rate network performance allows sustained and bursty traffic to be processed without dropped packets, avoiding the TCP retransmissions that inflate application latency.
Architecture counts most at the core, and dense wire-rate 10GbE ports can help weed out multiple layers of switching - in all but the largest enterprise networks, they can even eliminate the aggregation layer entirely. Simplifying the core cuts latency, complexity, and cost, and improves reliability: all key elements for a successful virtualization initiative.
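A rough back-of-the-envelope calculation shows why dropped packets hurt so much more than forwarding delay. The figures below are illustrative assumptions (MTU-sized frames, a ~200 ms minimum TCP retransmission timeout, which is a common default), not measurements from any particular platform:

```python
# Illustrative numbers only -- link speeds and the TCP minimum
# retransmission timeout are assumptions for the arithmetic.

FRAME_BYTES = 1500          # typical Ethernet MTU-sized frame
LINK_1G = 1_000_000_000     # bits per second
LINK_10G = 10_000_000_000   # bits per second
TCP_MIN_RTO_S = 0.2         # ~200 ms minimum retransmission timeout

def serialization_delay_us(frame_bytes, link_bps):
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1_000_000

print(f"1GbE serialization delay:  {serialization_delay_us(FRAME_BYTES, LINK_1G):.1f} us")
print(f"10GbE serialization delay: {serialization_delay_us(FRAME_BYTES, LINK_10G):.1f} us")

# One dropped packet that times out costs roughly the minimum RTO --
# orders of magnitude more than any per-hop forwarding delay.
drop_penalty_us = TCP_MIN_RTO_S * 1_000_000
print(f"Cost of one RTO-driven retransmission: {drop_penalty_us:.0f} us")
```

Per-hop delays are measured in microseconds; a single retransmission costs hundreds of thousands. Keeping the core at wire rate, with fewer layers to traverse, attacks both numbers at once.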
Security without latency - virtualization providers have done an excellent job of addressing user concerns about security - most users now see virtual machines as no less secure than the physical machines on which they run. But live migration of virtual machines and the applications they carry creates new network security tradeoffs. Firewalls that protect sensitive network legs or sub-networks may introduce latencies that would be invisible to a physical server yet can cripple a running application on a migrating virtual machine. And the risk of a failed migration creates an incentive to remove that protection altogether, with obvious consequences.
Here, there is simply no substitute for performance. Rather than play a dangerous game of balancing availability against security to defer a hardware purchase, it's time to upgrade critical firewalls, focusing on latency and throughput metrics - the rough arithmetic below shows why throughput matters as much as latency.
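As a simple illustration of the tradeoff, consider copying a virtual machine's memory image through an in-path firewall during live migration. The VM size and throughput figures below are assumptions chosen to show the shape of the problem, not vendor numbers:

```python
# Illustrative arithmetic only -- VM size and effective throughput
# figures are assumptions, not measurements.

VM_MEMORY_GB = 16
GB_BITS = 8 * 1024**3  # bits per gigabyte (GiB)

def copy_time_seconds(vm_memory_gb, effective_bps):
    """Time to copy a VM's memory image once at a given effective rate."""
    return vm_memory_gb * GB_BITS / effective_bps

for label, bps in [("10GbE line rate", 10_000_000_000),
                   ("firewall at 2 Gbps", 2_000_000_000),
                   ("firewall at 500 Mbps", 500_000_000)]:
    print(f"{label:>20}: {copy_time_seconds(VM_MEMORY_GB, bps):6.1f} s")
```

A migration that could complete in seconds at line rate stretches into minutes behind an underpowered firewall - long enough to tempt administrators to route around the protection entirely.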
Network operating environment consistency - server administrators rarely think about the operating systems running on network infrastructure, but they should. Most data-center networks today run between six and ten different network operating systems, adding complexity, inconsistency, and delay when qualifying new features.
Optimizing network performance for virtual environments is difficult enough without the added challenge of a different operating system on every switch, router, VPN appliance and firewall. When you standardize on a single operating system (not merely a single OS "family") across your network hardware, you get faster project turnaround, better network performance, and more reliable operation of applications running in virtual environments.
Virtualization - and beyond
Virtualization is a great reason to upgrade the performance and reliability of corporate networks - but not the only one. Up-to-date, optimized networks deliver business benefits that not only support the latest technologies, but unlock your organization’s ability to:
- stay in the race - with networks that deliver basic IT services with utility-grade reliability, to support business users, satisfy regulators, and delight customers
- outpace the competition - with technologies that improve productivity, cut costs, and lock your competitors in a never-ending struggle just to keep up
- change the game - using innovative technologies to craft new services that redefine your competitive landscape
Your organization's decision to adopt virtualization signals its intention to compete - and win - using the most advanced technology available. But even a powerful new approach like virtualization doesn't perform in a vacuum. Careful consideration of the bandwidth, latency, security and consistency of your network environment will help you overcome hurdles and delays on the way to your virtualization goals - creating a network that supports those goals, maintains your quality-of-service and availability commitments, and exceeds the most demanding requirements of your business future.