Thursday, February 26, 2009

Native PCI Express I/O Virtualization in the Data Center

Written by Marek Piekarski

I/O virtualization based on PCI Express® – the standard I/O interconnect in today's servers – is an emerging technology that can address the key issues limiting the growth of the data center: power and manageability.

Data Centers and Commodity Servers
Commodity servers today trace their ancestry – and unfortunately their architecture – back to the humble personal computer (PC) of the early 1980s. A quarter of a century later, the ubiquity of the PC has changed the shape of enterprise computing – volume servers today are effectively PCs, albeit with far more powerful CPUs, memory and I/O devices. We now have the acquisition cost and scalability advantages that come with the high volumes of the PC market, but the business demands on enterprise servers remain much the same as they were – reliability, storage capacity and bandwidth, networking and connectivity – demands that the PC was never intended to address.

Over the last decade the demand for increased performance has been answered by simply providing more and more hardware, but that trend is no longer sustainable. In particular, power and management have become the dominant costs of the data center. Adding more hardware is no longer a viable path to growth.

Server architecture – what is I/O?

I/O can be defined as all the components and capabilities which provide the CPU – and ultimately the business application – with data from the outside world, and allow it to communicate with other computers, storage and clients.

The I/O in a typical server consists of Ethernet network adaptors (NICs), which allow it to communicate with clients and other computers; networked storage adaptors (HBAs), which provide connectivity into shared storage pools; and local disk storage (DAS) for non-volatile storage of local data, operating systems (OSs) and server “state”. I/O also includes all the cables and networking infrastructure required to interconnect the many servers in a typical data center. Each server has its own private set of I/O components. I/O today can account for as much as half the cost of the server hardware.

Fig 1: Server I/O

I/O Virtualization

Data centers in recent years have been turning to a variety of “virtualization” technologies to ensure that their capital assets are used efficiently. Virtualization is the concept of separating a “function” from the underlying physical hardware. This allows the physical hardware to be pooled and shared across multiple applications, increasing its utilization and its capital efficiency, while maintaining the standard execution model for applications.

Virtualization consists of three distinct steps:
1. Separation of resources – providing management independence.
2. Consolidation into pools – increasing utilization and saving cost, power and space.
3. Virtualization – emulating the original functions as “virtual” functions to minimize software disruption.
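The three steps can be sketched in a toy model. All class and device names below are hypothetical – this is a conceptual illustration of separation, consolidation and virtualization, not any real IOV API:

```python
# Toy model of the three virtualization steps. All names are hypothetical.

class PhysicalNIC:
    """Step 1: separation - a physical adaptor no longer owned by one server."""
    def __init__(self, name):
        self.name = name

class VirtualNIC:
    """Step 3: virtualization - emulates a dedicated NIC, so server
    software sees the same function it always did."""
    def __init__(self, server, backing):
        self.server = server
        self.backing = backing  # shared physical device behind the scenes

class IOPool:
    """Step 2: consolidation - physical devices pooled and shared."""
    def __init__(self, nics):
        self.nics = nics
        self._next = 0

    def attach(self, server):
        # Hand each server a virtual NIC backed by the shared pool
        # (round-robin here purely for illustration).
        nic = self.nics[self._next % len(self.nics)]
        self._next += 1
        return VirtualNIC(server, nic)

pool = IOPool([PhysicalNIC("nic0"), PhysicalNIC("nic1")])
vnics = [pool.attach(f"server{i}") for i in range(4)]
# Four servers now share two physical NICs, each through its own virtual device.
```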

I/O Virtualization (IOV) follows the same concept. Instead of providing each server with dedicated adaptors, cables, network ports and disks, IOV separates the physical I/O from the servers, leaving them as highly compact and space efficient pure compute resources such as 1U servers or server blades.

Fig 2: CPU-I/O Separation

The physical I/O from multiple servers can now be consolidated into an “IOV Appliance”.
Because the I/O components are now shared across many servers, they can be better utilized, and the number of components is significantly reduced when compared to a non-virtualized system. The system becomes more cost, space and power efficient, more reliable, and easier to manage.
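As a rough sketch of the consolidation arithmetic – the utilization figures below are assumptions chosen for illustration, not measured values:

```python
import math

# Hypothetical example: 16 servers, each with a dedicated NIC that is
# only 15% utilized on average, consolidated behind a shared pool that
# we are willing to run at 60% utilization.
servers = 16
util_dedicated = 0.15   # assumed per-adaptor utilization today
target_util = 0.60      # assumed utilization after consolidation

total_demand = servers * util_dedicated              # 2.4 adaptors' worth of traffic
shared_nics = math.ceil(total_demand / target_util)  # 4 shared adaptors

print(f"{servers} dedicated NICs -> {shared_nics} shared NICs")
```

Under these assumptions, sixteen dedicated adaptors collapse into four shared ones, with corresponding savings in cost, space and power.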

Fig 3: I/O Consolidation

The final step is to create “virtual” I/O devices in the servers which look to the server software exactly the same as the original physical I/O devices. This functional transparency preserves the end-users’ huge investment in software: applications, OSs, drivers and management tools.

Fig 4: I/O Virtualization

I/O Virtualization Approaches for Commodity Servers

I/O Virtualization is not new. Like many technologies new to the PC and volume server, it has long existed in mainframes and high-end servers, and its value is well understood. The challenge has been to bring that value to the high-volume, low-cost commodity server market at an appropriate price point, without requiring major disruption to end users’ software, processes and infrastructure.

A number of companies have, over recent years, introduced products delivering I/O virtualization based on Infiniband. Although they have delivered many of the advantages of IOV – particularly in data centers which already use Infiniband – their use of Infiniband has limited their attractiveness to the broader market. The cost, complexity, and disruption of introducing new Infiniband software, networks and processes have negated the value of IOV.

The default I/O interconnect in volume servers is PCI Express. The PCI-SIG has recently defined a number of extensions to PCI Express to support I/O virtualization capabilities both within a single server (SingleRoot-IOV) and across multiple servers (MultiRoot-IOV). However, these extensions are not fully transparent with respect to standard PCI Express and require new modified I/O devices and drivers. The requirement for an “ecosystem of components” means that it is likely to be some years before we see MR-IOV, in particular, as a standard capability in a significant range of I/O devices.

Another approach is to virtualize standard PCI Express I/O devices and drivers available in volume today by adding the virtualization capability into the PCI Express fabric rather than into the devices. This has the advantage of exploiting the existing standard hardware and software and being extremely transparent and non-disruptive. Because the virtualization capability is contained in the PCI Express fabric, neither the I/O device nor any of the servers’ software, firmware or hardware needs to change. VirtenSys calls this new approach “Native PCIe Virtualization”.
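A sketch of the idea, with illustrative names only – this models the mapping concept, not real PCIe transaction routing:

```python
# In fabric-based ("native PCIe") virtualization, the virtual-to-physical
# mapping lives in the switch fabric, so neither the I/O device nor the
# server's driver stack has to change. Names here are hypothetical.

class VirtualizingFabric:
    def __init__(self):
        # (server, virtual device) -> shared physical device
        self.table = {}

    def map_device(self, server, virtual_dev, physical_dev):
        self.table[(server, virtual_dev)] = physical_dev

    def route(self, server, virtual_dev, transaction):
        # The server issues a standard PCIe transaction to what it
        # believes is its own dedicated device; the fabric redirects
        # it to the shared physical device.
        return (self.table[(server, virtual_dev)], transaction)

fabric = VirtualizingFabric()
fabric.map_device("server1", "vNIC0", "shared-NIC-A")
fabric.map_device("server2", "vNIC0", "shared-NIC-A")  # same physical NIC
```

Because the table sits in the fabric, each server keeps running its unmodified driver against what it sees as a private device.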

Fig 5: Comparison of Infiniband IOV, PCI MR-IOV and Native PCIe IOV

Key Features and Benefits of Native PCIe IOV

Hardware cost reduction through consolidation
IOV reduces hardware cost by improving on the poor utilization of I/O in most servers today. Native PCIe Virtualization contributes to this cost saving by reusing the existing high-volume, low-cost PCIe components and by adding very little in the way of new components.

Power reduction
Increasing I/O utilization through consolidation minimizes not only acquisition cost but also the amount of I/O hardware required, and hence the power dissipation of the data center.

Management simplification
I/O virtualization changes server configuration from a hands-on, lights-on manual operation involving installation of adaptors, cables and switches to a software operation suitable for remote or automated management. By removing humans from the data center and providing automated validation of configuration changes, data center availability is enhanced. It is estimated that 40 percent of data center outages are due to “human error”.
Dynamic configuration – agility
Businesses today need to adapt quickly to change if they wish to prosper. Their IT infrastructure also needs to be agile to support rapidly changing workloads and new applications. I/O virtualization allows servers to be dynamically configured to meet the processing, storage and I/O requirements of new applications in seconds rather than days.
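Conceptually, reconfiguration becomes a management-plane update rather than a physical re-cabling job – roughly (state and names below are purely illustrative):

```python
# Moving an HBA between servers becomes a table update in software,
# not a trip to the rack. Assignments and names are hypothetical.

assignments = {"hba0": "server1", "hba1": "server2"}

def migrate(resource, new_server, table):
    """Reassign a shared I/O resource to a different server."""
    old_server = table[resource]
    table[resource] = new_server
    return old_server, new_server

migrate("hba0", "server3", assignments)
# assignments: {"hba0": "server3", "hba1": "server2"}
```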

Ease of deployment and non-disruptive integration

Native PCIe IOV technology has been designed specifically to avoid any disruption of existing software, hardware or operational models in data centers. Native PCIe IOV works with – and is invisible to – existing volume servers, I/O adaptors, management tools, OSs and drivers, making its deployment in the data center extremely straightforward.

Rapid and cost-effective adoption of new CPU and I/O technologies
CPU and I/O technologies evolve at different rates. New, more powerful and more cost- and power-efficient CPUs typically appear every nine months, while new I/O technology generations come only every three to five years. In particular, the performance-per-watt of new CPUs is significantly higher than that of CPUs from only a few years ago. Separating the I/O from the compute resources in servers (CPU and memory) allows new power-efficient CPUs to be introduced quickly without disrupting the I/O subsystems. Similarly, new I/O technologies can be introduced as soon as they are available. Since these new high-cost, high-performance I/O adaptors are shared across multiple servers, their introduction cost can be significantly smoothed when compared with today’s deployment model.


I/O virtualization is an innovation that allows I/O to be separated, consolidated and virtualized away from the physical confines of a server enclosure.

Of the various approaches described, Infiniband-based IOV is most suitable for installations which already have an Infiniband infrastructure and whose servers already use Infiniband software. For the majority of data centers without Infiniband, IOV based on the standard I/O interconnect, PCI Express, provides a much more acceptable, low-power, low-cost solution. In particular, Native PCIe Virtualization provides today all the benefits of IOV without requiring new I/O devices, drivers, server hardware or software.

VirtenSys I/O Virtualization Switches improve I/O utilization to greater than 80 percent, enhance throughput, and reduce I/O cost and power consumption by more than 60 percent. The products also enhance and simplify data center management by dynamically allocating, sharing, and migrating I/O resources among servers without physical re-configuration or human intervention, dramatically reducing Operational Expense (OpEx).

