Server, storage, and network virtualization increase flexibility, mask complexity, and reduce costs for IT departments.
Virtualization is getting to be long in the tooth, at least in terms of the accelerated universe in which IT now operates. Nonetheless, the term is still somewhat confusing. That's not surprising because there isn't just one variety. Thus, when someone tries to talk to you about or sell you virtualization, your first question should be, "Which type?"
There are at least three forms of virtualization: server, storage, and network. And, considering IT vendors' occasional habit of morphing the meaning of terms to suit their offerings, there may already be additional ones and still more to come in the future.
Server Virtualization
For most IT professionals, the term "virtualization" normally conjures up notions of virtual servers. With this type of virtualization, a single physical server runs multiple virtual servers, each of which is logically isolated from the others. This isolation means that if one virtual server crashes, it usually won't bring down the other virtual servers running on the same hardware.
Depending on the virtualization technology, the different virtual servers might each run a different operating system. For example, IBM's PowerVM virtualization technologies can run multiple IBM i, AIX, and Linux virtual servers simultaneously on a single physical Power Systems server.
Hard numbers are difficult to come by, but anecdotal evidence suggests that, although many virtualization technologies can run different operating systems simultaneously, this is not happening frequently, at least not in the Power Systems arena.
During an interview at the recent COMMON Annual Meeting and Exposition, Bill Hammond, a product-marketing director at Vision Solutions, confirmed that his company is seeing little cross-platform virtualization. He speculated that this is largely due to the fact that the IT pillars in organizations are still quite segregated, with each technology group asserting firm ownership over its own systems.
Viewed at a high level, PowerVM may seem like nothing new to IT veterans who have been around long enough to still call the platform that IBM now brands Power Systems the "AS/400." Many of the core features of PowerVM have been available through OS/400, i5/OS, and now IBM i LPARs for quite some time. However, PowerVM adds to and advances those features.
Consider these key PowerVM features:
- The PowerVM hypervisor supports multiple IBM i, AIX, and Linux virtual servers on a single system.
- Micro-partitioning makes it possible to run up to 10 partitions per processor core.
- Dynamic Logical Partitioning (DLPAR) allows processor, memory, and I/O resources to be moved dynamically between partitions, making it possible to quickly adapt to varying workloads on each virtual server. Individual resources can be dedicated to a partition, or they can be allocated to partitions with a granularity as fine as 1/100th of a processor.
- Multiple shared-processor pools (MSPPs) make it possible to automatically balance processor usage among partitions. This feature also allows you to cap the processor resources assigned to a group of partitions, which may allow you to reduce software licensing fees.
- The Virtual I/O Server (VIOS) virtualizes disk, tape, and Ethernet resources, thereby simplifying management and reducing system costs.
- The Integrated Virtualization Manager (IVM) is a point-and-click, browser-based tool that can be used to create and manage partitions. The IVM is included with all entry-level and midrange Power Systems. The Hardware Management Console (HMC) provides a wider feature set for creating and managing virtual systems. The HMC is packaged in an external tower or a rack-mounted server and can be used to provide a central point of control for virtual servers running on multiple physical Power Systems servers.
It has been suggested that a future direction for IBM is to merge the capabilities of the IVM and the HMC, but IBM has neither announced a planned release date nor even officially confirmed this.
- Live Partition Mobility allows active AIX and Linux partitions to be moved to another physical system without interrupting user activity. This feature can be used to eliminate downtime when upgrading or performing maintenance on a physical server. Live Partition Mobility is not currently available for IBM i partitions.
- AIX Workload Partitions (WPARs) allow running applications to be moved between AIX instances, whether those run on separate Power Systems servers or in separate AIX partitions.
- Active Memory Sharing optimizes memory usage by autonomously flowing memory resources from one partition to another as appropriate.
- With the introduction of IBM i 7.1, IBM i 6.1 can now host IBM i 7.1 partitions and vice versa.
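The micro-partitioning rules above (up to 10 partitions per processor core, entitlements allocated in 1/100-core increments) and DLPAR-style capacity moves can be illustrated with a minimal sketch. The class, partition names, and the 0.10-core minimum used to derive the 10-partitions-per-core limit are illustrative assumptions, not part of any IBM API:

```python
# Hypothetical sketch of PowerVM-style micro-partitioning rules:
# entitlements come in 0.01-core units, and a minimum entitlement of
# 0.10 core implies at most 10 partitions per physical core.

class ProcessorPool:
    MIN_ENTITLEMENT = 0.10  # implies <= 10 partitions per core

    def __init__(self, cores):
        self.cores = cores
        self.allocations = {}  # partition name -> entitled capacity

    def allocated(self):
        return round(sum(self.allocations.values()), 2)

    def create_partition(self, name, entitlement):
        ent = round(entitlement, 2)  # snap to 0.01-core granularity
        if ent < self.MIN_ENTITLEMENT:
            raise ValueError(f"{name}: entitlement below 0.10 of a core")
        if self.allocated() + ent > self.cores:
            raise ValueError(f"{name}: pool of {self.cores} cores exhausted")
        self.allocations[name] = ent

    def move_capacity(self, src, dst, amount):
        """DLPAR-style dynamic move of capacity between running partitions."""
        amount = round(amount, 2)
        if self.allocations[src] - amount < self.MIN_ENTITLEMENT:
            raise ValueError("move would drop source below its minimum")
        self.allocations[src] = round(self.allocations[src] - amount, 2)
        self.allocations[dst] = round(self.allocations[dst] + amount, 2)

pool = ProcessorPool(cores=2)
pool.create_partition("ibmi_prod", 1.50)
pool.create_partition("linux_web", 0.25)
pool.create_partition("aix_test", 0.25)
pool.move_capacity("ibmi_prod", "linux_web", 0.40)  # react to a web spike
print(pool.allocations)  # ibmi_prod 1.1, linux_web 0.65, aix_test 0.25
```

The point of the sketch is the shape of the constraint, not the mechanics: total entitlement can never exceed the physical cores, yet capacity can flow between partitions on the fly, which is what lets one physical server absorb shifting workloads.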
Server Virtualization Pros
In today's environmentally conscious world, the "greenness" of server virtualization is often cited. Because a single, more powerful server typically consumes less electricity and requires less cooling, the carbon footprint of virtual servers consolidated on one physical system is generally much smaller than that of the multiple physical servers they replace.
It is an open question, but one wonders whether there would be as big a movement to go green in this way if doing so didn't also serve to reduce power costs. Cost reductions will become an increasingly strong argument in favor of server consolidation via virtualization as electricity prices rise.
In addition to lower power and cooling costs, the cost of a single higher-powered physical server running multiple virtual servers is normally considerably lower than the combined costs of the multiple pieces of hardware that it replaces.
The ability to quickly—even automatically—reallocate resources among virtual servers makes virtualization a good way to perform load balancing. Typically, workloads peak at different times for different applications. When each application is run on its own physical server, each of those servers must be sized to handle the full peak load of the application running on it. When, instead, those servers are virtualized on a single physical system, the physical server can be sized smaller than the sum of the peaks because the workload peaks of some applications will overlap the valleys of other applications.
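The peak-overlap argument above is easy to see with some arithmetic. The workload figures below are invented for illustration; the point is that the peak of the combined load is smaller than the sum of the individual peaks:

```python
# Hypothetical hourly CPU demand (in cores) for three applications whose
# peaks fall at different times of day.
day_batch = [6, 6, 1, 1, 1, 1]  # heavy overnight batch window
web_store = [1, 1, 5, 5, 2, 2]  # daytime web traffic
reporting = [1, 1, 1, 2, 5, 5]  # end-of-day reporting runs

# Dedicated servers: each box must be sized for its own application's peak.
dedicated_capacity = max(day_batch) + max(web_store) + max(reporting)

# One virtualized server: size for the peak of the *combined* load instead.
combined = [sum(loads) for loads in zip(day_batch, web_store, reporting)]
consolidated_capacity = max(combined)

print(dedicated_capacity)     # 16 cores spread across three boxes
print(consolidated_capacity)  # 8 cores in one box
```

Because one application's peak lands in another's valley, the consolidated server here needs half the total capacity of the three dedicated servers it replaces.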
Virtualization can also result in a reduction in labor costs because it is generally easier to manage, monitor, and secure a single physical server hosting multiple virtual servers than it is to perform those tasks on multiple physical servers.
Virtualization can also reduce the cost and the time required to implement new applications that require their own servers, since it is not necessary to buy, install, and configure new hardware.
Server Virtualization Cons
Virtualization is not cost-free. The most obvious cost is software licensing for PowerVM, which is levied per core. The price of PowerVM depends on whether you buy the Express, Standard, or Enterprise edition.
Another thing to consider is that virtualization is yet one more technology that must be installed, configured, and managed. Consequently, the implementation of virtualization incurs learning costs. The required skills development is not overly burdensome, but it is a factor.
In one of his sessions at COMMON that discussed the interaction of virtualization and disaster recovery on the AIX platform, Ferenc Gyurcsan, a senior solutions consultant with Vision Solutions, mentioned a cost of virtualization that, despite being obvious when one stops to think about it, is often overlooked. Virtualization introduces a management and control layer between the operating system and a system's hardware. The amount of resources used by the virtualization layer, and by some of the virtualization options listed above, is typically very small relative to the resources employed by the other software running on a physical server. Nonetheless, if a physical server is sized parsimoniously, taking into account solely the processing loads of the individual servers virtualized on it, performance may not be adequate to satisfy user requirements. This might be an issue particularly in high-usage data centers or high-performance computing environments.
The benefits of virtualization usually greatly outweigh the cost of accommodating the processing requirements of the virtualization technology, but this cost must be taken into consideration nonetheless.
Gyurcsan mentioned another factor that organizations should consider. When they consolidate servers, they are putting all of their eggs in one basket. When each server runs on its own independent physical system, possibly geographically distant from the organization's other physical servers, if there is a hardware or software failure, a disaster strikes, or the server has to be shut down for maintenance, the organization will lose only the business functionality performed on that one server.
In contrast, after consolidating servers, if a planned or unplanned event brings down the physical server, all of the virtual servers running on it will stop running too. In that case, rather than just one subset of the business functionality stopping, the entire business will be brought to a halt.
Because of the greater role that the single physical server plays, it becomes significantly more important to consider options such as high availability and disaster recovery solutions to ensure that system downtime does not bring the business to its knees.
Storage Virtualization
The Storage Networking Industry Association (SNIA) defines storage virtualization as "The act of integrating one or more (back end) services or functions with additional (front end) functionality for the purpose of providing useful abstractions. Typically virtualization hides some of the back-end complexity, or adds or integrates new functionality with existing back end services. Examples of virtualization are the aggregation of multiple instances of a service into one virtualized service, or to add security to an otherwise insecure service. Virtualization can be nested or applied to multiple layers of a system."
The terms "Storage Area Network" (SAN) and "storage virtualization" are often used interchangeably, but they are not synonymous. A March 2006 white paper prepared by IDC and sponsored by Hitachi describes the differences this way: "A SAN uses a network to separate the connection between servers and storage from physical limitations, while storage virtualization creates a logical separation via software (including controller microcode). To put it another way, SAN integrates storage networks while virtualization integrates storage management."
In the Power Systems world, storage virtualization is achieved using a Virtual I/O Server (VIOS) that "owns" the physical storage devices, likely through a SAN. The VIOS is an independent server that runs in its own partition.
Storage Virtualization Pros
Flexibility, and the resulting opportunities for cost-savings, is one of the major benefits of storage virtualization. Because the storage hardware is abstracted, it appears as a single logical unit to application programs, no matter how complex and fragmented the underlying storage hardware may be. As a result, organizations are free to choose storage devices to suit the needs of their data and their budgets.
By providing a common interface for managing the whole SAN, storage virtualization can also simplify and, therefore, reduce the workload of managing an organization's storage.
Because virtualized storage looks like a single homogeneous unit to end-user applications, new storage devices can be easily added to the SAN to increase storage capacity without affecting user applications. This makes virtualized storage highly scalable.
Storage Virtualization Cons
During one of his sessions at COMMON, Gyurcsan stated that, like server virtualization, the result of storage virtualization may be that you place all of your eggs in one basket. Using a VIOS creates a single point of failure that can shut down access to all of your organization's data rather than just the data on a single physical disk drive.
One way to avoid this vulnerability is to configure multiple VIOSs. Then, a secondary VIOS can automatically take over when the primary VIOS is unavailable.
Figure 1 depicts two VIOSs attached to a single SAN. This protects against VIOS downtime, but it will not protect against a SAN failure or loss of data within the SAN. To eliminate this risk as well, you can set up two separate SANs, each owned by a different VIOS, and then use hardware-based mirroring or software-based replication to keep the two synchronized.
Figure 1: A Dual VIOS configuration looks like this.
(Source: Ferenc Gyurcsan, Vision Solutions)
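The failover behavior of a dual-VIOS configuration can be sketched in a few lines. The class and partition names are hypothetical and the logic is deliberately simplified (real multipath I/O is handled by the hypervisor and client operating system, not application code):

```python
# Hypothetical sketch of the dual-VIOS idea: a client partition has two
# I/O paths and uses the secondary VIOS when the primary is down.

class VIOS:
    def __init__(self, name):
        self.name = name
        self.up = True

class ClientLPAR:
    def __init__(self, primary, secondary):
        self.paths = [primary, secondary]  # preferred order

    def storage_path(self):
        # Use the first available VIOS; with no VIOS, storage is lost,
        # which is the single point of failure Gyurcsan warns about.
        for vios in self.paths:
            if vios.up:
                return vios.name
        raise RuntimeError("no VIOS available: storage access lost")

vios1, vios2 = VIOS("vios1"), VIOS("vios2")
client = ClientLPAR(vios1, vios2)

print(client.storage_path())  # "vios1" while the primary is up
vios1.up = False              # e.g., primary shut down for maintenance
print(client.storage_path())  # "vios2": the secondary takes over
```

Note that the sketch still fails when both VIOSs are down, which is why protecting the SAN itself (the second configuration described above) matters as well.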
Gyurcsan suggested that virtualized storage requires more thorough change management practices across the IT organization with regard to storage and storage area network management, in order to ensure that changes introduced in one environment do not inadvertently affect other environments.
Network Virtualization
Network virtualization abstracts physical network resources into a common logical networking platform that can be shared by different virtual servers. As with storage virtualization, network virtualization on IBM Power Systems is enabled by the VIOS. Rather than each virtual server owning its own physical network adapter(s), the network adapters are owned by the VIOS. Client LPARs then connect to the network through the VIOS. This can result in a significantly smaller I/O footprint and better resource utilization.
As with storage, a secondary VIOS can protect the network path: when the Shared Ethernet Adapter (SEA) Failover feature is configured, the secondary VIOS can automatically take over when the primary VIOS is unavailable.
Because most of the pros and cons of storage virtualization are attributes of the VIOS itself, they apply to network virtualization as well.
The Future of Virtualization
Predicting technology trends over the long term is invariably fraught with gross inaccuracies. However, it is almost certainly safe to say that, for all three types of virtualization, vendors will continue to make their technologies easier to implement and use. Future virtualization technology versions will likely also offer higher performance and more autonomous self-healing and self-tuning features. All of those advances will probably continue to reduce the total cost of ownership of virtualization technologies.
As companies become more comfortable with virtualization and the total cost of ownership falls further, virtualization will likely change the way organizations implement their IT solutions, and it will likely also make new businesses possible.
Cloud computing already benefits from virtualization. Both Hammond and Gyurcsan expect that those benefits will continue to increase in the future. They see virtualization as a way to quickly add new applications to the cloud, without having to immediately add new hardware. In addition, virtualization makes it much easier to perform load balancing in the cloud and to quickly scale applications and storage in the cloud.
Hammond also thinks that virtualization may make new business opportunities viable. For example, he sees an opportunity for more companies to sell hosted disaster recovery offerings.
Companies that provide disaster recovery sites own a fixed amount of hardware, or their clients may own the hardware and store it in the third-party site. Without virtualization, if multiple companies simultaneously declare a disaster—which is likely in the event of a major natural disaster—the last organizations to request support from their disaster recovery site vendor may find that they are out of luck.
Owning the hardware and paying to situate it at the third-party recovery site gets around the problem of being unable to use the hardware if too many companies declare a disaster simultaneously. The company that owns the hardware has exclusive use of it. However, this comes at a price, which may be difficult to justify considering that disasters rarely occur.
In contrast, if the third-party's hardware runs virtualized servers, additional companies can be supported at the recovery site during an emergency by simply adding virtual servers to the available hardware. Customers might have to throttle back their user applications and forgo some non-essential applications during the emergency if the hardware does not have sufficient capacity, but they won't be shut down completely.
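The throttling scenario above amounts to proportional sharing of a fixed pool. A minimal sketch, using invented customer names and capacity figures, shows how every declared customer can stay running, just at reduced capacity, when demand exceeds the recovery hardware:

```python
# Hypothetical sketch: a DR host with fixed capacity admits every customer
# that declares a disaster, scaling each allocation down proportionally
# when total demand exceeds what the hardware can supply.

def dr_allocations(capacity, demands):
    total = sum(demands.values())
    if total <= capacity:
        return dict(demands)  # everyone runs at full requested capacity
    scale = capacity / total  # throttle proportionally; nobody is shut out
    return {name: round(need * scale, 2) for name, need in demands.items()}

demands = {"acme": 8, "globex": 6, "initech": 6}  # cores requested
print(dr_allocations(capacity=24, demands=demands))  # full allocations
print(dr_allocations(capacity=10, demands=demands))  # throttled shares
```

Without virtualization, the 10-core case would instead mean turning away whichever customers declared last; with it, the shortfall is spread across everyone as degraded, but not absent, service.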
Hammond suggests that this will lower the barriers to entry for new recovery site vendors. They can start off small, buying only a minimal amount of hardware, while still being able to promise to serve multiple customers in the event of a disaster.
The state of virtualization is already at the point where every organization should at least evaluate it to consider whether it can reduce costs while increasing operational flexibility, availability, and functionality within the organization. And, as the technologies advance, virtualization will become yet more advantageous in the future.