The System i has been at the forefront of virtualization for a long time. Now it's time for the platform to adopt some new virtualization tricks.
You use virtualization every day. Virtualization is one of the core concepts of the System i: the use of a virtual instruction set, the Technology Independent Machine Interface (TIMI), originated with the System/38. Other virtualization concepts, such as single-level storage, have shaped the platform from its conception to the present and will continue to shape it.
As the platform has matured, expanded, and adapted to influences and requirements from outside technologies, so have its virtualization capabilities. In the past few years, virtualization, previously the mainstay of midrange and large systems, has fast become a common sight in the consumer market. The aggressive adoption of virtualization on x86 is driving innovation in today's virtualization arena, and the new virtualization features being created for x86 today will become tools of the System i administrator tomorrow.
To understand what that future might hold for your System i environment, let's take a look at some basic facts about virtualization.
What Is the Matrix? Control.
Virtualization has two main functions. The first is to control the use of a computing resource. That resource can be a disk drive, a communications I/O device, an operating system image, or a whole PC with hardware and OS. For each of those resources, virtualization can allow or disallow usage, making the resource available or unavailable to a requestor; or, instead of providing only a binary choice, it can throttle usage to some degree. Limiting resource usage is not an end in itself, of course. It is employed to permit sharing: a resource can be shared among other resources, among processes or users simultaneously, or over time.
The other main function of virtualization is to provide an alternative, customizable interface to the virtualized resource. For instance, your real, unvirtualized processor may have a mixture of 16-bit and 32-bit registers. By virtualizing it, you can add a new control layer that allows programming in 16-bit only, in 128-bit, or in 3-bit, if you feel like it. Or you can move away from bits altogether and do your programming in a higher-level language. This added control layer is the new interface, and the important thing is that it can be tailored to your needs. Typically, the new interface reduces complexity compared to the original one. Think higher-level languages versus Assembler code, or Assembler code versus machine code, and you get the picture.
In an unvirtualized resource, the desired functionality (computing, I/O, or storage) and its implementation are strongly intertwined. The unvirtualized interface to a 16-bit processor must by necessity use 16-bit data. The virtualized interface can use something else. For the virtualized interface to serve its purpose, it must isolate the desired functionality from the particularities of its implementation.
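To make that separation concrete, here is a minimal sketch in Java; the interface and class names are purely illustrative and belong to no real system. The consumer of the virtualized resource programs against the interface alone, while the implementation behind it remains free to change:

```java
import java.util.HashMap;
import java.util.Map;

// The virtualized interface: callers see only this contract.
interface BlockStorage {
    byte[] read(long blockNumber);
    void write(long blockNumber, byte[] data);
}

// One possible implementation; it could be swapped for a disk-backed
// or network-backed one without the caller ever noticing.
class InMemoryStorage implements BlockStorage {
    private final Map<Long, byte[]> blocks = new HashMap<>();

    public byte[] read(long blockNumber) {
        byte[] data = blocks.get(blockNumber);
        return data != null ? data.clone() : new byte[512];
    }

    public void write(long blockNumber, byte[] data) {
        blocks.put(blockNumber, data.clone());
    }
}
```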
Functionality Is Money
Why do we use virtualization at all? After all, virtualization comes with two main drawbacks:
- Virtualization creates resource overhead, reducing performance.
- Virtualization adds another interface layer, which raises overall system complexity and with it administration overhead.
These disadvantages, however, are accepted because they are outweighed by virtualization's benefits:
- Simplified administration
- Availability
- Efficiency
Virtualization is used for one reason: it helps IT fulfill its goal of providing computing functionality, and of providing more of it. And in these times of ever-narrower budgets and ever-increasing focus on the business side of IT, IT management is delighted to find that virtualization is a method for getting more bang for the buck. Functionality is money.
Simplified Administration
Virtualization can make many resources of the same type appear as a single resource. This is a direct outcome of the fact that virtualization lets you define your own interface to a resource, and that interface can just as well front multiple resources of the same type. Thus, for instance, a whole disk array can be treated as if it were a single, very large disk. Not only does this make it easier for the user to address content on the array, it can also make the array easier for the operator to manage.
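As a hedged illustration of this point, the sketch below (reusing the hypothetical BlockStorage interface from the earlier example) concatenates several disks into one large, linear block device. The caller addresses a single block-number space and never learns which physical disk holds a given block:

```java
import java.util.List;

// Several fixed-size disks presented as one big virtual disk.
class ConcatenatedDisk implements BlockStorage {
    private final List<BlockStorage> disks;
    private final long blocksPerDisk;

    ConcatenatedDisk(List<BlockStorage> disks, long blocksPerDisk) {
        this.disks = disks;
        this.blocksPerDisk = blocksPerDisk;
    }

    // Translate a virtual block number into (disk, local block number).
    public byte[] read(long blockNumber) {
        return disks.get((int) (blockNumber / blocksPerDisk))
                    .read(blockNumber % blocksPerDisk);
    }

    public void write(long blockNumber, byte[] data) {
        disks.get((int) (blockNumber / blocksPerDisk))
             .write(blockNumber % blocksPerDisk, data);
    }
}
```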
This benefit is in part counteracted by the complexity added by the additional interface layer. In a nutshell, there are two sides to virtualization: the time-saving side and the time-eating side. In practice, the more time an operator spends with the "simple" (virtual) interface, the more time-saving it is for him. Conversely, the more time he spends on working with both interfaces (e.g., initially setting up a virtualization solution), the more complex his work becomes.
Availability
By hiding a number of individual resources under the cloak of a single virtual resource, virtualization allows the virtual resource to remain available even as some of its underlying component resources become unavailable. This allows maintenance operations to be performed without loss of functionality. To stick with the example of the hard-disk array, virtualization enables such wonderful things as hot-swapping a broken disk. Or, if disk usage is approaching its limits, a virtualized disk array allows for the "hot" addition of new disks to increase disk space.
On a higher level, modern logical partitioning (LPAR) allows individual partitions of a system to be shut down and started independently of one another. In this manner, the frequency of IPLs for OS updates and PTF installation can be reduced, and the impact on the system's functionality is controlled, as opposed to the whole physical system's functionality becoming unavailable.
Efficiency
Virtualization always causes a system to lose performance. This is simply due to the additional layer that it adds to perform the above-mentioned functions of resource control and interface provision.
However, virtualization can be used, and in fact is increasingly used, to improve the efficiency of complex IT installations. The basic idea is to get rid of resource "slack," or unused capacity. For instance, several blade servers can share the same network card because, in all but the most exotic situations, they would not each fully utilize the card's bandwidth at the same time. And x86 servers, which are notoriously underutilized, can be consolidated onto a single System x machine with virtualization employed, squeezing additional percentage points of CPU utilization out of the system.
This consolidation aspect has received increased attention of late, due to the current focus on green computing. If virtualization helps reduce slack, it helps cut energy usage and thus reduces the load on the environment (and on pocketbooks).
I've Seen the Future, and It's Groovy
Like any other platform, the System i must "go where the money is." That means that if there is a functional benefit in any new technology, the System i will eventually adopt that technology, unless it would require a change to fundamental features of the platform (e.g., the System i will not suddenly switch to a different processor family just because another processor runs faster). By taking a look at virtualization technologies that are already in use and successful on other platforms, it is possible to make some educated guesses about the future of virtualization on the System i platform. The following are some predictions on the changes that are most likely to impact the working life of System i administrators.
The Virtual Is the Concrete
The Model 570 sports a nice new piece of hardware that is dedicated to a virtualization feature. Supported from V5R4M5 onward, this Integrated Virtual Ethernet adapter (IVE for short) acts as a whole set of Ethernet adapters and is, of course, sharable between partitions. As IBM staff so nicely put it at a recent IBM Breakfast Meeting, it's "virtualization that, finally, you can touch." In the x86 arena, there is VMware's ESX 3i initiative, which embeds the virtualization layer in the server itself, and Microsoft has similar plans for its Hyper-V virtualization solution. Expect to see more hardware dedicated to virtualization on System i.
Waves of Complexity
A smart man once observed that technological innovation seems to follow a triple-jump pattern: from the primitive, to the complex, to the simple. A long-lived platform such as the System i experiences many such innovations. A new technology is introduced; as more and more features are added on, the complexity of the technology and of its management rises; eventually, IBM comes up with an interface that hides some of that complexity. Viewed on a broader time scale, administrative complexity follows a see-saw, or wave, pattern: up first, then down again.
For example, cluster technology has been available for the System i since OS/400 V4R4. Clustering is a combination of system-level and storage-level virtualization techniques; it increases the availability of functionality by abstracting from a number of System i machines and either their internal disks or external storage arrays such as TotalStorage. It is not exactly the simplest of technologies to implement. This is the "up" part of the wave pattern.
Because of the complexity (and hence cost) of administering a clustering environment, clustering is generally viewed as something that is apt only for large companies. To reduce that complexity, and thus increase the technology's appeal to SMB shops, IBM released, as part of IBM i 6.1, a new product called the High Availability Solutions Manager (System i PowerHA for i). That product is the successor to the Copy Services for System i Toolkit, which you could formerly obtain only from Global Services as part of a consulting agreement. The High Availability Solutions Manager is targeted at lowering the complexity of managing a clustering environment, signifying the "down" part of the pattern.
Data on the Go
For reasons of high availability, it will become more commonplace even for small businesses to store their data on redundant servers in multiple locations. They may use IBM solutions for this, employing Copy Services such as Metro Mirror and Global Mirror, or third parties may move into that market. Spreading data not only across multiple pieces of hardware, but across multiple geographical locations, is the way of the future.
We may see the rise of distributed file systems on the System i. Such file systems are abstractions over massive banks of disk drives in multiple locations; Google's GFS is one example, and Amazon's Dynamo applies similar principles to a distributed key-value store. It may even be the case that, in the future, storage "in the cloud" will be a viable alternative to owned storage for the System i. Storage in the cloud means that you hire storage capacity from a third party and access it, using encryption, over the Internet. For the end-user market, such services already exist, and IBM is doing research in this area as part of its cloud computing initiative.
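As a minimal sketch of that principle, assuming nothing beyond the standard Java cryptography classes: the data is encrypted locally before it ever leaves the building, so the storage provider only sees ciphertext. The actual upload is omitted, and a production setup would use an authenticated cipher mode rather than the bare "AES" default:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CloudStorageSketch {
    public static void main(String[] args) throws Exception {
        // The key stays on-site; it is never sent to the provider.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // Encrypt before upload. "AES" alone defaults to ECB mode,
        // which is fine for a sketch but not for production use.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("payroll data".getBytes("UTF-8"));

        // At this point the ciphertext would be shipped to the storage
        // provider over the Internet; only the key holder can read it.
        System.out.println("Would upload " + ciphertext.length + " encrypted bytes");
    }
}
```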
On a side note, storage is going to become more cross-platform in the future. IBM i 6.1 allows an i partition to utilize storage that is hosted by a Linux partition, and storage hosted by IBM i will be accessible to VMware running on x86. Expect storage to become less attached to particular hardware (instance and platform); it will become a matter of interfaces and of abstract qualities such as performance and reliability.
Smarter Scheduling
It will become increasingly important for system administrators to plan the time and location of jobs so they run best, impact other jobs the least, or fulfill other goals. Job planning across systems and locations will become more and more commonplace. This is essentially virtualized job scheduling: the admin no longer specifies an individual job queue but a set of constraints, which the scheduler then uses to decide on the best implementation (system, job queue, run-time attributes), as the sketch below illustrates. This type of job scheduling already exists but is still far from being used to its full extent. The job scheduler of the future will take into account things such as these:
- The current and projected energy consumption and thermal situation of a system
- Follow-the-sun schemes that tie job execution to the presence of staff (end users or operators) in locations around the world
- The resources available on individual partitions and physical systems
All of this will become automated to a high degree. As noted above, IT management is concerned with functionality, not with the scheduling of individual jobs. Job schedulers of the future will do a better job of helping you bridge that gap.
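Here is a hedged sketch of that constraint-driven placement in Java. Every field and name is hypothetical, and a real scheduler would weigh far more factors, but the shape of the decision is the same: filter the candidate systems by the stated constraints, then pick the best fit.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class SchedulerSketch {
    // Hypothetical per-system attributes a future scheduler might track.
    record SystemInfo(String name, double cpuLoad,
                      double energyCostPerKwh, boolean staffOnSite) {}

    // Keep only systems that satisfy the constraints, then choose the
    // least-loaded one.
    static Optional<SystemInfo> place(List<SystemInfo> systems,
                                      double maxEnergyCost) {
        return systems.stream()
                .filter(SystemInfo::staffOnSite)
                .filter(s -> s.energyCostPerKwh() <= maxEnergyCost)
                .min(Comparator.comparingDouble(SystemInfo::cpuLoad));
    }

    public static void main(String[] args) {
        List<SystemInfo> candidates = List.of(
                new SystemInfo("ROCHESTER", 0.72, 0.11, true),
                new SystemInfo("MUNICH",    0.35, 0.19, true),
                new SystemInfo("SYDNEY",    0.10, 0.09, false));
        place(candidates, 0.20)
                .ifPresent(s -> System.out.println("Run job on " + s.name()));
    }
}
```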
Application Mobility: Online
Job scheduling traditionally defines the conditions under which a job is started, or more exactly, queued. Once the job is in the queue, the scheduler goes on to do some other business. But what if you wanted a job to start on one system, but to finish executing on another system?
Well, if your OS were Solaris or AIX 6, or if you were employing VMware's VMotion, the answer is that you would move the job to another system. While it is running. Without stopping it. Without an interruption in service to the end-user. How great is that?
This feature, called Live Application Mobility in AIX 6, lends itself naturally to follow-the-sun setups. It is also great for data migration, where you just move a running application off of the old server and onto the new one, all without any downtime from the end-user perspective. Maintenance, likewise. This feature seems so immediately useful that you can expect it to be applied to IBM i in a future release. As a matter of fact, IBM's Jim Herring, who is in charge of the "big" System i boxes, IBM i (aka i5/OS), and the "big" System p boxes, recently let on that IBM is, well, "very much considering" enabling live partition mobility on the System i.
Application Mobility: Offline
How about stopping an application or partition in its tracks, only to revive it a day, a week, or a year later on the same or another system? Since scientists have already succeeded at stopping light and then bringing it back, doing the same for a running application on the System i should be a snap! Well, it is not, but that is not to say it is impossible. Storing the total state of an application, with all of its data, could be useful in many situations, and the same applies to storing the state of a complete LPAR. On x86, this kind of freeze-drying of a virtual system is all the rage and is called "snapshotting." It has applications for high availability, for testing, and for situations where more-urgent work requires that a less essential application be put aside for some time. Expect to see it arrive on System i in the next five years, max.
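As a toy illustration of the principle, and nothing more: real snapshotting happens far below the application, at the level of memory pages and device state, but Java's own serialization shows the freeze-dry-and-revive idea in miniature (all class and field names here are invented):

```java
import java.io.*;

public class SnapshotSketch {
    // A stand-in for "the total state of an application."
    static class AppState implements Serializable {
        long recordsProcessed;
        String currentFile;
    }

    // Freeze-dry the state to disk...
    static void snapshot(AppState state, File target) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(target))) {
            out.writeObject(state);
        }
    }

    // ...and revive it later, on this machine or another one.
    static AppState restore(File source)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(source))) {
            return (AppState) in.readObject();
        }
    }
}
```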
There may also be times when you want to turn a stored virtual configuration into the configuration of a physical system, a topic on which MC Press has featured an article called "System Protection: The Key to Virtualization".
Study, Study, Study
The world is moving fast for IT administrators, too. One thing that follows from all the fancy new virtualization features is that you will need to do a lot of learning to keep up. Management wants to get the most out of IT? Get training. It will pay.
Operating in a Virtual World
You can teach an old dog new tricks. The System i platform has been at the forefront of virtualization for a long time, and now it is time for it to adopt some new virtualization tricks. Technological innovation on other platforms is creating new possibilities in the virtualization landscape while simultaneously rendering it more complex. For you, whose job it is to keep a System i environment up and efficient, new virtualization features will mainly be one thing: new tools to get the job done. This article aimed to introduce some of the new perspectives and technologies that will have an impact on the System i (or Power Systems) platform. And when you do see that announcement for partition mobility in IBM i 6.3, or for energy-aware, fully automated schedulers, you will be able to say: "That's soooo old hat! I read about that back in 2008!"