
Welcome to the Virtual Machine


The System i has been at the forefront of virtualization for a long time. Now it's time for the platform to adopt some new virtualization tricks.

 

You use virtualization every day. Virtualization is one of the core concepts of the System i: the use of a virtual processor, implemented as the Technology Independent Machine Interface (TIMI), originated with the System/38. Other virtualization concepts, such as single-level storage, have shaped the platform from its conception to the present and will continue to shape it.

 

As the platform has matured, expanded, and adapted to influences and requirements from outside technologies, so have its virtualization capabilities. In the past few years, virtualization, previously the mainstay of midrange and large systems, has fast become a common sight in the consumer market. The aggressive adoption of virtualization on x86 is driving innovation in today's virtualization arena. New virtualization features being created for x86 today will, in time, also become tools of the System i administrator.

 

To understand what that future might hold for your System i environment, let's take a look at some basic facts about virtualization.

What Is the Matrix? Control.

Virtualization has two main functions. The first is to control the use of a computing resource. That resource can be a disk drive, a communications I/O device, an operating system image, or a whole PC with hardware and OS. For each of those resources, virtualization can allow or disallow usage, making the resource either available or unavailable to a requestor; or, instead of offering only that binary choice, it can throttle usage to some degree. Limiting resource usage is not an end in itself, of course. Resource limiting is employed in order to permit sharing: a resource can be shared among processes or users, either simultaneously or over time.
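
To make this control function concrete, here is a minimal Python sketch. The names (SharedResource, the 1000-unit capacity, the partitions) are purely illustrative and do not correspond to any real hypervisor API: a shared resource is handed out to requestors either as an all-or-nothing grant or as a throttled share.

```python
# A minimal sketch of the "control" function of virtualization: a shared
# resource (here, abstract I/O bandwidth) is handed out to requestors either
# as an all-or-nothing grant or as a throttled share. All names are
# illustrative, not any real hypervisor API.

class SharedResource:
    def __init__(self, capacity):
        self.capacity = capacity      # total units the physical resource offers
        self.allocations = {}         # requestor name -> units granted

    def allocated(self):
        return sum(self.allocations.values())

    def request(self, requestor, units):
        """Grant the full request, a throttled portion, or nothing."""
        free = self.capacity - self.allocated()
        if free <= 0:
            return 0                  # binary "unavailable"
        granted = min(units, free)    # throttle to what remains
        self.allocations[requestor] = self.allocations.get(requestor, 0) + granted
        return granted

    def release(self, requestor):
        return self.allocations.pop(requestor, 0)


if __name__ == "__main__":
    nic = SharedResource(capacity=1000)          # e.g., 1000 Mb/s of bandwidth
    print(nic.request("partition_A", 600))       # 600 -> full grant
    print(nic.request("partition_B", 600))       # 400 -> throttled grant
    print(nic.request("partition_C", 100))       # 0   -> unavailable
```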

 

The other main function of virtualization is to provide an alternative, customizable interface to the virtualized resource. For instance, your real, unvirtualized processor may have a mixture of 16-bit and 32-bit registers. By virtualizing it, you can add a new control layer that allows programming in 16-bit only, in 128-bit, or in 3-bit, if you feel like it. Or you can leave bits behind altogether and do your programming in a higher-level language. This added control layer is the new interface, and the important thing is that it can be tailored to your needs. Typically, the new interface reduces complexity compared with the original interface. Think higher-level languages versus Assembler code, or Assembler code versus machine code, and you get the picture.
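
A toy illustration of this second function, again in Python and entirely hypothetical: a "device" that only understands 16-bit words is wrapped in a new interface that accepts integers of any width, so the caller never sees the word size.

```python
# A sketch of the second function of virtualization: a new, tailored interface
# layered over an existing one. Here an imaginary "16-bit" device is wrapped
# so callers can work with integers of any width and never see the 16-bit
# chunking.

class SixteenBitDevice:
    """The 'real' interface: stores values only as 16-bit words."""
    def __init__(self):
        self.words = []

    def store_word(self, w):
        assert 0 <= w < 2**16
        self.words.append(w)


class WideInterface:
    """The virtualized interface: accepts integers of any width and splits
    them into 16-bit words behind the scenes."""
    def __init__(self, device):
        self.device = device

    def store(self, value):
        while True:
            self.device.store_word(value & 0xFFFF)
            value >>= 16
            if value == 0:
                break


if __name__ == "__main__":
    dev = SixteenBitDevice()
    WideInterface(dev).store(2**40 + 5)   # caller never deals with word size
    print(dev.words)                      # [5, 0, 256] -> little-endian 16-bit words
```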

 

In an unvirtualized resource, the desired functionality (computing, I/O, or storage) and its implementation are strongly intertwined. The unvirtualized interface to a 16-bit processor must by necessity use 16-bit data. The virtualized interface can use something else. In order for the virtualized interface to serve its purpose, it must isolate the desired functionality from the particularities of its implementation.

Functionality Is Money

Why do we use virtualization at all? After all, virtualization comes with two main drawbacks:

  • Virtualization creates resource overhead, reducing performance.
  • Virtualization adds another interface layer, which increases overall system complexity and, with it, administration overhead.

These disadvantages, however, are accepted because they are outweighed by the positive effects of virtualization. These are the benefits:

  • Simplified administration
  • Availability
  • Efficiency

Virtualization is used for the sole reason that it helps IT fulfill its goal: to provide computing functionality. Virtualization provides improved functionality. And in these times of ever-tighter budgets and an ever-increasing focus on the business side of IT, IT management is delighted to find that virtualization offers a way to get more bang for the buck. Functionality is money.

 

Simplified Administration

Virtualization can make many resources of the same type appear as a single resource. This is a direct outcome of the fact that virtualization lets you define your own interface to a resource, and that interface can just as well front multiple resources of the same type. Thus, for instance, a whole disk array can be treated as if it were a single, very large disk. Not only does this make it easier for the user to address content in the disk array, it can also make it easier for the operator to manage the array.
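
The idea can be sketched in a few lines of Python. This is not modeled on any real storage product; it simply shows the address translation that lets several small disks answer to one flat block address space.

```python
# A minimal sketch of presenting several disks as one large disk: the caller
# uses a single flat block address, and the virtualization layer translates
# it to (disk, local block). Purely illustrative.

class VirtualDisk:
    def __init__(self, disk_sizes):
        # disk_sizes: number of blocks each physical disk contributes
        self.disks = [bytearray(size) for size in disk_sizes]

    def _locate(self, block):
        """Translate a flat block number to (disk index, block on that disk)."""
        for i, disk in enumerate(self.disks):
            if block < len(disk):
                return i, block
            block -= len(disk)
        raise IndexError("block beyond end of virtual disk")

    def write(self, block, value):
        i, local = self._locate(block)
        self.disks[i][local] = value

    def read(self, block):
        i, local = self._locate(block)
        return self.disks[i][local]


if __name__ == "__main__":
    vdisk = VirtualDisk([100, 100, 100])   # three small disks, one 300-block volume
    vdisk.write(250, 42)                   # lands on the third physical disk
    print(vdisk.read(250))                 # 42
```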

 

This benefit is in part counteracted by the complexity added by the additional interface layer. In a nutshell, there are two sides to virtualization: the time-saving side and the time-eating side. In practice, the more time an operator spends with the "simple" (virtual) interface, the more time it saves him. Conversely, the more time he spends working with both interfaces (e.g., initially setting up a virtualization solution), the more complex his work becomes.

 

Availability

By hiding a number of individual resources under the cloak of a single virtual resource, virtualization allows the virtual resource to remain available even as some of its underlying component resources become unavailable. This allows maintenance operations to be performed without loss of functionality. To stick with the example of the hard-disk array, virtualization enables such wonderful things as hot-swapping a broken disk. Or, if disk usage is approaching its limits, a virtualized disk array allows for the "hot" addition of new disks to increase disk space.
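
Continuing the toy disk example, here is a rough Python sketch of why the virtual resource stays available while a component is swapped out: every write goes to two copies, so reads can be served from the surviving copy while the broken disk is replaced. This illustrates the principle only; it is not how any real RAID or mirroring product is implemented.

```python
# A sketch of the availability benefit: a virtual disk that mirrors every
# write to two physical disks keeps serving reads while one copy is swapped
# out. Purely illustrative.

class MirroredDisk:
    def __init__(self, size):
        self.copies = [bytearray(size), bytearray(size)]
        self.online = [True, True]

    def write(self, block, value):
        for i, copy in enumerate(self.copies):
            if self.online[i]:
                copy[block] = value

    def read(self, block):
        for i, copy in enumerate(self.copies):
            if self.online[i]:
                return copy[block]
        raise IOError("no online copy")

    def fail(self, i):        # simulate pulling a broken disk
        self.online[i] = False

    def replace(self, i):     # hot-swap: resync the new disk from the survivor
        survivor = self.copies[1 - i]
        self.copies[i] = bytearray(survivor)
        self.online[i] = True


if __name__ == "__main__":
    md = MirroredDisk(10)
    md.write(3, 7)
    md.fail(0)
    print(md.read(3))   # 7 -> still available while disk 0 is being swapped
    md.replace(0)
```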

 

On a higher level, modern LPAR partitioning allows individual partitions of a system to be shut down and started independently of one another. In this manner, the frequency of IPLs for OS updates and PTF installation can be reduced. The impact on functionality is thus controlled and confined to the affected partition, as opposed to the whole physical system's functionality becoming unavailable.

 

Efficiency

Virtualization always causes a system to lose performance. This is simply due to the additional layer that it adds to perform the above-mentioned functions of resource control and interface provision.

 

However, virtualization can be used, and in fact is increasingly used, to improve the efficiency of complex IT installations. The basic idea is to get rid of resource "slack," or unused capacity. For instance, several blade servers can share the same network card because, in all but the most exotic situations, they would not each fully utilize the card's bandwidth at the same time. And x86 servers, which are notoriously underutilized, can be consolidated onto a single System x machine running virtualization, squeezing additional percentage points of CPU utilization out of the hardware.
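
A back-of-the-envelope Python sketch of the consolidation argument, with invented utilization figures: a simple first-fit packing shows how ten lightly loaded servers could, in principle, share a couple of hosts.

```python
# A back-of-the-envelope sketch of consolidation: underutilized servers are
# packed onto as few physical hosts as their combined load allows (simple
# first-fit). The utilization figures are made up for illustration.

def consolidate(loads, host_capacity=0.80):
    """Pack per-server CPU loads (fractions of one host) onto hosts,
    keeping each host under the given capacity ceiling."""
    hosts = []                       # each entry is the summed load on one host
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load
                break
        else:
            hosts.append(load)
    return hosts


if __name__ == "__main__":
    # ten servers, each idling at 5-15% CPU
    loads = [0.05, 0.10, 0.08, 0.15, 0.07, 0.12, 0.06, 0.09, 0.11, 0.05]
    hosts = consolidate(loads)
    print(len(loads), "servers ->", len(hosts), "host(s):", hosts)
```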

 

This consolidation aspect has received increased attention of late, due to the current focus on green computing. If virtualization helps reduce slack, it helps cut energy usage and thus reduces the load on the environment (and on pocketbooks).

I've Seen the Future, and It's Groovy

Like any other platform, the System i must "go where the money is." That means that if there is a functional benefit in any new technology, the System i will eventually adopt that technology, unless it would require a change to fundamental features of the platform (i.e., the System i will not suddenly switch to a different processor family just because another processor runs faster). By taking a look at virtualization technologies that are already in use and successful on other platforms, it is possible to make some educated guesses about the future of virtualization on the System i platform. The following are some predictions on the changes that are most likely to impact the working life of System i administrators.

 

The Virtual Is the Concrete

The Model 570 sports a nice new piece of hardware that is dedicated to a virtualization feature. Supported from V5R4M5 onward, this Integrated Virtual Ethernet adapter (IVE for short) acts as a whole set of Ethernet cards and, of course, can be shared between partitions. As IBM staff so nicely put it at a recent IBM Breakfast Meeting, it's "virtualization that, finally, you can touch." In the x86 arena, there is VMware's ESX 3i initiative, in which the virtualization layer ships embedded in the server hardware, and Microsoft has similar plans for its Hyper-V virtualization solution. Expect to see more hardware dedicated to virtualization on System i.

 

Waves of Complexity

A smart man once observed that technological innovation seems to follow a triple-jump pattern: from the primitive, to the complex, to the simple. A long-lived platform such as the System i experiences many such cycles. A new technology is introduced. As more and more features are added on, the complexity of the technology and of its management rises. Eventually, IBM comes up with an interface that hides some of that complexity. Viewed on a broader time scale, administrative complexity undergoes a see-saw, or wave, pattern: up first, then down again.

 

For example, cluster technology has been available for the System i since OS/400 V4R4. Clustering is a combination of system-level and storage-level virtualization techniques; it basically increases the availability of functionality by abstracting a number of System i machines and either their internal disks or external storage arrays such as TotalStorage. It is not exactly the simplest of technologies to implement. This is the "up" part of the wave pattern.

 

Because of the complexity (and hence cost) of administering a clustering environment, clustering is generally viewed as something that is suited only to large companies. In order to reduce that complexity, and thus increase the technology's appeal to SMB shops, IBM released, as part of IBM i 6.1, a new product called the High Availability Solution Manager (System i PowerHA for i). That product is the successor to the Copy Services for System i Toolkit, which you could formerly obtain only from Global Services as part of a consulting agreement. The High Availability Solution Manager is targeted at lowering the complexity of managing a clustering environment, signifying the "down" part of the pattern.

 

Data on the Go

For reasons of high availability, it will become more commonplace even for small businesses to store their data on redundant servers in multiple locations. They may be using IBM solutions for this, employing Copy Services such as Metro Mirror/Global Mirror, or maybe third parties will move into that market. Spreading data across not only multiple pieces of hardware, but across multiple geographical locations, is the way of the future.

 

We may see the rise of distributed file systems on the System i. Such file systems are abstractions over massive banks of disk drives in multiple locations; Google's GFS and Amazon's Dynamo are examples. It may even be the case that, in the future, storage "in the cloud" will be a viable alternative to owned storage for the System i. Storage in the cloud means that you rent storage capacity from a third party and access it, using encryption, over the Internet. For the end-user market, such services already exist, and IBM is doing research in this area as part of its cloud computing initiative.
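
To give a flavor of how such systems decide where data lives, here is a toy Python sketch of replica placement: a key is hashed and written to more than one location, so any single site can drop out without losing the data. The site names and the scheme itself are invented for illustration; this is not how GFS, Dynamo, or any IBM offering actually works.

```python
# A toy sketch of the placement idea behind distributed storage: a key is
# hashed, and replicas are written to several locations so that any single
# site can disappear without losing the data. Sites and scheme are invented.

import hashlib

SITES = ["bonn", "dublin", "rochester", "singapore"]   # hypothetical locations
REPLICAS = 2

def placement(key, sites=SITES, replicas=REPLICAS):
    """Pick `replicas` distinct sites for a key, deterministically."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    start = digest % len(sites)
    return [sites[(start + i) % len(sites)] for i in range(replicas)]


if __name__ == "__main__":
    for key in ("ORDERS/2008/000123", "CUSTOMER/ACME"):
        print(key, "->", placement(key))
```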

 

On a side note, storage is going to be more cross-platform in the future. IBM i 6.1 allows an IBM i partition to utilize storage that is hosted by a Linux partition. And storage hosted by IBM i will be accessible to VMware running on x86. Expect storage to become less attached to particular hardware (instance and platform); it will become a matter of interfaces and of abstract qualities such as performance and reliability.

 

Smarter Scheduling

It will become increasingly important for system administrators to plan the time and location of jobs so that they run best, impact other jobs the least, or fulfill other goals. Job planning across systems and locations will become more and more commonplace. This is essentially virtualized job scheduling: the admin no longer specifies an individual job queue, but rather a set of constraints, which the actual scheduler then uses to decide on the best implementation (system, job queue, run-time attributes). This type of job scheduling already exists but is still far from being used to its full extent. The job scheduler of the future will take into account things such as the current and projected energy consumption and thermal situation of a system; follow-the-sun schemes that tie job execution to the presence of staff (end-users or operators) in locations around the world; and the resources available on individual partitions and physical systems. All of this will become automated to a high degree. As said above, IT management is concerned with functionality, not with the scheduling of individual jobs. Job schedulers of the future will do a better job of helping you bridge that gap.
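
A small Python sketch of what "constraints instead of a job queue" could look like. The systems, attributes, weights, and numbers are all made up for illustration and are not taken from any existing scheduler.

```python
# A sketch of constraint-driven scheduling: instead of naming a job queue,
# the administrator states constraints, and the scheduler scores candidate
# systems on energy price, staff presence, and free capacity. All values
# are invented for illustration.

SYSTEMS = [
    {"name": "BONN1",  "free_cpu": 0.30, "energy_cost": 0.28, "staff_on_site": True},
    {"name": "DUBLN1", "free_cpu": 0.70, "energy_cost": 0.20, "staff_on_site": False},
    {"name": "ROCH1",  "free_cpu": 0.55, "energy_cost": 0.15, "staff_on_site": True},
]

def pick_system(job, systems=SYSTEMS):
    """Return the best system for a job, or None if no system satisfies it."""
    candidates = [s for s in systems
                  if s["free_cpu"] >= job["cpu_needed"]
                  and (not job["needs_operator"] or s["staff_on_site"])]
    # Lower energy cost and more headroom are both good; weights are arbitrary.
    return min(candidates,
               key=lambda s: s["energy_cost"] - 0.1 * s["free_cpu"],
               default=None)


if __name__ == "__main__":
    job = {"name": "NIGHTLY_BILLING", "cpu_needed": 0.40, "needs_operator": True}
    chosen = pick_system(job)
    print(job["name"], "->", chosen["name"] if chosen else "queued for later")
```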

 

Application Mobility: Online

Job scheduling traditionally defines the conditions under which a job is started, or more exactly, queued. Once the job is in the queue, the scheduler goes on to do some other business. But what if you wanted a job to start on one system, but to finish executing on another system?

 

Well, if your OS were Solaris or AIX 6, or if you were employing VMware's VMotion, the answer would be that you could move the job to another system. While it is running. Without stopping it. Without an interruption in service to the end-user. How great is that?

 

This feature, called Live Application Mobility in AIX 6, lends itself naturally to follow-the-sun setups. It is also great for data migration: you just move a running application off of the old server and onto the new server, all without any downtime from the end-user perspective. The same goes for maintenance. This feature seems so immediately useful that you can expect it to be applied to IBM i in a future release. As a matter of fact, IBM's Jim Herring, who is in charge of the "big" System i boxes, IBM i (aka i5/OS), and the "big" System p boxes, recently let on that IBM is, well, "very much considering" enabling live partition mobility on the System i.
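
Hypervisors that offer live migration commonly use an iterative "pre-copy" of memory: pages are copied to the target while the workload keeps running, pages dirtied in the meantime are copied again, and only a small remainder is moved during a very short pause. The Python sketch below illustrates that loop in the abstract; it is not a description of how IBM, VMware, or AIX 6 implement the feature, and all numbers are invented.

```python
# An abstract sketch of iterative pre-copy live migration: copy memory while
# the source keeps running, re-copy what got dirtied, and pause only for the
# small final set. Numbers are invented for illustration.

def pre_copy_migration(total_pages=10000, dirty_rate=0.05, max_rounds=10):
    """Return the number of pages that still must be copied during the
    final, brief stop-and-copy phase."""
    remaining = total_pages                      # pages not yet on the target
    for round_no in range(max_rounds):
        copied = remaining                       # copy everything still dirty
        # while copying, the running workload dirties some pages again
        remaining = int(copied * dirty_rate)
        print(f"round {round_no}: copied {copied}, re-dirtied {remaining}")
        if remaining < 50:                       # small enough to pause briefly
            break
    return remaining


if __name__ == "__main__":
    leftover = pre_copy_migration()
    print("pages moved during the final pause:", leftover)
```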

 

Application Mobility: Offline

How about stopping an application or partition in its tracks, only to revive it a day, a week, or a year later on the same or another system? Since scientists have already succeeded at stopping light and then bringing it back, doing the same for a running application on the System i should be a snap! Well, it is not, but that is not to say it is impossible. Storing the total state of an application, with all of its data, could be useful in many situations. The same applies to storing the state of a complete LPAR partition. On x86, this kind of freeze-drying of a virtual system is all the rage and is called "snapshotting." It has applications for high availability, for testing, and for situations where more-urgent work requires that a less essential application be put aside for a while. Expect to see it arrive on the System i in the next five years, max.
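
As a toy illustration of the freeze-drying idea, the Python sketch below captures the complete state of a tiny "workload," stores it, and resumes it later. A real partition or VM snapshot of course captures memory, device, and disk state, which this does not attempt.

```python
# A toy sketch of snapshotting: the complete state of a small running
# computation is captured, stored, and later resumed, possibly elsewhere.
# Here the "state" is just a counter and a work list.

import pickle

class Workload:
    def __init__(self, items):
        self.items = list(items)
        self.done = 0

    def step(self):
        if self.done < len(self.items):
            self.done += 1

def snapshot(workload, path):
    with open(path, "wb") as f:
        pickle.dump(workload, f)      # everything needed to resume later

def restore(path):
    with open(path, "rb") as f:
        return pickle.load(f)


if __name__ == "__main__":
    w = Workload(range(10))
    for _ in range(4):
        w.step()
    snapshot(w, "workload.snap")      # "stop the light" ...
    resumed = restore("workload.snap")
    print("resumed at item", resumed.done)   # ... and bring it back: 4
```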

 

There may also be times when you want to turn a stored virtual configuration into the configuration of a physical system, a topic on which MC Press has featured an article called "System Protection: The Key to Virtualization".

 

Study, Study, Study

The world is moving fast for IT administrators, too. One thing that follows from all the fancy new virtualization features is that you will need to do a lot of learning to keep up. Management wants to get the most out of IT? Get training. It will pay.

Operating in a Virtual World

You can teach an old dog new tricks. The System i platform has been at the forefront of virtualization for a long time. Now it is time for the System i to adopt some new virtualization tricks. Technological innovation on other platforms is creating new possibilities in the virtualization landscape while simultaneously rendering it more complex. For you, whose job it is to keep a System i environment up and efficient, new virtualization features will mainly be one thing: new tools to get the job done. This article aimed to introduce some of the new perspectives and technologies that will have an impact on the System i (or Power Systems) platform. And when you do see that announcement for partition mobility in IBM i 6.3, or for energy-aware, fully automated schedulers, you will be able to say: "That's soooo old hat! I read about that back in 2008!"

Kurt Thomas

Kurt Thomas is a System Engineer for CCSS, a provider of flexible, affordable solutions to monitor performance and manage messages on IBM i. Based in Bonn, Germany, Kurt works with businesses worldwide to realize the potential of their IBM i environments.
