
Beating the Performance Curve


For years, analysts and systems engineers have been stumped by a basic conflict: How do you maximize performance while sustaining increasing numbers of users and increasing quantities of data? Traditional performance curves show that, as you add more users or more data to an information system, performance degrades. Different systems degrade at different rates, but invariably, the more demand you add, the less performance you achieve.

During the 1980s and 1990s, the credo was to throw more hardware resources at diminishing performance curves. Even so, it was not uncommon for IT computing resources to run at dreadfully low levels of capacity utilization--between 50 and 60%--to guarantee sub-second response time. IT's rationale was simple: To sustain adequate performance--particularly on Windows-based server systems--it was cheaper to add more memory or disk than to try to resolve structural bottlenecks or fine-tune the processes of the operating system.
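
To see why the headroom mattered, consider a simple queueing model (our illustration, not from the original article). In an M/M/1 queue, mean response time is the service time divided by (1 minus utilization), so response time climbs gently at moderate utilization and explodes as utilization approaches 100%:

    # Toy M/M/1 queueing model: mean response time R = S / (1 - rho), where
    # S is the per-request service time and rho is server utilization.
    # Values are invented; real workloads are messier, but the shape holds.

    SERVICE_TIME = 0.2  # seconds of work per request (assumed)

    def mean_response_time(utilization, service_time=SERVICE_TIME):
        """Mean response time of an M/M/1 queue at a given utilization."""
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1)")
        return service_time / (1.0 - utilization)

    for rho in (0.50, 0.60, 0.80, 0.90, 0.95):
        print(f"utilization {rho:4.0%} -> mean response {mean_response_time(rho):.2f}s")

At 50 to 60% utilization, the 0.2-second job in the sketch still finishes in under half a second; at 90%, it takes 2 seconds. Overbuying capacity was a blunt instrument, but it worked.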

Performance and Capacity Advantages of Mini and Mainframe Systems

In comparison to PC servers, mainframes and mini-computers (like the AS/400) have traditionally had better track records for sustaining performance while leaving less capacity unused. For instance, it's not uncommon to hear of iSeries systems providing sub-second response even while running at better than 80% of capacity. The reason is that the i5/OS operating system (and OS/400 before it) virtualizes the physical resources of the hardware so that elements such as storage, memory, and especially workloads can be fine-tuned and better managed.
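
One concrete expression of this is i5/OS's automatic performance adjustment, which can shift memory among shared storage pools as page-fault rates change. The Python sketch below is purely conceptual, with invented pool sizes and fault rates; the real tuner is built into the operating system, not scripted:

    # Conceptual sketch of workload-aware memory tuning, loosely modeled on
    # the idea behind i5/OS's automatic performance adjustment: move memory
    # from lightly faulting pools toward heavily faulting ones.
    # Pool sizes and fault rates are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class MemoryPool:
        name: str
        size_mb: int
        faults_per_sec: float  # observed page-fault rate for the pool

    def rebalance(pools, step_mb=64):
        """Move one increment of memory from the least- to the most-stressed pool."""
        donor = min(pools, key=lambda p: p.faults_per_sec)
        taker = max(pools, key=lambda p: p.faults_per_sec)
        if donor is not taker and donor.size_mb > step_mb:
            donor.size_mb -= step_mb
            taker.size_mb += step_mb

    pools = [
        MemoryPool("*INTERACT", 1024, faults_per_sec=42.0),  # interactive work
        MemoryPool("*BATCH", 2048, faults_per_sec=3.5),      # batch work
    ]
    rebalance(pools)
    print([(p.name, p.size_mb) for p in pools])
    # -> [('*INTERACT', 1088), ('*BATCH', 1984)]: memory flows toward the
    #    workload under pressure, without operator intervention.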

Unfortunately, as PC server hardware prices dropped, it became simpler for IT to deploy ever-greater numbers of turnkey PC servers, networked together, in an attempt to resolve performance issues for Windows-based systems. In other words, instead of rallying behind systems that offered better performance management tools, most IT departments chose the lower-priced systems.

Step by step, IT began to equate high levels of server performance with lower levels of capacity utilization and increased network and operating system complexity.

Typical Growth Patterns Toward Complexity

So how did this work? It's easy.

Consider the typical scenario of IT growth in a Midwestern manufacturing company. Its management chose to buy a new application for use by its production department. It was a Windows Server application that promised to fill the requirements of the department nicely. However, the turnkey package required that the application run on its own server hardware, with its own custom-tailored configuration. The salesman said the best performance would be achieved if there were plenty of memory and disk available. Memory and disk were cheap, so IT brought in the system.

Likewise, the product design department had a need for a Computer Aided Design (CAD) system, and it too needed its own piece of customized hardware running a different operating system.

Meanwhile, the company's accounting department invested in an ERP system on an IBM i5 running i5/OS.

Department by department, the manufacturing company automated its workgroups, only to discover that--at the end of the automation process--it had created a complex network of servers and applications that, by design, were running at only 60% of overall potential capacity.

Complexity Complicates Performance

The problem with this kind of uncoordinated systems growth is that the inherent complexity of this network soon begins limiting IT's ability to sustain the performance of the overall information system.

For instance, as the company moves towards implementing e-business applications across the Internet, the IT environment will quickly become highly complex. More than likely, the information system will consist of some combination of routers, edge servers, Web servers, Web application servers, EJB servers, legacy transaction servers, and database servers, many of which may run on different hardware and operating systems.

How can IT ensure that the overall system is performing as expected in this type of multi-tiered environment? The answer is, IT can't. Unfortunately, basic questions about performance will remain unanswered:

  • Are work requests completing successfully? If not, where are they failing?
  • Are successful work requests completing within the expected response time? If not, where are the bottlenecks?
  • How many work requests were completed over some period of time compared to prior periods? Is the workload growing?
  • Are system-level resources being used for optimal performance? If not, can they be dynamically redirected to alleviate bottlenecks?

To accurately answer these questions, you must have the ability to do the following (a simple sketch of the idea appears after the list):

  • Identify work requests based on business priority.
  • Track the performance of work requests across server and subsystem boundaries.
  • Manage the underlying physical and network resources to achieve your specified performance goals.
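
As a simple sketch of the second ability, imagine tagging each work request with a correlation ID at the edge tier and recording per-hop timings against a response-time goal. The code below is our illustration, with invented tier names, timings, and goal, not an IBM tool:

    # Minimal sketch of following one work request across server tiers with
    # a correlation ID, then checking the total against a response-time goal.
    # Real products gather these timings through instrumented middleware.

    import time
    import uuid

    GOAL_SECONDS = 1.0  # assumed business goal: sub-second response

    def handle_request():
        correlation_id = uuid.uuid4().hex  # tag assigned at the edge tier
        hop_times = {}

        for tier in ("web_server", "app_server", "database"):
            start = time.perf_counter()
            time.sleep(0.05)  # stand-in for real work performed at this tier
            hop_times[tier] = time.perf_counter() - start

        total = sum(hop_times.values())
        slowest = max(hop_times, key=hop_times.get)
        status = "met" if total <= GOAL_SECONDS else "MISSED"
        print(f"{correlation_id[:8]}: {total:.3f}s ({status} goal); slowest hop: {slowest}")

    handle_request()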

But in a heterogeneous environment such as the one described above, it's not possible to treat all the components of the overall information system as a single resource. In fact, it's not even always possible to identify an individual work request or transaction.

Not so, of course, with the iSeries i5. But though the i5 and its predecessor iSeries and AS/400 systems have long offered good tools to answer these kinds of questions, today these systems are only one part of a larger complex of devices and platforms.

IBM's Virtualization Engine Suite

To address these performance issues, IBM began developing the concept of a Virtualization Engine Suite nearly four years ago. This suite of tools was designed to help organizations better manage the capacity and performance of their overall information systems.

Today, IBM's Virtualization Engine Suite for Servers is a pre-tested, multi-platform set of tools for a variety of server operating system environments. But these tools are not for everyone. They only make sense in the most complex heterogeneous environments.

The Virtualization Engine Suite is packaged in two flavors: one for IBM operating systems like AIX and i5/OS, and one for Windows and Solaris. According to IBM, both flavors share the interchangeable components that make up the Virtualization Engine's architecture:

  • Enterprise Workload Manager (EWLM): This component enables the customer to automatically monitor and manage multi-tiered, distributed, heterogeneous or homogeneous workloads across an IT infrastructure to better achieve defined business goals for end-user services. (A simplified sketch of this goal-based approach appears after this list.)
  • Systems Provisioning Capability: This component creates a virtual representation of pooled resources that are shared between different workloads. Systems provisioning is delivered by IBM Tivoli Provisioning Manager 2.1. This capability supports the separation of the physical view of resources from the logical view.
  • IBM Director Multiplatform: This component helps deliver a common, consistent, cross-platform systems management solution for IBM servers, storage, and operating systems. It provides a single administrative console for management tasks (operating system management, storage management, distributed systems management, and platform management), a common management infrastructure for upward integration with Tivoli, and a management foundation for the implementation of an on demand architecture.
  • Virtualization Engine Console: The console is based on the IBM Integrated Solutions Console framework to provide a consolidated view for managing virtualized enterprise resources. The Virtualization Engine Console manages the IT environment by looking at the overall system beyond operating system boundaries, helping to maximize resource sharing.
  • IBM Grid Toolbox: This component is based upon the Globus Toolkit V3.0, and its purpose is to create a connection between various combinations of resources to construct a security-rich, robust computing grid infrastructure.
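
To give a flavor of EWLM's goal-based approach mentioned above, the sketch below classifies work requests into service classes, each with a response-time goal and business importance, and computes a performance index (actual time divided by goal). The class names, goals, and the trivial classification rule are all invented for illustration; EWLM's real policies are defined through its own tooling:

    # Simplified flavor of goal-based workload classification in the spirit
    # of EWLM service classes. Names, goals, and the classification rule are
    # invented; EWLM policies are not written in Python.

    from dataclasses import dataclass

    @dataclass
    class ServiceClass:
        name: str
        goal_seconds: float  # response-time goal for this class of work
        importance: int      # 1 = most important to the business

    POLICIES = [
        ServiceClass("OrderEntry", goal_seconds=1.0, importance=1),
        ServiceClass("Reporting", goal_seconds=10.0, importance=3),
    ]

    def classify(request_type):
        """Map a work request to its service class (trivial rule for the sketch)."""
        return POLICIES[0] if request_type == "order" else POLICIES[1]

    def performance_index(actual_seconds, service_class):
        """PI below 1.0 means the goal is met; above 1.0 means it is missed."""
        return actual_seconds / service_class.goal_seconds

    sc = classify("order")
    print(sc.name, performance_index(1.4, sc))  # 1.4 -> this work is missing its goal

A performance index above 1.0 flags work that is missing its goal--exactly the kind of signal an administrator needs before redirecting resources.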

In other words, IBM's Virtualization Engine provides the ability for large customers that have many different operating systems and network components to begin looking at their entire infrastructure as a single entity. By "virtualizing" these resources, administrators can begin to measure how their systems are responding as a whole and then pool and manage these resources according to the workload requirements.

Using Open Standards for Performance Monitoring and Connectivity

IBM's strategy for its Virtualization Engine Suite is to pull together the different components using industry-standard APIs and protocols and to rely heavily upon its own product base of middleware tools. These include WebSphere, built on J2EE, and its Tivoli suite of management products.

The Virtualization Engine also relies heavily upon the technology of IBM's POWER5 processors. POWER5 micro-partitioning allows as many as 10 virtual partitions per processor, running multiple operating systems simultaneously. These partitions are controlled by a supervising program called the Hypervisor, which supports the partitioning and controls the multiple operating system environments. POWER5 technology also enables virtual local area networks (VLANs) that virtualize the resources of the physical network, as well as virtual I/O that allows adapters and other devices to be emulated in memory.
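
The entitlement arithmetic behind micro-partitioning is easy to illustrate: partitions are assigned fractions of physical processors, and the Hypervisor ensures total entitlement never exceeds physical capacity. A hypothetical sketch, with invented partition names and sizes:

    # Hypothetical sketch of Hypervisor-style entitlement checking for
    # micro-partitions. Entitlements are tracked in hundredths of a
    # processor (100 units = one physical processor), so integer math
    # suffices. Partition names and sizes are invented.

    PHYSICAL_UNITS = 400  # assumed 4-processor machine

    partitions = {
        "i5os_prod": 200,   # two full processors
        "aix_web": 120,     # 1.2 processors
        "linux_test": 50,   # half a processor
    }

    def can_add(entitlement_units):
        """True if a new partition's entitlement still fits physical capacity."""
        return sum(partitions.values()) + entitlement_units <= PHYSICAL_UNITS

    print(can_add(30))  # True: 370 + 30 = 400 units fits exactly
    print(can_add(50))  # False: 420 units would exceed the physical processors

The Hypervisor's real job is far richer than this, but the capacity constraint is the heart of it.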

i5/OS Similarities to the Virtualization Engine

Meanwhile, if all this sounds suspiciously similar to what occurs within the i5/OS (OS/400) operating system itself, you shouldn't be too surprised. Of all the operating systems in IBM's eServer line, i5/OS has traditionally offered the most virtual management capability. Many of the concepts of pooled resources, work requests, and resource allocation for performance monitoring are derived from IBM's long experience with both its mainframe and OS/400 operating systems. That's one of the reasons these mainframe and mini-computer systems have historically been more productive than PC server systems.

The difference is that now IBM is extending these same concepts of virtualization across the entire IT infrastructure, piecing together computing resources to provide management services for the entire information system. The Virtualization Engine is offering large organizations the opportunity to construct an i5-like infrastructure--composed of multiple servers, operating systems, and devices--to function as one large, heavily managed and controlled i5 information system.

IBM's Strategies for the Future of Computing Performance

Of course, IBM considers today's Virtualization Engine technologies transitional. IBM's great goal is to move companies toward an on demand infrastructure in which all computing resources are virtualized. This will require increasing the power of processor technology, continuing the development of open standards for grid computing, and building more comprehensive supervisory Hypervisors that can link together and control new computing devices and mechanisms.

When IBM's model for computing has become completely virtualized, according to analysts, our IT organizations will finally enter an era in which all computing resources can be sold as a commodity, like water from a tap. If you want to compute, you turn the spigot to on. If you want more computing power, you turn the spigot harder.

This is IBM's goal for performance as well. Instead of trying to balance the age-old conundrum of performance against capacity, IBM wants to provide an unlimited computing service to its customers, billed as it is used. In that light, IBM's Virtualization Engine can be seen as merely the first step on the long road toward computing resources available on demand.

Editor's Note: To read more about the Virtualization Engine, see "Practical Virtualization."

Thomas M. Stockwell is Editor in Chief of MC Press Online, LP.
