Use Performance Tools to Avoid Application Performance Surprises

When a new software package is added to a system, the amount of DASD needed for the new database files is almost always underestimated. How does this affect performance? When the system runs at very high DASD utilization, finding contiguous disk space for the pages written from and read into main storage becomes more difficult, forcing the system to manage disk space more aggressively. The direct result is more paging and faulting; the indirect result is worse response times.

Add to that the fact that most shops also underestimate the processor power and memory requirements of new software, and you can see the scope of the problem. The real challenge is predicting the effect of the new software on the overall performance of the machine. Fortunately, the AS/400 comes with adequate modeling tools that can help you avoid being blindsided by a new application.

One of the first things to do before predicting the effects of new software is to ensure that the existing hardware and OS/400 configuration is as efficient as possible. Everyone talks about system tuning, but, in reality, most AS/400s are poorly tuned. If the system is not set up properly, any new software may degrade it.

When I work on performance problems for a client, the first thing I do is establish the expectations. A customer may have two sets of expectations: one for batch and another for interactive processing. With batch processing, the cycle time usually presents the dilemma: the window the AS/400 needs for backup and batch processing may be much larger than the window available. For interactive processing, the expectation is always the answer to the question, What is fast response time? Some clients can live with three- to five-second response time; others find one-second response time too long.

Before you can predict the effect of a new software change or upgrade, you need to know what's happening right now on your AS/400. Every AS/400 has the ability to collect performance data.

The Performance Tools/400 Licensed Program Product (5716-PT1 at V3R7M0) is the analysis tool IBM offers to determine the status of the system and to allow trend analysis over time. You should collect data during the online day but not when users are typically signing on and off. Sign-on and sign-off are two of the most intensive tasks an AS/400 performs, and data from those periods will skew the results of the collection. I usually collect data from 9:00 to 11:00 a.m. and again from 1:30 to 3:30 p.m. You may want to adjust your times to match peak application load in your environment, but, in the average office, these times work well.

If you need to collect data for the nightly batch, go ahead and start the performance collection to run overnight, but store the data in a separate collection member. Those who use the performance collection to report things like average response time, total number of jobs, and other generic facts about system performance should consider using the Print History Analysis (PRTQHSTANL) command from Jim Sloan, Inc. This and other tools in the TAA Tools product can report this type of data more easily than the performance tools.

To set up the performance collection, you need to access the PERFORM menu; type GO PERFORM to get there directly. Shown in Figure 1, this menu will guide you to all of the features contained in Performance Tools/400. Option 2 will take you to a menu that allows the setup of timed data collection. One note: Performance Management/400 (PM/400), which is automatically installed on almost all machines, must be stopped before collecting data this way. See the manual Performance Management/400 Offerings and Services, including Performance Management/400-Subset to learn how to stop PM/400 from running.

From the Collect Performance Data panel, select option 3. The menu in Figure 2 will be displayed. To add a collection period, place a 1 and the name of the collection in the entry fields and press Enter. The panel in Figure 3 will be presented. Collect data for two hours, include the trace data, and set the time interval to five minutes. That allows for the most accurate collection of data, and, because the collection lasts only two hours, the amount of storage needed stays reasonable.
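If you prefer the command line to the menu path, the same collection can be started with the Start Performance Monitor (STRPFRMON) command. Here is a minimal sketch, assuming a collection member named PERFDATA in the default QPFRDATA library; the duration parameter name is an assumption, so prompt the command with F4 to verify it on your release:

/* Five-minute intervals, trace data included, two-hour run */
/* (duration parameter name assumed; verify with F4)        */
STRPFRMON MBR(PERFDATA) LIB(QPFRDATA) INTERVAL(5) +
          HOUR(2) TRACE(*ALL)

The trace data collected here is what makes the transaction report possible later.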

Once the data has been collected, the system report will provide most of the good trend-analysis data. The component report supplies much more detailed information behind what is reflected in the system report, if that level of detail is needed. From these reports, set up a spreadsheet to graph the average response time, CPU per transaction, logical I/O per transaction, and the percentage of total CPU expended on batch versus online work. All this gives you a performance baseline for your AS/400.
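Both reports print from the collected data; a minimal sketch, again assuming the PERFDATA member in QPFRDATA:

PRTSYSRPT MBR(PERFDATA) LIB(QPFRDATA) /* system report    */
PRTCPTRPT MBR(PERFDATA) LIB(QPFRDATA) /* component report */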

Now, when an application is added, upgraded, or changed, the effect on the performance of the machine can be measured in great detail. The reports can also show exactly which areas of performance an application is affecting.

One of the most important things to do in any application performance analysis is to establish a base set of sample business transactions that will be used to model overall performance. A base transaction set should represent how the application is typically used. There will be a mixture of critical transactions that need the best performance and not-so-critical transactions that are for more casual use. The transactions in the set should be ones that are run wholly from within the application.

Once the transaction set has been identified, run a Conference Room Pilot process with it. Taking order entry as the example (because order entry is usually an application-intensive function), ask the customer service representatives to pick 10 orders that represent the most complex, the average, and the smallest orders. Start the performance collection and have the customer service representatives enter them. At the same time, have testers enter more mundane transactions as well. This will give you an excellent base from which to empirically measure any changes in application performance. When you run the performance analysis reports, select the specific jobs that ran the sample transactions. To gauge the overall health of the system during the test, use the system report. You will also need the transaction report, but you must have collected trace data to produce it. The transaction report has three portions:

1. The summary transaction report gives general job information and is useful for comparisons between tests.

2. The detailed transaction report gives detailed information about each transaction within a job. Transaction response times, I/O requests, and CPU usage are among the types of information available from this report.

3. The transition report provides information similar to the transaction report's; however, the data is shown for each job-state transition rather than for transactions delimited by waits for workstation input. This report helps you determine what was happening when a transition occurred, such as an unsatisfied attempt to lock a record for processing in your program.

A sample transaction report is shown in Figure 4. From this sample report, you can determine the exact amount and type of I/O and the components of the response time. At this point, you should have a performance baseline and a base set of transactions.
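Each portion of the transaction report is requested through the Print Transaction Report (PRTTNSRPT) command. A minimal sketch, assuming the PERFDATA member and that trace data was collected; the RPTTYPE values shown are the usual ones, but prompt with F4 if your release differs:

PRTTNSRPT MBR(PERFDATA) LIB(QPFRDATA) RPTTYPE(*SUM)    /* summary     */
PRTTNSRPT MBR(PERFDATA) LIB(QPFRDATA) RPTTYPE(*TNSACT) /* transaction */
PRTTNSRPT MBR(PERFDATA) LIB(QPFRDATA) RPTTYPE(*TRSIT)  /* transition  */

The SLTJOB parameter can narrow the reports to just the jobs that ran your sample transactions.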

When new software is installed, follow the same techniques to collect the performance information. Now you will have real evidence of the change in performance, and you will have the justification for an upgrade to CPU, memory, or DASD, or maybe all three. It's much easier to get approval for your upgrade based on facts rather than opinion. It will also allow you to explain, without recrimination, why your users should not expect better response times without an upgrade.

When you suspect a problem with a new or changed application, these analyses will go a long way toward focusing the programmers' energies on the appropriate section of the source code.

Three tools shipped with Performance Tools/400 are used to predict the future with regard to your system:

1. The performance monitor tells you the status of the system, how it has performed in the past, and what parts of the transaction make up total response time.

2. BEST/1 does modeling based on OS/400 release level, the existing or predicted hardware configuration, and your estimated transaction load.

3. The Performance Explorer is a combination of the Sample Address Monitor (SAM) and the Timing and Paging Statistics (TPST) products.

The classic reports of the performance monitor can be used to plot a trend line and, from its rate of change, predict how soon the system will be unable to support the load placed on it. For example, if total CPU utilization is trending up two percentage points a month, you can project roughly when it will cross whatever guideline you have set for acceptable response time. Key indicators provided by the performance monitor are CPU per transaction, I/O per transaction, and percentages of CPU used in various types of jobs on the system. The performance tools also include two programmer utilities to assist in the analysis of the application: the job trace utility and the analyze process access group utility.

BEST/1 is a true modeling tool. It uses collected performance data, as well as your current configuration. Add to that your own estimate of transaction growth, and you get a powerful picture of where you are and where you're going.

You can play games with the model by changing your release level or AS/400 model to see how your system will perform if you upgrade. There are two levels of BEST/1:

1. Basic provides access to high-level modeling and capacity planning functionality while requiring the user to have only some general knowledge of modeling and capacity planning concepts.

2. Advanced provides access to low-level modeling and capacity planning functionality.

Assuming you've followed this article's advice and faithfully collected performance data, BEST/1 will import that data to use as part of its model. How to make BEST/1 perform its magic is beyond the scope of this article, but you can always check out the book BEST/1 Capacity Planning Tool for more information.
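BEST/1 is reached through the PERFORM menu, or it can be started directly from the command line; a minimal sketch (the member name is an assumption for illustration):

STRBEST /* Start BEST/1; from its menus, build a model from a */
        /* collected performance member such as PERFDATA      */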

The Performance Explorer is a new twist on some older products. It combines the Timing and Paging Statistics and the Sample Address Monitor into one comprehensive tool used to analyze program performance.

Assuming that you've set baselines, done trend analysis, and so on, you can use this information to justify upgrading your AS/400 or to analyze the performance cost of a new application. You can also use it to zero in on performance problems and fix them.

Performance Tools/400 contains two utilities programmers can use to analyze individual programs. The job trace utility shows the programmer extensive detail about how much and what type of I/O is done, how many wait states occurred, and the names and numbers of programs, both application and system, that were called during the trace period. The most useful information is the number of full database opens, closes, and shared opens and closes done in the trace period.
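The job trace flow uses the Start Job Trace (STRJOBTRC), End Job Trace (ENDJOBTRC), and Print Job Trace (PRTJOBTRC) commands. A minimal sketch, run from the job to be traced, with defaults assumed and a placeholder program name; prompt each command with F4 to review the options on your release:

STRJOBTRC                  /* begin tracing the current job    */
CALL PGM(MYLIB/ORDERENTRY) /* exercise the program under study */
ENDJOBTRC                  /* end the trace and save the data  */
PRTJOBTRC                  /* print the job trace reports      */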

The analyze file and process access group utilities provide information about the program-to-file relationships and the usage of the files by the programs. This information is useful to find potentially poor usage of the database and to find more efficient database access methods. Use this report to help find the best access method for the database structure and to suggest changes to the structure of the database.

The Analyze Process Access Group (ANZACCGRP) command is used to determine three things (a usage sketch follows the list):

1. The status of the open operation, full open or shared open

2. How often the files opened in the process access group (PAG) are used and which files may be open but do not have any activity in them

3. The size of the PAG, which can help you determine the amount of main storage needed for the application and how best to set up the shared pool the application is routed to.

This command is most useful when you include a large number of like jobs in the analysis.
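Because the job selection options vary by release, the simplest approach is to prompt the command and pick the jobs from there; a minimal sketch:

ANZACCGRP /* prompt with F4 and select many like jobs */
          /* whose PAGs you want analyzed             */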

If you need more focus, try Performance Explorer, introduced at V3R6M0. It is a combination of two formerly standalone products: Sample Address Monitor (SAM) and Timing and Paging Statistics (TPST). Performance Explorer's data collection differs from the performance monitor's because it is specifically designed to provide detailed information on how specific programs work. Invoke Performance Explorer when you know what you want to look at, either from experience or from the results of performance collection and trend analysis.

Performance Explorer does its own data collection, but it cannot be used to analyze software that has the performance collection disabled. If you want to analyze a vendor's software shipped with performance collection disabled (some are), you will have to get the vendor to supply you with new software that has the collection enabled.

The general flow of the Performance Explorer requires five steps:

1. Create a definition. This informs OS/400 about the programs and processes that you want to study. The command to create the performance explorer definition is Add Performance Explorer Definition (ADDPEXDFN).

2. Start the data collection with the Start Performance Explorer (STRPEX) command.

3. Run the program you want analyzed for as brief a period as possible. Here is where you may want to use just one heavy transaction from your base transaction set.

4. End the collection with the End Performance Explorer (ENDPEX) command.

5. Print the reports and analyze the data. The reports are run with the Print Performance Explorer Reports (PRTPEXRPT) command.

A neat little program can be created to accomplish all of this (see Figure 5). The program first retrieves the complete job name and the system time. Then, it constructs a reasonably unique definition name, starts Performance Explorer data collection, and lets the collection run for the number of seconds given in the input parameter. Finally, the collection ends, and the reports are printed.
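To use it, compile the program and call it with the collection length, in seconds, passed as a three-character value to match the &DELAY declaration (the library name is a placeholder):

CALL PGM(XXX/COLPEXINF) PARM('120') /* collect for two minutes */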

Performance Explorer's reports are my favorite because I get great insights into how the program really runs. In the sample report, Figure 6, you can see exactly how many times a module was called, the number of calls the module makes (including calls to IBM-supplied modules), how much I/O is generated, and the amount of CPU that was used. The last column shows the call level of the module. This is very similar to the call stack on the Display Job (DSPJOB) command, but it includes all the modules used, even the lowest-level IBM program. Careful analysis of this report can show exactly where the performance problem is. By using the report printed by the *PROFILE type, a map of the data collected, you can even tell the number of times a line of code within your source code was executed and how much CPU was used in the execution of that line. This information is available for OPM programs as well as ILE programs. While a detailed look at the Performance Explorer is beyond the scope of this article, you can see that a tremendous amount of information is available to you. (To learn more about Performance Explorer, see "Collect System Data with Performance Explorer," MC, June 1997.)
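Producing that profile report takes a definition created for profiling and a matching print request. A minimal sketch with placeholder names; the PGM and TYPE parameter formats are assumptions here, so prompt with F4 to confirm them on your release:

/* Collect and print profile data for one program (names are placeholders) */
ADDPEXDFN DFN(PROFDFN) TYPE(*PROFILE) PGM((MYLIB/ORDERENTRY))
STRPEX SSNID(PROFDFN) DFN(PROFDFN)
/* ...run the sample transaction, then... */
ENDPEX SSNID(PROFDFN)
PRTPEXRPT MBR(PROFDFN) TYPE(*PROFILE)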

When you add new software to your system or upgrade existing software, the machine's performance often seems to degrade. System administrators may not expect a performance problem and may be surprised by the application's hunger for CPU resources and storage. By using the tools IBM has placed at your disposal, you can find performance problems before they start and predict the future system load. If system administrators join the programming staff in analyzing proposed software changes, most emergency upgrades and much user frustration can be avoided. Just ask for help from the users and programmers, include a performance review of the changes as part of the software upgrade plan, and execute the plan faithfully. Every time I've seen it done this way, the upgrade winds up a winner, and so do the system administrators.

Jim Oberholtzer is a management consultant with New Resources Corporation based in Milwaukee, Wisconsin. He has functioned as an IT director and a senior project director and has held many other titles at all levels of programming and analysis. Jim is recognized as a leader in the AS/400 technical and application engineering fields. He frequently speaks at local user group meetings and at COMMON on programming, performance, and managerial topics. You may contact Jim by email at joberhol@execpc.com or by phone at 414-798-7960.

BEST/1 Capacity Planning Tool (SC41-3341-02, CD-ROM QBKALS01)

Performance Management/400 Offerings and Services, including Performance Management/400-Subset (SC41-4347-00, CD-ROM QBJA9000)

Figure 1: Performance Tools/400 main menu

Figure 2: Set up timed performance collection

Figure 3: Add performance collection job

Figure 4: Sample transaction report

Figure 5: This program makes collection of PEX data easy

/*==================================================================*/
/* Program COLPEXINF (Collect PEX information)                      */
/* Authority: No special authority needed.                          */
/*==================================================================*/
/* To compile:                                                      */
/*                                                                  */
/*   CRTCLPGM PGM(XXX/COLPEXINF) SRCFILE(XXX/QCLSRC)                */
/*                                                                  */
/*==================================================================*/

PGM PARM(&DELAY)

DCL VAR(&DELAY) TYPE(*CHAR) LEN(3) /* Seconds to collect */
DCL VAR(&DELAYNBR) TYPE(*DEC) LEN(6 0) /* Numeric copy for DLYJOB */
DCL VAR(&JOB) TYPE(*CHAR) LEN(10)
DCL VAR(&NBR) TYPE(*CHAR) LEN(6)
DCL VAR(&USER) TYPE(*CHAR) LEN(10)
DCL VAR(&PEXNAME) TYPE(*CHAR) LEN(10)
DCL VAR(&TIME) TYPE(*CHAR) LEN(9)

/* Retrieve the qualified job name and the system time */
RTVJOBA JOB(&JOB) USER(&USER) NBR(&NBR)
RTVSYSVAL SYSVAL(QTIME) RTNVAR(&TIME)

/* Build a reasonably unique definition/session name from the */
/* first five characters of the user profile and the time     */
CHGVAR VAR(&PEXNAME) VALUE((%SST(&USER 1 5)) *TCAT +
                           (%SST(&TIME 5 5)))

/* Define the collection, start it, wait, end it, and print */
ADDPEXDFN DFN(&PEXNAME) DTAORG(*HIER)
STRPEX SSNID(&PEXNAME) DFN(&PEXNAME)
CHGVAR VAR(&DELAYNBR) VALUE(&DELAY) /* DLYJOB needs a decimal value */
DLYJOB DLY(&DELAYNBR)
ENDPEX SSNID(&PEXNAME)
PRTPEXRPT MBR(&PEXNAME)

ENDPGM

Figure 6: Sample performance collection report
