Tuning V4R4's Query Optimizer the Easy Way

If you don’t normally tune your database queries because the runtime attributes are too hard to locate and set, it may be time to reconsider your decision to avoid tuning. With V4R4, all the performance and runtime attributes that affect how a query runs have been centralized in a new, externally described query options file: QAQQINI. Because these options are stored in a file instead of scattered across several system objects, you can access and set your database query environment more easily. The settings in this file affect the way your system processes any form of query that uses the query optimizer. That includes SQL, Query Management, Open Query File (OPNQRYF), ODBC, Java Database Connectivity (JDBC), Net.Data, and Query/400, as well as third-party query products. The term query is used generically in this article to include all forms of query and SQL.

Rather than use commands to set these attributes, you can now use SQL itself to set your runtime environment. This new capability goes a long way toward simplifying client/server applications, which can now adjust their runtime environment by using SQL, a mechanism that is much more accessible to this type of application than a system command, data area, or system value. However, if your applications currently rely on CL commands, system values, or data areas to set their database runtime environment, you don't need to make any changes. You simply have two ways to set your database access environment because the old commands and values are still supported. Using the old commands has the same effect as updating the new query options file because, in addition to updating the system objects, the old commands now update the query options file. The system also uses triggers over the new query options file to keep values that are stored in system objects synchronized with the new file.

Way Back When...

Prior to V4R4, setting query environment values was a real chore: several system values, a data area, and various commands could be used to control the way queries were executed. If you create a query options file to centralize your query settings, all your applications that rely on these system values, data areas, or commands will work the same way, provided you set the query options file in QUSRSYS to your current values. What you need to be aware of are the QQRYDEGREE and QQRYTIMLMT system values and the QQQOPTIONS data area. Read on to find out how these values are set in the new query options file.

Use the Display System Value (DSPSYSVAL) command to determine what your existing system values are and the Display Data Area (DSPDTAARA) command to see what is specified in the QQQOPTIONS data area. It is likely that you do not have a QQQOPTIONS data area in QUSRSYS, because this data area was made available via a PTF on V4R3.
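For example, the following commands display those settings (DSPDTAARA will simply report that the object was not found if QQQOPTIONS does not exist on your system):

DSPSYSVAL SYSVAL(QQRYDEGREE)
DSPSYSVAL SYSVAL(QQRYTIMLMT)
DSPDTAARA DTAARA(QUSRSYS/QQQOPTIONS)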

What Are the Options?

As of V4R4, the query options file controls nine query environmental attributes used by the Query Optimizer. These environmental attributes and the values they may have are shown in Figure 1. There is one record for each of these attributes in the query options file. Each record in the query options file contains the name of the attribute as well as its current value, and the Query Optimizer uses these values to decide how a query should be run. Text describing the setting can also be specified. The query options file is externally defined with three fields, QQPARM, QQVAL, and QQTEXT. When a query is run, it is unlikely that all the values will be used. Some values affect only remote, multiprocessor, and distributed system queries; others affect only queries such as those that contain references to user-defined functions or parameter markers.
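Because QAQQINI is an externally described file, you can review the current settings with an ordinary query. This is only a sketch: qryoptlib is a placeholder for whatever library holds your copy of the file (creating that copy is covered below):

SELECT QQPARM, QQVAL, QQTEXT FROM qryoptlib/QAQQINI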

In addition to the query values shown, each option supports the special value *DEFAULT. Specifying *DEFAULT is the same as using the default value shown in Figure 1. A more complete description of these parameters, as well as a set of instructions for optimizing how queries are run, can be found in the DB2 for AS/400 SQL Programming manual.

There are three options you will most likely be concerned with when a query is not performing as well as you would like: FORCE_JOIN_ORDER, OPTIMIZE_STATISTIC_LIMITATION, and PARALLEL_DEGREE. If your query involves distributed files, APPLY_REMOTE and ASYNC_JOB_USAGE may also affect how quickly your queries complete. The parameter to pay special attention to on long-running queries is FORCE_JOIN_ORDER. The Query Optimizer was changed in V4R4 so that it no longer forces the join order of files when the JOIN keyword is used on the FROM clause. If you have queries that run slower on V4R4 and that use JOIN, you should try setting the FORCE_JOIN_ORDER value to *YES. For further information on optimizing queries, see the DB2 for AS/400 SQL Programming guide, or visit IBM's Partners in Education Web site and work through the Internet-based tutorial titled DB2 UDB for SQL Performance & Query Optimization. You can find this tutorial at www.as400.ibm.com/developer/education/ibcs.html under Business Intelligence.
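As a preview of the technique covered later in this article, once your job is using its own copy of the options file (assumed here to be in QTEMP), a single SQL statement forces the join order for that job:

UPDATE QTEMP/QAQQINI SET QQVAL = '*YES' WHERE QQPARM = 'FORCE_JOIN_ORDER'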

Using a Query Options File

The first step toward quicker queries is to create your own query options file. Because the query options file needs triggers to operate properly, you should use the Create Duplicate Object (CRTDUPOBJ) command, which duplicates trigger information, to create query options files. Enter the CRTDUPOBJ command from a command line, specifying the library that will be used in the TOLIB parameter as follows:

CRTDUPOBJ OBJ(QAQQINI) +
          OBJTYPE(*FILE) +
          FROMLIB(QSYS) +
          TOLIB(xxx) +
          DATA(*YES)

Once you have created a new query options file, use the Change Query Attributes (CHGQRYA) command to make your job use that file so your option values take effect.
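For example, if you duplicated the file into the library shown as xxx in the CRTDUPOBJ command above, the CHGQRYA command's QRYOPTLIB parameter names that library:

CHGQRYA QRYOPTLIB(xxx)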


Setting Your Options

Once you have created a query options file and set your job to use it, specify your option values. You can use either a program or SQL to set the option values. The trigger on the query options file will prevent you from specifying an invalid value. To update a value by using SQL, set the QQVAL field to your value, with QQPARM equal to the name of the value you are updating. For example, the following statements will set your query file in QTEMP to use all default values except the query time limit, which will be set to 30 seconds:

UPDATE QTEMP/QAQQINI SET QQVAL = '*DEFAULT'
UPDATE QTEMP/QAQQINI SET QQVAL = '30' WHERE QQPARM = 'QUERY_TIME_LIMIT'

If you are not sure that the value you are setting is in the query options file, use the SQL Delete and Insert statements like this:

DELETE FROM QTEMP/QAQQINI WHERE QQPARM = 'QUERY_TIME_LIMIT'
INSERT INTO QTEMP/QAQQINI (QQPARM, QQVAL) VALUES('QUERY_TIME_LIMIT', '30')

We’re Off to See the Wizard!

Another way to create a query options file and set its values is to use the Custom Query Options Builder Web page at www.as400.ibm.com/developer/bi/tuner.html. Here, you will find the IBM rendition of a wizard. This wizard guides you through the steps to generate an SQL script that will set the values in your query options file. In this wizard, you specify both the library that will contain your query options file and the values that will be used to set each of the query options. After you have chosen your values, press the Build button at the bottom of the screen, and a script that reflects your choices will be generated and displayed in a new window.

The easiest way to run the script is to use Operations Navigator. Start Operations Navigator, expand the options by clicking on the plus (+) sign for your system, and right-click on the Database tab. Then, select "Run SQL Scripts..." from the displayed options. A window that allows you to run SQL scripts will appear. The script generated by the wizard uses the AS/400 naming convention, so you will want to set the naming convention in Operations Navigator to *SYS. To set the naming convention, select ODBC Setup from the Connections drop-down menu at the top of the window. The ODBC Setup dialog will appear. Select the Format tab and set the naming convention to *SYS. Operations Navigator retains this setting, so you will have to perform this step only once. You are now ready to run the script. Cut and paste the script into the "Run SQL Scripts..." window, and then select All from the Run drop-down menu at the top of the window.

If you do not use Operations Navigator, you have several other options. You can cut and paste the script statements into an interactive SQL session and run them, or you can place the statements into a source member and run them by using the Run SQL Statement (RUNSQLSTM) command.
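For example, assuming the script was copied into a hypothetical source member QRYOPT in source file QSQLSRC of library MYLIB, the command might look like the following; NAMING(*SYS) matches the AS/400 naming convention the wizard generates, and COMMIT(*NONE) avoids commitment-control errors if your options file is not journaled:

RUNSQLSTM SRCFILE(MYLIB/QSQLSRC) SRCMBR(QRYOPT) +
          COMMIT(*NONE) NAMING(*SYS)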

Queries That Run Like a Cheetah

There are many ways to use the query options file, but one strategy you may consider is to set up a query options file in QUSRSYS with the most commonly used default values. Grant your users object operational, object management, read, and execute rights to the file. Jobs that do not use the CHGQRYA command to set the query options file library will then use the file in QUSRSYS.
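As a sketch of the authorities just mentioned, assuming you want every user profile to be able to use and maintain the QUSRSYS copy, the grant might look like this (tighten the USER parameter to suit your own security policy):

GRTOBJAUT OBJ(QUSRSYS/QAQQINI) OBJTYPE(*FILE) +
          USER(*PUBLIC) AUT(*OBJOPR *OBJMGT *READ *EXECUTE)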

Once you have created your default query options file in QUSRSYS, you need to set the values in this file. As a starting point, you can specify *DEFAULT for each value. Depending on what you want your default strategy to be, you can set QUERY_TIME_LIMIT to *NOMAX or to a value such as 30 seconds, which will prevent long-running queries. If you use the *NOMAX default strategy, you can create a duplicate of the options file in QTEMP in the initial program for interactive jobs. Then, run the CHGQRYA command to point to the query options file in QTEMP. Next, update QUERY_TIME_LIMIT to an appropriate value for your interactive jobs.
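A minimal sketch of those initial-program steps follows; the 30-second limit is only an example value:

CRTDUPOBJ OBJ(QAQQINI) FROMLIB(QUSRSYS) +
          OBJTYPE(*FILE) TOLIB(QTEMP) DATA(*YES)
CHGQRYA QRYOPTLIB(QTEMP)

Then, from SQL, set the limit for the job:

UPDATE QTEMP/QAQQINI SET QQVAL = '30' WHERE QQPARM = 'QUERY_TIME_LIMIT'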

If you want to be heavier-handed, you may want to prevent all long-running queries, unless the job from which they are being run changes the QUERY_TIME_LIMIT value. In this case, you would create a duplicate of the query options file in QTEMP in the jobs where you would like to allow long-running queries. In those jobs, set QUERY_TIME_LIMIT to either an appropriate value or *NOMAX.

Running the CHGQRYA command for queries initiated on the AS/400 is no problem, but if you are using the query options file in a client/server environment, running this extra command presents a challenge. To address this challenge, you will need to use another of SQL's newer features: the ability to call programs as SQL procedures.

To create and set the query options file from SQL, I have created two procedures: Create Query Options File (CRTQRYOPTF) and Set Query Options File (SETQRYOPTF). There are two types of procedures: external procedures and SQL procedures. Creating an SQL procedure requires fewer steps, but the ILE C compiler is required. To get around this limitation, I have also created CL versions of these procedures. (You can download either the SQL source or the CL source for these procedures at www.midrangecomputing.com/mc.) These procedures execute the CHGQRYA command to set the query options file to a library that is passed as a parameter by using an SQL Call statement.

To call the procedures, embed SQL Call statements in your applications. The statements to call these procedures are CALL objlib/CRTQRYOPTF ('QSYS', 'qryoptlib') and CALL objlib/SETQRYOPTF ('qryoptlib'). Replace qryoptlib with your query options file's library. Instructions for creating these procedures can be found in the heading of each procedure's source.
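The downloadable source contains the actual procedures; purely to illustrate the idea (this is not the downloaded code), an external procedure that wraps CHGQRYA could be built from a small CL program, shown here with the hypothetical name QRYOPTCL:

PGM PARM(&QRYOPTLIB)
  DCL VAR(&QRYOPTLIB) TYPE(*CHAR) LEN(10)
  CHGQRYA QRYOPTLIB(&QRYOPTLIB)
ENDPGM

It could then be registered so that SQL can call it:

CREATE PROCEDURE objlib/SETQRYOPTF (IN QRYOPTLIB CHAR(10))
  LANGUAGE CL
  EXTERNAL NAME objlib/QRYOPTCL
  PARAMETER STYLE GENERAL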

To override query attributes for a job, create a duplicate of the options file from QUSRSYS to QTEMP and then make that file active by using the CRTQRYOPTF and SETQRYOPTF procedures. Next, set your overridden query options by using the SQL Update statement.

Query Curiosities

When I first used QAQQINI in QSYS, I found that the file did not contain a record for the FORCE_JOIN_ORDER key. The absence of a value is treated the same way as a value that has *DEFAULT specified, so this did not affect any queries.

I also found that setting MESSAGES_DEBUG to *YES did not have the same effect as running a query in debug. Instead of receiving “all the Query Optimizer debug messages that would normally be issued in debug” as the manual indicated, I received a single new message saying to run the query in debug mode for performance information. I found that setting the value to either *YES or *NO caused this message to be sent but had no effect on debug messages.

What Does This All Mean?

Optimizing how a query runs is still not trivial, but centralizing all the optimization and runtime environment values into a single file sure makes optimizing queries a lot easier. As you begin to use the query options file, you will probably find that optimizing queries is not that hard. If you use queries in your client/server applications, you now have an easy way to improve the performance of those applications.

The query options file also makes it easier for IBM to change the Query Optimizer. To support optimizer changes, IBM can add new records to the query options file and doesn't have to worry about changing commands or adding new system values. This makes it more likely that IBM will release optimizer changes quickly as PTFs.

The query options file is a new feature that still has a few minor problems, but if you have been stymied in the past by the overwhelming set of commands and options required to tune a query, you will welcome the simplicity of this new query options file. If you have been waiting for a better way to improve the performance of your database queries, there is no better time to start than now.

References and Related Materials

• DB2 for AS/400 SQL Programming (SC41-5611-03, CD-ROM QB3AQ803)
• DB2 for AS/400 SQL Reference (SC41-5612-03, CD-ROM QB3AQ903)
• IBM Custom Query Options Builder Web page: www.as400.ibm.com/developer/bi/tuner.html
• IBM Internet-Based Tutorials Web site: www.as400.ibm.com/developer/education/ibcs.html

Value Name and Values (Default Value Listed First)

APPLY_REMOTE
  *NO        Local query options not used for remote queries
  *YES       Local query options used for remote queries

ASYNC_JOB_USAGE
  *LOCAL     Asynchronous jobs possibly used for queries over local tables
  *ANY       Asynchronous jobs possibly used by any query
  *DIST      Asynchronous jobs possibly used for queries over distributed tables
  *NONE      No asynchronous jobs used

FORCE_JOIN_ORDER
  *NO        Tables joined based on table statistics
  *YES       Tables joined in the order in which they are specified

MESSAGES_DEBUG
  *NO        Optimizer messages not sent to the job log
  *YES       Optimizer messages sent to the job log

OPTIMIZE_STATISTIC_LIMITATION
  *DEFAULT   Amount of time spent gathering optimization statistics determined by the Query Optimizer
  *MAX_NUMBER_OF_RECORDS_ALLOWED (1-2147352578)
             Tables with fewer than the specified number of records analyzed by the optimizer (larger tables use default values)
  *NONE      Statistics not gathered; default values used
  *PERCENTAGE (1-99)
             Specified representative percentage of index used to gather optimizer statistics

PARALLEL_DEGREE
  *SYSVAL    QQRYDEGREE system value used
  *IO        Parallel processing not restricted
  *MAX       Optimizer assumes that all system resources are available when determining whether parallel processing is used
  *NONE      No parallel processing
  *OPTIMIZE  Job's share of available system resources used to decide whether parallel processing is used

PARAMETER_MARKER_CONVERSION
  *YES       Literal values can be used as parameter markers
  *NO        Literal values not used as parameter markers

QUERY_TIME_LIMIT
  *SYSVAL    QQRYTIMLMT system value used
  *NOMAX     No maximum time limit imposed
  0-2147352578
             Maximum number of seconds that a query can run as determined by the Query Optimizer

UDF_TIME_OUT
  *DEFAULT   Amount of time for user-defined functions to complete determined by the system
  *MAX       User-defined functions allowed to run for the maximum allowable time
  1-999      Maximum number of seconds that a user-defined function can run


Figure 1: Here's a summary of the various values for the query options file.


DAVID MORRIS
David Morris has worked with and written about a variety of technologies, including ILE, RPG, business intelligence, SQL, security, and genetic programming. Today, David is developing Web applications that run on the iSeries using RPG, Java, and XML as well as writing about these technologies for technical journals.
