Taking a Snapshot Without Pasting
The October 1999 issue of MC noted how to copy the entire Windows desktop image to the clipboard by pressing Print Screen and then pasting it into another application such as Paint (“Zip into the Future,” MC, October 1999). Note also that you can copy an image of just the active window by pressing Alt+Print Screen. Screen shots of your AS/400 interactive programs make useful additions to user documentation. In 5250 sessions, however, Alt+Print Screen maps to “System Request” and therefore prevents you from capturing the active 5250 window. To overcome this, start a temporary 5250 session and delete the keyboard mapping for Alt+Print Screen inside that new session.
— Chris Ringer
Java I/O from Servlets
One of the benefits of doing record-level I/O on an AS/400 instead of using Java Database Connectivity (JDBC) is that, when your Java program runs on the AS/400 itself, the toolbox is smart enough to use direct database access instead of going through TCP/IP sockets. This is a big potential speed improvement. However, if you are doing toolbox record-level I/O within servlets in WebSphere, you may not see this benefit. The toolbox may still be going through the TCP/IP stack.
Before I explain how to fix the problem, you can see it for yourself by turning on the toolbox traces in your servlet. Put something like the code shown in Figure 1 in your servlet prior to your connectService(AS400.RECORDACCESS) statement.
This should create a file called ToolbxTr.log. If you don’t specify a path name, you will most likely get the log in the root of your AS/400 Integrated File System (AS/400 IFS). You may want to specify the file name as something like /home/ToolbxTr.log.
Run your servlet and then look in the trace file. You definitely have the problem if you have a line like this in the log file:
Wed Nov 10 08:27:36 GMT+00:00 1999 Requirements not met to use native
optimizations: UserID does not match local userid
This line means that the toolbox will go through TCP/IP Sockets for your database I/O, even though the servlet is really on the AS/400. The problem is that the toolbox checks
to see whether the user profile you used in making the toolbox connection is the same as the owner of the job running the servlet. For WebSphere, this means that you must connect to the AS/400 by using the QTMHHTTP user profile like so (of course, substituting the correct system name and password):
AS400 host = new AS400("MYSYSNAME", "QTMHHTTP", "PASSWORD");
try { host.connectService(AS400.RECORDACCESS); }
catch (Exception e) { }
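To put the pieces together, here is a minimal sketch (not part of the original tip) of a servlet that connects with QTMHHTTP and then does record-level I/O with the toolbox’s SequentialFile class. The system name, password, library, file, and field names (MYSYSNAME, MYLIB, CUSTOMER, CUSNAME) are placeholders; substitute your own.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.ibm.as400.access.AS400;
import com.ibm.as400.access.AS400File;
import com.ibm.as400.access.AS400FileRecordDescription;
import com.ibm.as400.access.Record;
import com.ibm.as400.access.SequentialFile;

public class CustomerListServlet extends HttpServlet {
   public void doGet(HttpServletRequest request, HttpServletResponse response)
         throws IOException {
      response.setContentType("text/plain");
      PrintWriter out = response.getWriter();

      // Connect with QTMHHTTP so the toolbox can use the native optimizations.
      AS400 host = new AS400("MYSYSNAME", "QTMHHTTP", "PASSWORD");
      try {
         host.connectService(AS400.RECORDACCESS);

         // Open a physical file member for record-level reads.
         String path = "/QSYS.LIB/MYLIB.LIB/CUSTOMER.FILE/%FILE%.MBR";
         SequentialFile file = new SequentialFile(host, path);
         file.setRecordFormat(
            new AS400FileRecordDescription(host, path).retrieveRecordFormat()[0]);
         file.open(AS400File.READ_ONLY, 0, AS400File.COMMIT_LOCK_LEVEL_NONE);

         // Read every record and write one field back to the browser.
         Record record = file.readNext();
         while (record != null) {
            out.println(record.getField("CUSNAME"));
            record = file.readNext();
         }
         file.close();
      } catch (Exception e) {
         out.println("Record-level I/O failed: " + e);
      } finally {
         host.disconnectAllServices();
      }
   }
}

If the trace shows that the native optimizations are being used, reads like these go directly to the database rather than through the sockets-based DDM server.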
Now, if you run the same traces on your servlet, you should see two lines like this in the log file:
Wed Nov 10 13:32:55 GMT+00:00 1999 Using native optimizations
Wed Nov 10 13:32:55 GMT+00:00 1999 No service connection necessary for native
service: as-ddm
Make sure that your servlet connects to the AS/400 with the QTMHHTTP user profile.
— Alex Garrison
try { Trace.setFileName("ToolbxTr.log"); }
catch (Exception e) { }
Trace.setTraceWarningOn(true);
Trace.setTraceInformationOn(true);
Trace.setTraceErrorOn(true);
Trace.setTraceDiagnosticOn(true);
Trace.setTraceDatastreamOn(true);
Trace.setTraceConversionOn(true);
Trace.setTraceOn(true);
Figure 1: Put this trace code in your servlet prior to your connectService(AS400.RECORDACCESS) statement.
New Query Performance PTFs
Figure 2 lists a set of PTFs that were recently made available and can have a direct impact on query performance. These new PTFs enable DB2 UDB for AS/400 to create indexes with a large logical page size of 64K. Most AS/400 indexes today have a logical page size of only 4K or 8K. This larger page size makes indexes more attractive to the query optimizer and allows database paging to be more efficient during a query. For example, after applying the PTFs, one customer saw a 40 percent decrease in runtime for a set of 15 queries that the customer frequently ran.
The larger page size is used only on indexes created over tables that are created with SQL. Therefore, database files created with DDS cannot benefit from this larger index page size. The 64K logical page size is used automatically for any index created over an SQL-created table. In addition, indexes created with the Create Logical File (CRTLF) command will not benefit from the larger page sizes.
Since a larger page size is not available for indexes created with an access path size of *MAX4GB, DB2 UDB for AS/400 attempts to rebuild existing indexes over SQL tables
as *MAX1TB indexes. You should be aware that DB2 UDB also cannot utilize the larger logical page size when an identical *MAX1TB index (built with a smaller page size) already exists over the table. In this case, the database manager has to share the existing index.
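To make the “SQL-created” distinction concrete, here is a small sketch (not part of the original tip) that creates a table and an index with SQL through the toolbox JDBC driver; the system, library, table, and column names are placeholders. Once the PTFs are applied, DB2 UDB for AS/400 chooses the 64K logical page size for such an index automatically.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateSqlIndex {
   public static void main(String[] args) throws Exception {
      // Placeholder system name, user, and password.
      Class.forName("com.ibm.as400.access.AS400JDBCDriver");
      Connection conn = DriverManager.getConnection(
         "jdbc:as400://MYSYSNAME", "MYUSER", "PASSWORD");
      Statement stmt = conn.createStatement();

      // The table is created with SQL (not DDS), so indexes built over it qualify.
      stmt.executeUpdate("CREATE TABLE MYLIB.ORDERS " +
         "(ORDNO INTEGER NOT NULL, CUSTNO INTEGER, ORDDATE DATE)");
      stmt.executeUpdate("CREATE INDEX MYLIB.ORDERS_IX1 ON MYLIB.ORDERS (CUSTNO)");

      stmt.close();
      conn.close();
   }
}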
After applying the PTFs, the easiest way to utilize the *MAX1TB and 64K logical page sizes in all your indexes and constraints associated with SQL tables is to perform the following steps:
1. Save the collection/library.
2. Delete the collection/library.
3. Restore the collection/library.
Restoration of the collection/library to the original collection/library name causes all indexes associated with SQL tables to be rebuilt with the 64K logical page size. The following Change Logical File (CHGLF) command can also be used to convert one index at a time:
CHGLF FILE(libname/indexnm) FRCRBDAP(*YES)
These query performance PTFs have been made available for V4R2, V4R3, and V4R4. DB2 UDB’s attempt to automatically switch indexes to the larger page size can cause index rebuilds to occur where they would not have occurred in the past. Your first action after converting your existing indexes to the bigger page size should be to update your database backup media with these objects. Otherwise, a restoration of an old database backup can potentially cause the indexes to be rebuilt with the new attributes. This is definitely a step that you want to avoid when using your database backups in a disaster recovery scenario.
Here are situations where you might experience an index rebuild that previously would not have occurred:
• Alter Table operations on SQL tables where a column is dropped, changed, or added
• Restorations of SQL tables, indexes, logical files, and constraints where save media still contain the older version of the indexes and the restoration process re-creates the indexes with the *MAX1TB and 64K page size attributes (The indexes are re-created only when the tables are re-created because they do not exist prior to the restore.)
• Restorations of SQL tables, indexes, logical files, and constraints where save media still contain the older versions of the indexes and the restoration process attempts to restore them over the older versions
— Kent Milligan
PartnerWorld for Developers, AS/400
OS/400 Version   PTFs
V4R2             SF56845, SF56846, SF59322, SF59325, MF23307
V4R3             SF56916, SF56917, SF59321, SF59323, SF59324, MF23319
V4R4             SF56911, SF56912, SF59318, SF59319, SF59320, MF23323
Figure 2: These new query-enhancing PTFs are now available.
Keeping DDS Sorted
SDA has an extremely convenient feature: the ability to sort the DDS of a display file by screen position. You access this function from the design work screen by pressing F4 and then F6. SDA then reports that it has sorted the fields.
Keeping DDS sorted helps maintenance programmers when they need to copy code from one member into another or when retrofitting changes made in one member into another. The problem, however, is that SDA also changes the “last maintained” date that each source line carries. That is to say, if you changed a field last week so that it now displays in high intensity, SEU will show that day from last week as the date the line was last maintained. Let SDA sort fields by screen position, however, and all source lines in that record format get today’s date as if they were changed today. Therefore, you lose the ability to track changes.
I needed a better tool than SDA for sorting DDS, and the result was the Sort DDS (SRTDDS) command. You simply give it the name of the source file (qualified with a library name, of course) and the name of the source member, and that’s it. SRTDDS takes care of the rest; it sorts all fields in all record formats in a single stroke (making it more convenient than even SDA, since the latter forces you to press F4 and F6 for each record format). Once done, SRTDDS places the sorted code back into the same source member.
SRTDDS is meant to process only display and printer file DDS, and only if field positioning is always done in absolute numbers. That is, no fields can be positioned by using the “+number” convention in the row or column number. If SRTDDS finds a plus (+) sign anywhere within the row or column number fields, it aborts the process and sends you an escape message.
The command processing program (CPP), SRTDDSC, first creates a temporary work physical file (SRTDDSW) in QTEMP and a “scratch” source file (SRTDDS), also in QTEMP. RPG IV program SRTDDSR1 is then executed to read the original source member and write the code it finds to SRTDDSW. Then, program SRTDDSR2 reads SRTDDSW sequentially by key (thus reading field definitions by screen position), writing the source code into QTEMP/SRTDDS. Finally, the new source code is copied back into the original member, replacing its previous contents and changing all sequence numbers to ensure that they are consecutive.
SRTDDS keeps all comments intact and all keywords in the same sequence in which they were originally entered. It also does not alter the order of the record formats. (One small caveat is that comments may change position within the source member.)
I became aware of a few things about DDS while creating this tool—things that no one usually thinks about consciously but that have to be accounted for when writing a program like this. The DDS feature that gave me the most grief was its ability to condition fields and keywords with multiple lines of conditioning indicators. For instance, consider the code in Figure 3. In the upper half, it’s easy to see that the second line also applies to FIELD1, which goes on row 2, column 3. So does the third line; that case is simple. More
complicated, however, is the lower half. When SRTDDSR1 reads its first line (which contains nothing but indicator 41), to which line does it belong? Actually, it belongs to the last line of the group, so indicators 41 and 42 apply to a field located at row 12, column 25.
It’s important to know these things, since each record is going to be sorted independently. The only things that keep related records together are the coordinates (i.e., row and column) of the field to which they apply, so I devised a method to take care of this difficulty. If SRTDDSR1 finds a DDS record that contains nothing but conditioning indicators, it continues reading more records until it finds either a keyword or a field name. At that point, it knows what those indicators condition and can assign the row and column numbers accordingly. Of course, it then has to backtrack the same number of records it advanced in doing its detective work. That’s why SRTDDSR1 contains a loop of READ statements in subroutine GETNEXT. As soon as the keyword or field name is found, it enters a loop of READP statements.
When installing SRTDDS on your system, be sure to install utility command Forward Program Messages (FWDPGMMSG) first, since it is used in CL program SRTDDSC. This command was published in “How to Forward Messages in CL,” MC, January 1998.
The code for this utility can be found at www.midrangecomputing.com/mc/.
— Ernie Malaga
Computer Solutions, Inc.
A FIELD1 2 3
A 01 02
AN 15N17 DSPATR(RI)
*
A 41
AO 42
AO 43 FIELD2 12 25
Compilation Instructions:
Install FWDPGMMSG first.
Figure 3: DDS’ use of multiple lines of conditioning indicators can be confusing.
Help with Help Panels
Q: I am trying to reference an existing IBM panel group help text in my new commands. Instead of retyping the User Interface Manager (UIM) for the help screen, I want to include a link to an existing IBM help panel. How do I find the names of existing panel groups so I can reuse them?
— David Andruchuk
A: Try using the Display Command (DSPCMD) command to locate the information you’re looking for. For example, you can use DSPCMD and give it a parameter value of WRKUSRJOB to display command information for the Work with User Jobs (WRKUSRJOB) command. Once the information is displayed, scroll to the second page (see Figure 4). In the middle of the screen, you’ll find the help panel name.
— Vadim Rozen
Figure 4: Use DSPCMD to find the names of help panel groups.
Finding What You Lseek
Q: Is there a POSIX API I can call from an RPG program, similar to CHAIN (i.e., a random access) or SQL, to position the file pointer at a specific spot in a file in the AS/400 Integrated File System (AS/400 IFS)?
— Danny Johnson
A: You can use the Lseek() external procedure to position the file pointer at any point in a file stored in the AS/400 IFS. This is more like the RPG op code Set Greater Than (SETGT) or Set Lower Limit (SETLL) than it is like CHAIN, but it’s as close as you can get with flat files. Lseek() is part of the UNIX-type APIs and follows the POSIX standard. Using Lseek(), you can position the file pointer in a number of ways. For example, if you have a flat file in the AS/400 IFS that has records that are 189 characters long and you want to start reading the file from the second record, you can use Lseek() to accomplish this. You can also read the file from the end or read it backwards, which is much like the ReadP op code in RPG. It works like this.
You open the AS/400 IFS file with the Open() external procedure call and then use the Lseek() external procedure call to position the file pointer. You must specify the OFFSET (i.e., where the next read will begin) and whether to read from the beginning of the file, the current file pointer position, or the end of the file. If the OFFSET contains a positive number when the Lseek() procedure is called, the next file access (e.g., a READ) will move forward through the file. However, if the OFFSET field contains a negative number, the next file access will move backward through the file.
You control whether the pointer is positioned from the beginning, the current position, or the end of the file by the value you place in the WHENCE parameter.
A value of 0 indicates that the OFFSET is relative to the beginning of the file; a value of 1 indicates that the OFFSET is relative to the current file pointer; and a value of 2 indicates that the OFFSET is relative to the end of the file.
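For comparison only (this is not part of the original answer), the same OFFSET and WHENCE logic can be sketched in Java with java.io.RandomAccessFile, whose seek() always takes an absolute byte position. The path and record length below are placeholders patterned on the example that follows.

import java.io.RandomAccessFile;

public class SeekDemo {
   public static void main(String[] args) throws Exception {
      RandomAccessFile raf = new RandomAccessFile("/Shannon/customers.csv", "r");
      long recordLength = 190;   // 189 data characters plus a 1-byte delimiter

      // WHENCE = 0: offset relative to the beginning of the file
      raf.seek(recordLength);                          // start of the second record

      // WHENCE = 1: offset relative to the current file pointer
      raf.seek(raf.getFilePointer() + recordLength);   // skip ahead one more record

      // WHENCE = 2: offset relative to the end of the file
      raf.seek(raf.length() - recordLength);           // start of the last record

      byte[] buffer = new byte[(int) recordLength];
      raf.readFully(buffer);                           // read the record at the pointer
      System.out.println(new String(buffer));
      raf.close();
   }
}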
Look at the code segment in Figure 5. In the complete code (which you can download from the MC Web site at www.midrangecomputing.com/mc/), I’ve included an Lseek() prototype near the beginning of the program and an SkFile procedure (shown in Figure 5), which uses the Lseek() external procedure call, at the bottom. In this example, I
have a comma-separated file stored in a directory named Shannon in the AS/400 IFS. Each record in this file is 189 characters long, and I want to start reading from the second record. I accomplish this by setting the file pointer OFFSET to 190 (i.e., record length + 1) and setting the WHENCE parameter to 0 to indicate that the OFFSET should start from the beginning of the file. After calling the Lseek() procedure, I set WHENCE to 1 so that, the next time through this procedure, the OFFSET will start relative to the current file position.
Now all I do in the rest of the program is read the record (beginning at the OFFSET located by Lseek()) and then loop until a carriage return is found. At that point, I write the data to a physical file and execute the SkFile procedure call again, positioning the file at the next record. Once I hit the end of the file, I exit the loop, close the AS/400 IFS file, and end the program.
— Shannon O’Donnell Senior Technical Editor
Midrange Computing
** SkFile - Subprocedure To Position to record in file
P SkFile B Export
D SkFile PI 100A
D Filepath 100A Const
D Offset S 4b 0
D Fil_Pos S 10i 0
D Whence S 10i 0 inz(0)
* Use LSeek to position to the next record (Offset + record length) in the file.
* The record length is 189. By setting the seek pointer to 190, the seek
* will begin at the second record on the first pass, and then it is positioned
* to the beginning of the next record(s) on each successive pass.
C Eval Offset = Rec_Pos + 190
C Eval Fil_Pos = Lseek(Fp: Offset: Whence)
* Set "Whence" to SEEK_CUR (Seek from current file position)
* After first read of file
C Eval Whence = 1
C Eval Rec_Pos = Offset
C Return Error_Flag
P SkFile E
Figure 5: Here’s part of the code for performing a random access on an AS/400 IFS flat file.
Need Permission?
Suppose that you are in a program that adopts authority and you use the Check Object (CHKOBJ) command with the AUT parameter as follows:
CHKOBJ OBJ(name) OBJTYPE(type) AUT(authority)
You will be checking the authority of the owner of one of the programs in the program stack rather than that of the user running the program. This is probably not what you want. Because the command processing program (CPP) for CHKOBJ has the program setting USEADPAUT(*YES), it may inherit the authority of the calling program rather than that of the user running it. If one of the calling programs adopts QSECOFR authority, checking authority by CHKOBJ is meaningless, as QSECOFR has *ALLOBJ authority. (The April 1999 “Security Patrol” explains this quirk in more detail.)
There are non-IBM-supported commands for checking the user’s authority within a program that adopts authority. If you are still on V3R1, you could create a Check User Authority (CHKUSRAUT) command in the TAATOOL library. (See “From the Toolbox: The Check User Authority [CHKUSRAUT] Command,” MC, January 1994.) However, if you are on a later release, you might find that IBM has erased the source for old TAATOOL utilities during the release upgrade, thus making this utility unavailable to you.
You could purchase the TAA Productivity Tools from Jim Sloan, Inc., which includes the CHKUSRAUT command plus another related command, Retrieve Object Authority (RTVOBJAUT). However, while I think that TAA Productivity Tools is a good investment, your resident Uncle Scrooge might think otherwise.
Thus, I present the CHKUSRAUT command as an alternative. CHKUSRAUT is similar to CHKOBJ but checks user authority rather than program owner authority when used in an adopted-authority program. CHKUSRAUT is faster than TAATOOL’s CHKUSRAUT command (as the latter must create and process OUTFILEs) and is simpler to use than TAA Productivity Tools’ Retrieve Object Authority (RTVOBJAUT), which has many parameters. The main drawback of CHKUSRAUT is that you must remember to change the USEADPAUT setting for the CPP to *NO after compiling or recompiling the program. If you don’t remember to do this, programs using CHKUSRAUT will be very permissive.
To be perfectly rigorous from a security point of view, when using CHKUSRAUT, you should qualify it with the library where it is stored rather than defaulting to *LIBL. Also, when compiling the command, you should specify the library where the CPP is stored rather than defaulting to *LIBL. This thwarts an in-house hacker who places a rogue program higher on the library list than the library containing CHKUSRAUT. Another alternative is to put CHKUSRAUT’s library in the system portion of the library list.
The source code for this tip can be found on the MC Web site at www.midrangecomputing.com/mc/. The CPP requires the Forward Program Messages (FWDPGMMSG) command featured in “How to Forward Messages in CL” in the January 1998 issue of MC.
— Richard Leitch
No More Tedious MOVE Statements
If you are like me and are tired of coding pages and pages of MOVE statements in your RPG programs, take note. The next time a vendor requires you to send one file containing two or more different record layouts, do not fear. Go ahead and populate those two or three files and then combine them into one by using data structures with the EXTNAME keyword. Just give each data structure a name and key in the file name in parentheses after the keyword. (Note: No data structure subfields need to follow the data structure name.) Then read through each of the files, moving the data structure name to the target flat file field name.
You’ve just moved the entire record to a flat file. You didn’t have to predefine data structure variables for each field of each file, and you didn’t have to MOVE every record field to them. Depending on the number of fields involved, you can save 50 to 1,000 lines of code with this technique.
— Steven W. McConnell
Lose Those Blanks
Some folks have problems when FTPing a file from the AS/400 to a PC: FTP tends to trim blanks from the end of each record. So if your file has a record length of 100 but the longest record contains only 85 characters of data (followed by 15 blanks), the record length on your PC will be 85.
To fix this, use this command before sending the data to tell the AS/400 not to trim ending blanks:
LOCSITE TRIM 0
You can also use the following commands:
LOCSITE TRIM 1
LOCSITE TRIM
The former of these two commands will trim blanks; the latter will display the current value for TRIM.
— Bradley V. Stone