
The Correcting Compiler


Back in prehistoric times (about 15 years ago, when I studied computer programming at the State University), we learned programming on an IBM mainframe. Although we could use any compiler we wanted, our classes, being full of novices, used a version of PL/I known as PL/C. The great thing about PL/C was that we could make almost any number of errors, and PL/C would gamely try to make a sensible program out of our insults. The idea wasn't to throw in a completely nonsensical source program, but rather to have the compiler catch and correct some of the simple errors that the industrial-strength compilers choked on. PL/C caught things like undefined variables, missing semicolons, and improper operations.

For those of you wondering about the worth of such a compiler, the value to us was that we got as much mileage as we could from each compile. If PL/C was able to correct simple errors and produce an executable program, we could actually run the patched-up program and start debugging with real output. The theory was that a program probably had any number of bugs to start with, bugs that couldn't come to light until the program was run. Annoyances like getting past compiler syntax checking simply delayed the process of testing the program, so it was better to get something running.

Errors, Errors

In many shops, the situation is similar to those long-ago times. If you are working with many programmers on one machine, you probably submit compiles to batch, then wait your turn. I would guess that many of us also compile programs that have syntax errors. So sometimes we wait a good while, only to find that we have done something like using a field without defining it, or omitting an END statement. Those types of errors fall somewhere between syntax errors and logical errors. Syntax errors are things that SEU can catch, such as a misspelled opcode. Logical errors are beyond SEU and sometimes beyond the compiler's ability to catch; setting up a conditional test incorrectly is one example. Logical errors are usually exposed only through testing, or through users reporting troubles.

The errors in between are the ones that cause problems during the program development phase. I would guess that I have seen thousands of 7030 errors ("The field or indicator is not defined."), and I have often wished that the compiler were smart enough to keep going.

As it turns out, we can make the compiler smarter, in terms of how we want it to perform. The key to this is to identify the compile-time errors that we can agree to accept and to inform the compiler of our choices. This is done by examining the error messages that the compiler uses and changing the message severity level.

For example, if you look at a compile listing that has a 7030 error, you see that the severity level for that message is 30. Since you probably compiled your program with the GENLVL parameter set to its default of 9, your program failed to compile. If you have a listing with a 7030 error, locate the undefined field in the cross reference section. You will see that the compiler contextually determined that the field was an alphameric or a numeric field. It even decided that the alphameric field was four characters long, and the numeric field was five digits with zero decimals.
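Before changing anything, it is worth displaying the message description to confirm its current severity and text. Here is a minimal sketch, assuming the RPG compiler messages live in QRPG/QRPGMSG as they do on most systems:

 DSPMSGD RANGE(QRG7030) MSGF(QRPG/QRPGMSG)

This is the same DSPMSGD command that the REXX program in Figure 2 uses to dump the entire message file.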

Since the compiler was willing to go that far, then why couldn't it just use those definitions and at least give us a compiled program? After all, if we were told about the error but given the chance to run the program, we could then save at least one compile, and possibly have the chance to discover some other errors that could be fixed, in addition to the field definitions.

A first impulse might be to compile the program with GENLVL(31), but that might not be a good idea. The problem with that approach is that you allow several hundred other errors to live. The program that would be generated from such a compile might be so difficult to run and test that there would be no value to that approach.
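For reference, GENLVL is simply a parameter on the compile command. The following is only a sketch; the library and program names are placeholders, and GENLVL(31) is shown to illustrate the parameter, not to recommend it:

 CRTRPGPGM PGM(xxx/MYPGM) +
           SRCFILE(xxx/QRPGSRC) +
           GENLVL(31)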

Forbidden Fruit

In some ways, changing the behavior of the compiler seems risqué. After all, there must be quite a good reason why the compiler stops when it gets certain errors. For many errors, there probably is. But for something like field definitions, you might decide that you can temporarily live with the compiler's assumptions.

You change the compiler in a controlled fashion by changing the errors that annoy you. For example, to accept the defaults for fields or indicators, you could use the CHGMSGD command shown in Figure 1, or you could use WRKMSGD and change the message from the display. Be aware that any changes you make will be lost when you update to a new release and must be made again.
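If you take the WRKMSGD route, a command along these lines should position you at the message so you can change it from the display (a sketch; verify the parameters with the command prompter on your release):

 WRKMSGD MSGID(QRG7030) MSGF(QRPG/QRPGMSG)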

Message QRG7030 is somewhat misleading, in that neither the first- nor the second-level message text tells you that the compiler has made assumptions about the field or indicator. In addition to defining alpha and numeric fields, the compiler will define an indicator, set to an initial value of *OFF, if you change the severity level so that it is lower than the GENLVL that you use. You can see this by changing the message, compiling the program with GENOPT(*LIST) specified, and then examining the IRP listing. If you haven't looked at an IRP listing, it may take some getting used to, but you will find your undefined fields and indicators in there, with Declare (DCL) statements. We will be looking at IRP listings in an upcoming article.
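To produce that IRP listing, the compile would look something like this sketch (again, the library and program names are placeholders):

 CRTRPGPGM PGM(xxx/MYPGM) +
           SRCFILE(xxx/QRPGSRC) +
           GENOPT(*LIST)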

You will notice, if you change the severity level of QRG7030, that you still get the error message in the message summary, and the 7030 is printed next to the field name in the cross reference listing. Changing the severity level so that the program compiles does not let you off the hook; if anything, you must be even more diligent about examining the program listing. Even if the compiler assumptions for field lengths are what you want, you should explicitly define the fields so that you won't get the 7030 message next time. You'll want to do this to make your intentions clear, to avoid frightening other programmers, and to make your source capable of compilation on other systems.
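In RPG/400, the usual way to define such work fields explicitly is on calculation specifications that carry length and decimal entries. This is only a sketch with hypothetical field names, using the same sizes the compiler assumed above; column alignment matters in fixed-format RPG:

     C* FLD1: four-character alphameric work field
     C                     MOVE *BLANKS   FLD1    4
     C* FLD2: five-digit, zero-decimal numeric work field
     C                     Z-ADD0         FLD2    50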

That's Not All

It is quite nice to be able to effect such a pleasant change in the compiler. But the 7030 message is not the only one you should investigate. There are literally hundreds of other messages that you might want to change. The problem is how to identify them.

My first impulse was simply to print the QRPGMSG message file, so I did. The listing was 27 pages long. I started through it with a yellow marker, but that quickly grew tiresome. What I wanted was a list of compiler error messages with severity 20 or 30. However, I didn't want to write the requisite CL and RPG programs to print such a list; it was late at night when I was doing this. So, using my new-found and tenuous knowledge of REXX, I contrived the program shown in Figure 2 to produce a list of messages that I wanted to consider changing. To make this program AS/400-like, I run it as the Command Processing Program for a command I defined, Print Severe Messages (PRTSEVMSG). The Command Definition source is shown in Figure 3, and the command to create the PRTSEVMSG command is shown in Figure 4.

If you are unfamiliar with REXX, you'll see by looking at the listing that there is a good deal of CL followed by some incomprehensible looping construct and then, evidently, some character string processing. So I'll explain it to you, then you can try it on your system and modify it to fit your needs.

The first line is a comment, just like in CL. The next two executable lines, starting with parse, are used to break up the qualified message file name passed from the command. The first parse line picks out the library, which is between the literal "MSGF(" and "/". The second parse line picks out the message file name, which follows the "/" and comes before the closing ")". Quite simply, REXX sees what you would see if you entered the command in keyword format, minus the name of the command itself.
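To make the templates concrete, here is a minimal, self-contained sketch that applies the same templates with parse var to a hypothetical argument string (the real program uses parse arg on the string the command passes in, but the template logic is identical):

 /* Hypothetical argument string, as described above */
 cmdstring = "MSGF(QRPG/QRPGMSG)"
 parse var cmdstring 'MSGF(' library '/'   /* library gets 'QRPG'    */
 parse var cmdstring '/' msgf ')'          /* msgf gets 'QRPGMSG'    */
 say library msgf                          /* displays: QRPG QRPGMSG */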

The section of CL commands is easy to understand. The commands are enclosed in single quotes, to indicate to REXX that these are not REXX commands, but rather, commands to the external environment, which in this case, is the OS/400 command processor. Notice that the "dspmsgd" command was too long to fit on one line, so I used the REXX continuation character, the ",", to indicate that it continued onto the next line.

The intent of the CL commands, from "dspmsgd" through "dltsplf," is to get a copy of the message file into a database file.

The OVRDBF commands are used to tell REXX where to get its input and where to place its output. In this program, I told REXX that its Standard Input (STDIN) is to come from the database file of messages that was just created, rather than from the terminal. Also, I want Standard Output (STDOUT) to be directed to file QSYSPRT, which is a print file.

The loop construct, "do forever," is not quite as laissez faire as it first seems. The construct is controlled by the two following statements, which direct REXX to retrieve a record from file QTEMP/RCD132 and place the record into the variable "data," and then to test "data." If "data" is null, meaning end of file, then I want to leave the loop. The "do forever," parse and test construct is used because REXX doesn't have an operation to specifically read from a file and indicate end of file, such as the RPG READ opcode.
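Stripped of the message-specific tests, the read-until-end-of-file pattern looks like the following sketch, which simply echoes each record; it assumes STDIN has already been overridden to the database file:

 do forever
    parse linein data         /* read the next record from STDIN   */
    if data = '' then leave   /* a null record means end of file   */
    say data                  /* process the record; here, echo it */
 end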

Assuming that I have a record, I check to see if it's for a message with severity level 20 or 30. The severity number is in positions 11 and 12 of the record; I found that out by looking at QTEMP/RCD132 with DSPPFM. If the record is for a message with those severity levels, I want to do some more testing.

The additional testing that I want to do is somewhat specific to my requirement: I want a list of RPG compiler error messages whose severity level I might want to change. By looking through my first 27-page listing of error messages, I saw that some of the messages referenced punched-card processing(!). Since I don't think those will be relevant, I simply scan for the word "card" in the message. If "card" is found, I iterate, or go back to the top of the loop and get the next record.

My next test is for specific messages that might be of great interest. These are messages that indicate that the compiler is willing to fix things up. The text of these messages includes words and phrases such as "specification ignored," "statement is inserted," "defaults to," etc. I tell REXX to examine the record, starting at position 14, for any of those phrases. If it finds the phrase, it prints the record, giving me a list of five pages of messages that I might want to change.

Old Favorites

If you run the REXX program and examine the listing, you will see many familiar error messages. These are messages that the compiler knows how to handle; you simply have to indicate that you want it to do some more work for you. For example, message QRG5094 tells you that "END operation entry is missing for a CASxx entry." It then says "an END operation entry is inserted after the CASxx." This is the type of error that can be very hard to check while editing the program; SEU does not catch that as a syntax error. A similar type of error that cannot be caught by SEU is QRG5142, "ELSE operation specified without associated IF operation." In this case, the compiler is willing to ignore the errant ELSE.
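If you decide to accept those fix-ups during development, the change is the same one shown in Figure 1, applied to each message you pick (a sketch; choose your own severity values):

 CHGMSGD MSGID(QRG5094) MSGF(QRPG/QRPGMSG) SEV(00)
 CHGMSGD MSGID(QRG5142) MSGF(QRPG/QRPGMSG) SEV(00)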

The point is, you can look through the list of error messages and pick out those that you want the compiler to allow. Even if you change the severity level of those messages to 00, the compiler will still flag the statements on the compile listing. But by letting the program compile, you can start testing sooner, rather than compiling again and again to get a clean compile, and you can use the breakpoint capabilities of OS/400 to control the program while you test.
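For example, a test session over such a dirty compile might look something like the following sketch; the program name and statement number are placeholders, so take the statement number from your own compile listing:

 STRDBG PGM(xxx/MYPGM) UPDPROD(*NO)
 ADDBKP STMT(2100)
 CALL PGM(xxx/MYPGM)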

The bottom line on using a correcting compiler is that you either like the idea or you don't. My wanting such capabilities has nothing to do with laziness and sloppy programming, but rather with the realization that programming errors are a dismal fact of life. The most annoying errors are those that the compiler knows about and can fix. I am perfectly willing to let the compiler do this, so that I can get to the actual testing of a compiled program sooner. Since I'm going to have to compile at least twice in any event, why not be able to actually use the first compile, for something more than just a laundry list of petty errors?


Figure 1: Changing a compiler error message

 CHGMSGD MSGID(QRG7030) MSGF(QRPG/QRPGMSG) SEV(00)

Figure 2: REXX program SEV001RX

 /* SEV001RX: Print compiler messages with severity 20 or 30 */
 parse arg 'MSGF(' library '/'
 parse arg '/' msgf ')'
 'dspmsgd range(*all) msgf('library'/'msgf')',
    'detail(*basic) output(*print)'
 'dltf file(qtemp/rcd132)'
 'crtpf file(qtemp/rcd132) rcdlen(132)'
 'cpysplf file(qpmsgd) tofile(qtemp/rcd132)'
 'dltsplf file(qpmsgd) splnbr(*last)'
 'ovrdbf file(stdin) tofile(qtemp/rcd132)'
 'ovrdbf file(stdout) tofile(qsysprt)'
 do forever
    parse linein data
    if data = '' then leave
    severity = substr(data,11,2)
    if severity = 20 | severity = 30 then do
       if ((pos('card',data,14) > 0) | ,
           (pos('Card',data,14) > 0)) then iterate
       if ((pos('ignored', data,14) > 0) | ,
           (pos('inserted', data,14) > 0) | ,
           (pos('efaults to', data,14) > 0) | ,
           (pos('Default length', data,14) > 0) | ,
           (pos('Defaults length', data,14) > 0)) then say data
    end
 end

Figure 3: Command PRTSEVMSG

 /* PRINT COMPILER MESSAGES WITH SEVERITY 20 OR 30 */
 PRTSEVMSG: CMD        PROMPT('Print compiler severe errors')
            PARM       KWD(MSGF) TYPE(QUAL1) MIN(1) +
                         PROMPT('Message file')
 QUAL1:     QUAL       TYPE(*NAME) LEN(10)
            QUAL       TYPE(*NAME) LEN(10) MIN(1) +
                         PROMPT('Library')

Figure 4: Command to create the PRTSEVMSG command

 CRTCMD CMD(xxx/PRTSEVMSG) +
        PGM(*REXX) +
        SRCFILE(xxx/QCMDSRC) +
        REXSRCFILE(xxx/QREXSRC) +
        REXSRCMBR(SEV001RX)