
READE and READPE Revisited


Those who forget history are doomed to repeat it, and in software, doomed to reinvent it.

 

READE and READPE are powerful RPG opcodes whose use seems to be dwindling. Part of that may just be that people don't remember them, so I thought I'd revisit them in this article.

Processing a Loop

The situation is pretty simple. You want to process a set of records based on a key. This could be listing all the open order requirements for a given line or all the shipments from a specific warehouse on a given date. At first glance, this is often the domain of SQL and its unmatched ability to do ad hoc queries. To do this with native I/O, you must have built a logical file in the correct sequence; otherwise, you have to read every record in the file. But really, the same problem exists with SQL. An ad hoc query, especially over a large database, can be very slow if there is no corresponding INDEX (the SQL equivalent of a logical file).

 

So while a truly ad hoc query (one without a pre-built access path) is very nice in one-off situations where flexibility matters a lot more than performance, in the case of a task that is performed regularly, you will need an access path whether you're writing SQL or native I/O.
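
For what it's worth, the SQL side of that access path is simply an index over the same fields. Here's a minimal sketch, using the SHIPMENTS file and key fields that appear in the examples below (the index name is made up):

  create index SHIPKEYIX
    on SHIPMENTS (SWHSFROM, SWHSTO, SITEM, SSHIPDATE);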

 

So, assuming an access path is in place, what really is the benefit of using READE or READPE over an SQL statement...or over a simple READ, for that matter?

 

READ(P)E vs. READ

 

The problem with a plain READ is that you have to do the key checking yourself. In the case of a simple key, that's not so bad. Let's take a file called ITEMS with a single key field named ICLASS. The key value you want to process is in MYCLASS. Your code looks like this:

 

  setll (MYCLASS) ITEMS;

  read ITEMS;

  dow (not %eof(ITEMS) and ICLASS = MYCLASS);

    exsr process;

    read ITEMS;

  enddo;

 

First, you use the SETLL opcode to position the file at the first record that matches your condition. Then you read through the file, checking each record to be sure it still fulfills that condition. This is very easy with one key. But it starts to get ugly with more than one key, and it ends up looking like this:

 

  setll (MYFROMWHS: MYTOWHS: MYITEM: MYDATE) SHIPMENTS;

  read SHIPMENTS;

  dow (not %eof(SHIPMENTS)

      and SWHSFROM = MYFROMWHS

      and SWHSTO = MYTOWHS

      and SITEM = MYITEM

      and SSHIPDATE = MYDATE);

    exsr process;

    read SHIPMENTS;

  enddo;

 

As you can see, the code can start to add up. However, with READE it's easier to specify the keys:

 

  setll (MYFROMWHS: MYTOWHS: MYITEM: MYDATE) SHIPMENTS;

  reade (MYFROMWHS: MYTOWHS: MYITEM: MYDATE) SHIPMENTS;

  dow (not %eof(SHIPMENTS));

    exsr process;

    reade (MYFROMWHS: MYTOWHS: MYITEM: MYDATE) SHIPMENTS;

  enddo;

 

Note that now you only have to check for end-of-file using the %eof BIF. That's because READE will return end-of-file when it hits a record whose keys don't match the key fields specified on the instruction.

 

And if you find that to be too much typing, you can hearken back to an even older technique, the key list. Although its use is frowned upon as being less "self-documenting" than the code above, it still has its place. A key list is a predefined set of fields used as a key:

 

C     SHIPKEY       KLIST                            

C                   KFLD                    MYFROMWHS

C                   KFLD                    MYTOWHS

C                   KFLD                    MYITEM

C                   KFLD                    MYDATE

 

Then you simply use the key list:

 

  setll SHIPKEY SHIPMENTS;

  reade SHIPKEY SHIPMENTS;

  dow (not %eof(SHIPMENTS));

    exsr process;

    reade SHIPKEY SHIPMENTS;

  enddo;

 

On the positive side, it's a lot less code to type, and that can be an argument in itself when you have to use the same key fields in many places in a program. Also, by defining the keys in one place, you avoid the risk of accidentally specifying different keys at two points in the program.

 

On the negative side, if you want to understand what the I/O opcodes are doing, you need to refer to the key list; hopping back and forth between the actual program logic and the key list definition can be counter-productive. It's this lack of localization that has led to the KLIST opcode being generally deprecated by the programming community.

 

All of these same issues apply to the READPE opcode; the only difference is that you use the SETGT opcode rather than SETLL to position the file and then execute READPE to read through the file in reverse sequence. (And remember, reverse sequence is not necessarily descending order; if the access path specifies descending order on a key field, then READPE will read that field in ascending sequence!)
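
In code, the backward loop mirrors the READE version. Here's a minimal sketch, assuming the same SHIPMENTS file and key fields as above:

  setgt (MYFROMWHS: MYTOWHS: MYITEM: MYDATE) SHIPMENTS;
  readpe (MYFROMWHS: MYTOWHS: MYITEM: MYDATE) SHIPMENTS;
  dow (not %eof(SHIPMENTS));
    exsr process;
    readpe (MYFROMWHS: MYTOWHS: MYITEM: MYDATE) SHIPMENTS;
  enddo;

SETGT positions the file just past the last record with an equal key, and each READPE then reads the previous record as long as its key still matches.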

 

READ(P)E vs. SQL

 

This is a tougher comparison to make. In most cases where you would use a partial key, an SQL cursor can function just as well. For example, if you're simply using a set of fields to limit the records processed in a given loop, a cursor does the job:

 

exec sql declare c1 cursor for

  select * from SHIPMENTS where

    SWHSFROM = :MYFROMWHS and

    SWHSTO = :MYTOWHS and

    SITEM = :MYITEM and

    SSHIPDATE = :MYDATE

    order by SWHSFROM, SWHSTO, SITEM, SSHIPDATE;

exec sql open c1;

exec sql fetch next from c1 into :SHIPMENTDS;

dow SQLCOD = 0;

  exsr process;

  exec sql fetch next from c1 into :SHIPMENTDS;

enddo;

exec sql close c1;

 

The proponents of self-documenting code point out that this code is pretty self-contained; all the information is there, including the order of the records and the selection criteria. That's in contrast to the native I/O environment, where the key information is specified in the logical view and the selection criteria are embedded in the program logic itself. You have to decide for yourself whether that's a better way to program. I'm an old RPG dinosaur, and I grew up with ISAM access, so the fact that the file defines my access is fine by me.

 

Also note that in both situations I've left out error-checking. I'm old-school, and I figure that if I'm getting database I/O errors, it's either because of a bad application design or an honest-to-goodness hardware problem, and in either case, a hard halt is fine with me. If you disagree with that, let me know, and I'll address it in another article. But for now, let's try to keep things simple.

 

Now, a savvy SQL proponent might ask what the "process" routine is actually doing, and if the processing is simple enough, you might be able to sidestep a lot of this by using a set-based SQL statement rather than a cursor. For example, if all you were doing was totaling quantity, you'd do something like this:

 

exec sql select sum(SQUANTITY) into :MYTOTQTY

  from SHIPMENTS where

    SWHSFROM = :MYFROMWHS and

    SWHSTO = :MYTOWHS and

    SITEM = :MYITEM and

    SSHIPDATE = :MYDATE;

 

You'd be done. Not only that, but chances are this would execute much faster than doing it via native I/O. The performance advantage often grows as you add other files joined in by foreign keys (for example, to calculate unit-of-measure conversions), because the I/O stays in the SQL engine and is executed at a lower level than the compiled program.
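
As a rough sketch of that kind of set-based join (the UOMCONV file, its fields, and the join condition here are invented purely to show the shape):

  exec sql select sum(S.SQUANTITY * U.CONVFACTOR) into :MYTOTQTY
    from SHIPMENTS S
    join UOMCONV U on U.UITEM = S.SITEM   -- hypothetical conversion file
    where S.SWHSFROM = :MYFROMWHS and
          S.SWHSTO = :MYTOWHS and
          S.SITEM = :MYITEM and
          S.SSHIPDATE = :MYDATE;

All of the multiplying and summing happens inside the SQL engine; the program gets back a single value.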

 

So why wouldn't you use SQL all the time? Well, if the processing is too complex or requires business logic that isn't easily expressed in SQL, you may choose not to use it. And if the processing involves doing things other than computations, such as writing to other files or calling other programs, then an aggregate SQL statement is usually not the best solution.

 

But that still leaves the SQL cursor approach. Where does it fall short? The most typical case is when you need to break out of the subset. By that, I mean situations where the user wants a set of records based on some criteria but then wants to see records just outside that boundary. A typical example is the case above, where the last key field is the ship date. The user may want to see a list of records for May 23, but after looking at those records, may want to "page back" and see the data just prior to May 23. With an SQL cursor, the selection criteria define a hard boundary; the cursor simply does not contain records outside it, so it cannot be positioned to records prior to the selection date. The only recourse is to create another cursor, and that can be expensive.
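
This is exactly where READPE with a partial key earns its keep. Here's a minimal sketch of that "page back," using the same SHIPMENTS keys as above (the count, pageSize, and showRecord names are made up for illustration):

  // Position at the first record for the selected date, then read
  // backward through earlier dates for the same warehouses and item.
  setll (MYFROMWHS: MYTOWHS: MYITEM: MYDATE) SHIPMENTS;
  readpe (MYFROMWHS: MYTOWHS: MYITEM) SHIPMENTS;
  dow (not %eof(SHIPMENTS) and count < pageSize);
    exsr showRecord;
    count += 1;
    readpe (MYFROMWHS: MYTOWHS: MYITEM) SHIPMENTS;
  enddo;

Because the READPE specifies only the first three key fields, it keeps reading backward through earlier ship dates and stops only when the warehouse or item changes.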

 

How expensive? Expensive enough to make it worthwhile to use traditional indexed access rather than SQL? That depends on the cursor, and in many cases the two techniques may yield very similar results, at which point you have a decision to make. And when comparing two different programming techniques that yield the same results, the intangibles sometimes play a larger part than the actual mechanics. For example, when the two approaches are otherwise equal, I prefer the simplicity of indexed access over the wordiness of SQL. I also prefer RPG over COBOL; that's just the way I am.

 

But before you get to the intangibles, you need to understand the mechanics, and this article has shown you how the approaches compare.

Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.


